Microsoft launches tool to identify and fix hallucinated content in AI results

Azure AI Content Safety is a Microsoft service that detects harmful user-generated and AI-generated content in applications and services. It offers both text and image APIs, which developers can use to flag unwanted material.

The Groundedness detection API, part of Azure AI Content Safety, can determine whether responses from large language models are grounded in user-selected source materials. Because current large language models sometimes produce inaccurate or non-factual information, often referred to as hallucinations, this API helps developers recognize such inaccuracies in AI outputs.

Recently, Microsoft announced a preview of a correction capability that lets developers detect and correct hallucinated content in real time, so that end users receive AI-generated content that is more reliably grounded in the source material.

Here’s how the correction feature operates (a rough code sketch follows the list):

  • The application developer activates the correction capability.
  • Upon detecting an ungrounded sentence, a new request is sent to the generative AI model for a correction.
  • The large language model evaluates the ungrounded sentence against the grounding document.
  • Sentences lacking content relevant to the grounding document may be completely filtered out.
  • If content is found in the grounding document, the foundation model rewrites the ungrounded sentence to align with the document.
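
The article itself includes no code, but based on the public preview, a call to the Groundedness detection API with correction enabled might look roughly like the sketch below. The endpoint path, API version, and field names (text, groundingSources, correction, correctionText) are assumptions to verify against the current Azure AI Content Safety REST reference, and the correction capability may also require additional configuration such as a connected Azure OpenAI resource.

```python
import requests

# Assumed placeholders: replace with your Azure AI Content Safety resource details.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

# Endpoint path, API version, and payload shape are assumptions based on the
# public preview; check the current Azure AI Content Safety docs before use.
url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"

payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "How many days of annual leave do employees get?"},
    # The LLM answer to check for ungrounded (hallucinated) sentences.
    "text": "Employees receive 30 days of annual leave per year.",
    # The source material the answer should be grounded in.
    "groundingSources": [
        "Full-time employees are entitled to 10 days of annual leave per year."
    ],
    # Assumed flag enabling the preview correction capability, so the service
    # rewrites ungrounded sentences instead of only flagging them.
    "correction": True,
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
result = response.json()

# Assumed response fields: whether ungrounded content was found and a
# corrected version of the text aligned with the grounding source.
print("Ungrounded detected:", result.get("ungroundedDetected"))
print("Corrected text:", result.get("correctionText"))
```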

In addition to the correction feature, Microsoft has introduced the public preview of hybrid Azure AI Content Safety (AACS), which allows developers to run content safety checks both in the cloud and on-device. The AACS Embedded SDK enables real-time content safety checks directly on devices, even without an internet connection.
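
The on-device API of the Embedded SDK isn't detailed here. For orientation, a cloud-side text check with the existing azure-ai-contentsafety Python package looks roughly like the sketch below; the Embedded SDK is expected to perform the same kind of check locally, though its exact interface may differ.

```python
# Cloud-side text check with the azure-ai-contentsafety package
# (pip install azure-ai-contentsafety).
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Assumed placeholders: substitute your resource endpoint and key.
client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

# Analyze a piece of user- or AI-generated text for harmful content.
result = client.analyze_text(AnalyzeTextOptions(text="Text to screen goes here."))

# Each entry reports a harm category (e.g. Hate, Violence) and a severity score.
for item in result.categories_analysis:
    print(item.category, item.severity)
```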

Moreover, Microsoft unveiled the preview of Protected Materials Detection for Code, which can be used with generative AI applications that produce code to detect whether the LLM has generated protected code. Initially accessible only through the Azure OpenAI Service, the feature can now be integrated with other code-generating AI models.
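
A hypothetical sketch of screening model-generated code for protected material is shown below. The endpoint name, request shape, and response field are guesses rather than confirmed API details; consult the current preview documentation before relying on them.

```python
import requests

# Assumed placeholders and endpoint; verify against the current
# Azure AI Content Safety REST reference.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

url = (
    f"{ENDPOINT}/contentsafety/text:detectProtectedMaterialForCode"
    "?api-version=2024-09-15-preview"
)

# The code produced by the generative AI model that you want to screen.
generated_code = "def quicksort(arr): ..."

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"},
    json={"code": generated_code},
)

# Assumed response field indicating whether protected code was detected
# in the model output.
print(response.json().get("protectedMaterialAnalysis"))
```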

These updates significantly enhance the reliability and accessibility of AI content moderation technologies, fostering safer and more trustworthy AI applications across a variety of platforms and environments.
