Do you know the world record for crossing the English Channel on foot? Or the last time someone carried the Golden Gate Bridge across Egypt?
These are, without a doubt, ridiculous questions. Still, when users asked chatbots for the answers, the models produced legitimate-sounding responses. Those answers are AI hallucinations, a persistent problem in AI-generated text. A recent addition to Microsoft’s AI toolset aims to curb the hallucination problem by fact-checking AI text.
More Reliable Documents With the Corrections Tool
Of course, no one has ever walked across the English Channel. However, when businesses use natural language processing and generative AI tools for critical functions, there’s no room for factual mistakes.
Microsoft’s new tool, called Corrections, takes a direct approach to these mistakes. Corrections reviews chatbot responses to identify questionable claims, then cross-references them against known reliable sources to confirm or revise the text accordingly.
Introducing fact-checking capabilities into commonly used tools is a way of addressing the inherent limitations of AI models. Chatbots don’t actually “know” anything; they rely on patterns learned during training to predict which word comes next. The results range from spot-on to complete nonsense.
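To see why, consider a deliberately tiny sketch of next-word prediction. This is not how a production LLM works internally (real models use neural networks over tokens, not bigram counts, and the text here is invented for illustration), but it shows the core point: the model extends statistical patterns with no notion of truth.

```python
# Toy illustration of next-word prediction: the "model" picks the next word
# based purely on how often words followed each other in its training text.
import random
from collections import Counter, defaultdict

training_text = (
    "the channel crossing record was set by a swimmer . "
    "the channel crossing was completed on foot ."  # a falsehood in the data
)

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    choices, weights = zip(*follows[prev].items())
    return random.choices(choices, weights=weights)[0]

# Generate text: the model happily repeats the false "crossed on foot" claim
# because it only knows word statistics, not facts.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

If a falsehood appears in the training text, the model reproduces it as fluently as any fact, which is exactly the behavior Corrections is meant to catch.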
Corrections works in real time to reduce the likelihood of misleading, inaccurate, or fabricated text. If it detects details that appear wrong, it automatically cross-references them against user-provided grounding documents and revises the text. The result, theoretically, is a more trustworthy document.
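Conceptually, the check resembles the sketch below: compare each generated claim against the grounding material and flag whatever isn’t supported. This toy version uses naive word overlap purely for illustration; Microsoft’s actual feature uses language models to judge groundedness, and the threshold, helper names, and sample text here are all invented.

```python
# Drastically simplified sketch of a grounding check: flag generated
# sentences whose content words mostly don't appear in the grounding
# document. Real groundedness detection is model-based, not string-based.

GROUNDING_DOC = (
    "The fastest verified swim across the English Channel took under "
    "seven hours. No one has ever crossed the Channel on foot."
)

STOPWORDS = {"the", "a", "an", "on", "in", "of", "to", "for", "has", "is", "was"}

def content_words(text: str) -> set[str]:
    # Lowercase, strip simple punctuation, and drop filler words.
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def is_grounded(claim: str, source: str, threshold: float = 0.6) -> bool:
    """Treat a claim as grounded if most of its content words appear in the source."""
    claim_words = content_words(claim)
    overlap = claim_words & content_words(source)
    return len(overlap) / max(len(claim_words), 1) >= threshold

generated = [
    "The fastest swim across the English Channel took under seven hours.",
    "The world record for crossing the Channel on foot was set in 1972.",  # hallucinated
]

for sentence in generated:
    verdict = "grounded" if is_grounded(sentence, GROUNDING_DOC) else "flag for revision"
    print(f"[{verdict}] {sentence}")
```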
You Still Need To Review Documents for Accuracy
Although Microsoft AI advancements like Corrections are a step in the right direction, they’re imperfect. Microsoft itself admits in the tool’s announcement that Corrections better aligns AI-generated text with grounding documents but does not guarantee accuracy. If the material used to train the chatbot, or the grounding documents themselves, contains errors, Corrections will not identify or correct them.
Because of these limitations, experts warn that Microsoft’s AI tool, and others like it, can give users a false sense of security. Relying on computer-generated content that contains bad information can have dire consequences, so humans must review the text to ensure its veracity.
Preserving Confidentiality in AI-Generated Text
Privacy and confidentiality are concerns when creating documents in the enterprise environment, especially when supplying grounding documents for fact-checking. Evaluations, another new tool, proactively assesses risk, while confidential inference keeps sensitive information secure and private as chatbots do their work.
Microsoft’s AI tools are part of the company’s push to improve trust in machine learning and increase its adoption. Factual errors, misinformation, and deepfakes create widespread mistrust of the technology. Improving accuracy and reliability is a priority, with billions of dollars at stake.
The Corrections tool is built into the Azure AI Content Safety API. It works with current text-generating models, including the latest version of ChatGPT and Meta’s Llama. Users get groundedness checks for up to 5,000 text records monthly before incurring an extra charge.
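For developers, a groundedness check is a single REST call. The sketch below is based on the public preview of the Content Safety API at the time of writing; the endpoint path, API version string, and request fields (including the correction flag) may have changed since, so treat them as assumptions and verify against the current Azure documentation.

```python
# Sketch of a groundedness check with correction via the Azure AI Content
# Safety REST API. Endpoint path, api-version, and field names follow the
# public preview at the time of writing and may differ -- check the docs.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"  # placeholder

response = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # preview version; subject to change
    headers={
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    },
    json={
        "domain": "Generic",
        "task": "QnA",
        "qna": {"query": "What is the record for crossing the English Channel on foot?"},
        "text": "The record for crossing the English Channel on foot is 11 hours.",
        "groundingSources": [
            "The English Channel has only ever been crossed by swimming, "
            "boat, or air; it cannot be crossed on foot."
        ],
        "correction": True,  # ask the service to propose a grounded rewrite
    },
    timeout=30,
)
response.raise_for_status()
# The response flags ungrounded text and, when correction is requested,
# includes the suggested rewrite.
print(response.json())
```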