
    December 2025
    3 min read
    By Kristof Leroux, Founder & CEO
    Technology

    Why Large Language Models Hallucinate and Make Mistakes

    [Image: Neural network visualization showing AI hallucinations and errors]
    Tags: AI, LLM, ChatGPT, Hallucinations, Machine Learning

    Large Language Models (LLMs) like ChatGPT, Google's Bard, or Meta's Llama have made astonishing advances in generating human-like text. Yet even these powerful AI systems often make mistakes or "hallucinate" information – that is, they sometimes output confident-sounding statements that are factually incorrect or even entirely fabricated.

    What Are "Hallucinations" in LLMs?

    In the context of AI, hallucination refers to a model generating content that appears plausible and confident but is not grounded in truth. In other words, the AI "makes stuff up." This could mean inventing facts, creating fake citations or quotes, or outputting details that were never in its training data.

    OpenAI defines hallucinations as instances where a language model "confidently generates an answer that isn't true," often giving a detailed response that just isn't based on reality.

    Why Do LLMs Hallucinate? Technical Reasons

    Hallucinations stem from how LLMs are built and trained. These models learn to generate text by predicting the most likely next word (or token) in a sequence, based on patterns learned from huge amounts of data.

    Next-Token Prediction vs. Truth: During training, an LLM sees countless examples of text and learns statistical regularities. Crucially, it doesn't learn true vs. false – it only learns what words tend to follow each other.
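
    The mechanism above can be sketched in a few lines of Python. The vocabulary and logits here are invented for illustration; a real model scores tens of thousands of tokens, but the principle is the same: convert scores into probabilities and emit the most likely continuation, with no truth check anywhere in the loop.

```python
import math

# Toy vocabulary and hypothetical logits a model might assign to each
# candidate next token after the prompt "The capital of France is".
# These numbers are illustrative, not taken from any real model.
vocab = ["Paris", "London", "Lyon", "blue"]
logits = [5.0, 2.0, 1.5, -1.0]

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The model simply emits the most probable token; nothing in this
# step verifies whether the resulting statement is actually true.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "Paris"
```

    If the training data had contained mostly wrong continuations, the same procedure would confidently emit the wrong answer, which is exactly how hallucinations arise.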

    Lack of Grounding in Reality: An LLM does not have an external knowledge base or direct connection to the world; it relies entirely on patterns in text.

    Rare or Unpredictable Information: Language models excel at learning frequent patterns, but for long-tail facts or highly specific information there may be no clear pattern to learn, so the model falls back on statistically plausible guesses.

    Real-World Examples of LLM Hallucinations

    Google's Bard Chatbot: In February 2023, Bard claimed that JWST "took the very first pictures of a planet outside our solar system." This was factually incorrect – the first image of an exoplanet was captured by the European Southern Observatory's Very Large Telescope in 2004, years before JWST launched.

    ChatGPT Inventing Legal Cases: In 2023, a lawyer in New York admitted to using ChatGPT to help write a court filing in the case Mata v. Avianca. ChatGPT fabricated several case citations – it produced references to past court decisions that simply did not exist, and the court sanctioned the lawyers involved.

    Airline Chatbot Making Up Policy: Air Canada's customer-service chatbot told a passenger he could request a bereavement fare refund after travel, contradicting the airline's actual policy; in 2024 a Canadian tribunal held Air Canada liable for its chatbot's misinformation.

    How Are Hallucinations Being Mitigated?

    Retrieval-Augmented Generation (RAG): One effective approach is to ground the AI's responses in real data by giving it access to reference materials at query time.
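
    The RAG flow can be sketched as below. The two-document knowledge base and the keyword-overlap retriever are deliberately naive stand-ins (production systems typically use vector embeddings); the point is only the shape of the pipeline: retrieve relevant text, then build a prompt that instructs the model to answer from it.

```python
# Hypothetical mini knowledge base; real systems index thousands of documents.
documents = [
    "Air Canada refund requests must be filed within 24 hours of booking.",
    "The James Webb Space Telescope launched in December 2021.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using ONLY this context:\n{ctx}\n\nQuestion: {query}"

query = "When did the James Webb Space Telescope launch?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)
```

    The resulting prompt would then be sent to the LLM, which is far less likely to invent a date when the correct one is sitting in its context window.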

    Fine-Tuning and Domain Adaptation: Fine-tuning an LLM on high-quality, domain-specific data can reduce errors in that domain.
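
    Concretely, supervised fine-tuning usually starts from a file of domain-specific examples. The JSON Lines chat format below mirrors what several fine-tuning APIs accept, but the exact schema varies by provider, and the two policy examples are invented for illustration; note the second one deliberately teaches the model to admit uncertainty.

```python
import json

# Hypothetical domain-specific training examples for supervised fine-tuning.
examples = [
    {"messages": [
        {"role": "user", "content": "What is our refund window?"},
        {"role": "assistant",
         "content": "Refund requests must be filed within 24 hours of booking."},
    ]},
    {"messages": [
        {"role": "user", "content": "Do we offer bereavement fares?"},
        {"role": "assistant",
         "content": "I don't have that policy on file, so I can't confirm it."},
    ]},
]

# JSON Lines: one training example per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```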

    Encouraging "Know-when-you-don't-know": A straightforward but powerful mitigation is to explicitly allow or train the model to say it doesn't know or to refuse to answer when uncertain.
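
    This idea can be sketched as a simple confidence gate, assuming the model exposes a probability for its top answer (many APIs return token log-probs from which one can be derived). The 0.7 threshold and the helper function are illustrative choices, not a standard.

```python
def answer_or_abstain(answer: str, confidence: float,
                      threshold: float = 0.7) -> str:
    """Return the model's answer only if its confidence clears a threshold;
    otherwise abstain rather than risk stating a hallucination as fact."""
    if confidence < threshold:
        return "I'm not sure; please verify this with an authoritative source."
    return answer

# A high-confidence answer passes through; a shaky one is withheld.
print(answer_or_abstain("JWST launched in December 2021.", 0.95))
print(answer_or_abstain("The first exoplanet image was taken by JWST.", 0.40))
```

    In practice this behavior is also trained into the model itself, rewarding "I don't know" over confident fabrication, but a post-hoc gate like this one captures the core idea.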

    Citing Sources and Verification: To increase transparency and accuracy, some AI systems now provide citations or ask the model to verify its statements.

    Conclusion

    Hallucinations in large language models are a significant challenge, but not an insurmountable one. They arise from the fundamental nature of how these AI systems work, but through a combination of smarter training, model enhancements, and user-side safeguards, their frequency can be reduced.

    As the saying goes in the AI world: "Trust, but verify." These systems are incredibly useful, but a healthy dose of skepticism and verification will remain essential as we navigate the era of AI-generated information.
