Core Concepts

What is AI Hallucination?

When an AI model generates plausible-sounding but factually incorrect information.

Definition

An AI hallucination occurs when a language model generates information that sounds convincing and fluent but is factually wrong, fabricated, or unsupported. Hallucinations happen because LLMs predict statistically likely text rather than retrieving verified facts. They can invent citations, make up statistics, or confidently state incorrect information.

💡 Example

If you ask an AI about a specific court case and it generates a detailed summary with a case number that does not exist, that is a hallucination. The AI produced text that followed the pattern of legal writing but contained fabricated details.
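One practical response is to verify any checkable detail before trusting it. The sketch below is purely illustrative: the KNOWN_CASES index and the citation pattern are hypothetical stand-ins for a lookup against an authoritative legal database, not a real one. It flags model-cited case numbers that do not appear in a verified index:

```python
import re

# Hypothetical verified index of real case citations; in practice this would
# be a query against an authoritative legal database, not a hard-coded set.
KNOWN_CASES = {"410 U.S. 113", "347 U.S. 483"}

def extract_case_citations(text: str) -> list[str]:
    """Pull U.S. Reporter-style citations (e.g. '410 U.S. 113') from model output."""
    return re.findall(r"\b\d{1,4} U\.S\. \d{1,4}\b", text)

def flag_unverified_citations(model_output: str) -> list[str]:
    """Return citations that do not exist in the verified index.

    Any citation not found is a candidate hallucination: fluent and
    correctly formatted, but not backed by a real record.
    """
    return [c for c in extract_case_citations(model_output)
            if c not in KNOWN_CASES]

output = "The court reaffirmed this doctrine in Smith v. Jones, 999 U.S. 999."
print(flag_unverified_citations(output))  # ['999 U.S. 999'] -> likely fabricated
```

The key design point is that the check runs against a source of truth outside the model; the model's own fluency is never treated as evidence of accuracy.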

Related concepts

LLM (Large Language Model)

A type of AI trained on massive text datasets to understand and generate human language.

RAG (Retrieval-Augmented Generation)

A technique that lets AI access external knowledge bases to provide more accurate answers.

Grounding

Connecting AI outputs to verified sources of information to reduce hallucinations; a minimal sketch follows this list.

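To make the RAG and grounding concepts above concrete, here is a minimal sketch. Everything in it is an illustrative assumption: the two-document corpus, the naive keyword-overlap retriever, and the prompt template stand in for a real vector store and a real LLM call.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant
# passages, then ground the model by inlining them into the prompt.
CORPUS = {
    "doc1": "Roe v. Wade, 410 U.S. 113 (1973), addressed abortion rights.",
    "doc2": "Brown v. Board of Education, 347 U.S. 483 (1954), ended school segregation.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embedding similarity."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(CORPUS.values(), key=score, reverse=True)[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that restricts the model to retrieved sources.

    Instructing the model to answer only from supplied passages, and to
    admit when they are insufficient, narrows the room for fabrication.
    """
    context = "\n".join(retrieve(question))
    return (
        f"Answer using ONLY the sources below. If they do not contain "
        f"the answer, say you do not know.\n\nSources:\n{context}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("What case ended school segregation?"))
```

Because the answer is anchored to retrieved text rather than the model's statistical memory, a fabricated case number like the one in the example above becomes much less likely, and easier to audit when it does occur.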
