# AI Hallucination

> When an AI model generates information that sounds plausible but is factually incorrect, fabricated, or nonsensical.

Category: Fundamentals
Source: https://chipp.ai/ai/glossary/ai-hallucination

AI hallucination occurs when a language model generates content that sounds confident and plausible but is actually incorrect, fabricated, or nonsensical. The model isn't intentionally lying; it's producing statistically likely text that happens to be wrong.

Common types of hallucination include:

- **Fabricated facts**: inventing statistics, dates, or events
- **False attributions**: attributing quotes to the wrong people
- **Non-existent references**: citing papers or sources that don't exist
- **Confident but wrong answers**: providing incorrect information with high confidence
- **Logical inconsistencies**: contradicting itself within the same response

Hallucinations happen because language models are trained to predict the most likely next tokens, not to verify truth. They have no built-in fact-checking mechanism and no concept of "truth", only patterns learned from training data.

Strategies to reduce hallucination include:

- **Retrieval-Augmented Generation (RAG)**: grounding responses in verified source documents
- **Temperature control**: lower temperatures produce more conservative outputs
- **Explicit instructions**: telling the model to say "I don't know" when uncertain
- **Fact-checking tools**: using external verification in the pipeline
- **Domain-specific fine-tuning**: training on verified, high-quality data

For AI agent builders, minimizing hallucination is critical. Users trust AI agents with important tasks, and incorrect information can have real consequences. This is why knowledge bases and RAG are essential components of production AI agents.
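Two of the strategies above, RAG-style grounding and explicit "I don't know" instructions, can be combined at the prompt level. The sketch below is a minimal illustration, not a production retriever: `retrieve` is a hypothetical keyword-overlap ranker standing in for a real vector search, and the prompt wording is an assumption rather than a recommended template.

```python
def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the question.
    Real RAG systems use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that (1) grounds the model in retrieved context and
    (2) explicitly permits an "I don't know" answer when the context is silent."""
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using ONLY the context below. "
        'If the context does not contain the answer, reply "I don\'t know."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


# Example knowledge base (invented for illustration).
docs = [
    "Chipp lets you build AI agents with no code.",
    "Temperature controls the randomness of model outputs.",
]

prompt = build_grounded_prompt("What does temperature control?", docs)
print(prompt)
```

The resulting prompt would then be sent to the model; because the instructions restrict answers to the retrieved context and offer an explicit opt-out, the model has a sanctioned alternative to guessing.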
## Related Terms

- [Retrieval-Augmented Generation (RAG)](https://chipp.ai/ai/glossary/retrieval-augmented-generation.md): A technique that enhances AI responses by retrieving relevant information from external knowledge sources before generating an answer.
- [Knowledge Base](https://chipp.ai/ai/glossary/knowledge-base.md): A structured collection of information that AI systems can search and reference to provide accurate, domain-specific answers.
- [Temperature](https://chipp.ai/ai/glossary/temperature.md): A parameter that controls the randomness and creativity of AI model outputs, ranging from deterministic (low) to creative (high).
- [AI Safety](https://chipp.ai/ai/glossary/ai-safety.md): The field focused on ensuring AI systems behave as intended, avoid harmful outputs, and remain under human control.

---

This term is part of the [Chipp AI Glossary](https://chipp.ai/ai/glossary), a reference of AI concepts written for builders and businesses. Build AI agents with no code at https://chipp.ai.