AI hallucination occurs when an AI model generates information that sounds plausible but is factually wrong or entirely made up. ChatGPT invents a court case that doesn't exist. Claude cites a research paper that was never published. The model presents fiction as fact with total confidence. Hallucinations happen because language models predict what sounds right, not what is true. This is a critical problem for AI in production: always verify AI outputs, especially facts, citations, and code.
You don't "use" hallucinations; you guard against them. Assume any AI output could be hallucinated. Several techniques reduce them: provide source material (RAG), ask the model to cite its sources, use a lower temperature for factual tasks, break complex tasks into steps, or add verification steps (e.g., code execution, fact-checking). This matters most for customer-facing AI, legal and medical applications, and anything else where accuracy is non-negotiable. A sketch combining grounding, citation prompting, and lower temperature follows below.
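Here is a minimal sketch of three of those techniques together, using the OpenAI Python SDK: ground the model in retrieved source material, instruct it to cite that material or admit ignorance, and lower the temperature. The model name and the hard-coded passage are placeholders; in a real RAG pipeline the passage would come from your own retrieval step, and the same idea applies to any other provider's API.

```python
# Sketch: reduce hallucination risk by grounding a factual query in source
# material, asking for citations, and lowering temperature.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Source material for the user's question (hypothetical passage; in a real
# RAG setup this comes from your retrieval step, not a hard-coded string).
retrieved_passage = (
    "Acme Corp was founded in 1987 in Portland, Oregon, "
    "and released its first product, the RoadRunner 100, in 1991."
)

question = "When was Acme Corp founded and what was its first product?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0.2,      # lower temperature for factual tasks
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the provided source material and cite the "
                "passage you used. If the answer is not in the source "
                "material, say you don't know instead of guessing."
            ),
        },
        {
            "role": "user",
            "content": f"Source material:\n{retrieved_passage}\n\nQuestion: {question}",
        },
    ],
)

print(response.choices[0].message.content)
```

Even with these guards, treat the output as a claim to verify, not a fact: check the cited passage actually supports the answer before showing it to a user.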