
What is Hallucination (AI)?

AI hallucination is when an AI model generates information that sounds plausible but is factually wrong or completely made up. ChatGPT invents a court case that doesn't exist. Claude cites a research paper that was never published. The AI presents fiction as fact with total confidence. Hallucinations happen because models predict what sounds right, not what is true. This is a critical problem for AI in production: always verify AI outputs, especially facts, citations, and code.

When Should You Use This?

You don't "use" hallucinations—you guard against them. Assume all AI outputs could be hallucinated. Use techniques to reduce them: provide source material (RAG), ask AI to cite sources, use lower temperature for factual tasks, break complex tasks into steps, or add verification steps (e.g., code execution, fact-checking). Critical for customer-facing AI, legal/medical applications, or anything where accuracy matters.

Common Mistakes to Avoid

  • Trusting AI blindly—always verify facts, citations, calculations, and code (see the citation-check sketch after this list)
  • No source grounding—without RAG or provided context, AI fills gaps with hallucinations
  • High temperature for facts—use temperature 0.1-0.3 for factual tasks
  • Complex tasks—AI hallucinates more on multi-step reasoning; break the task into smaller steps
  • No human review—hallucinated output sounds confident, so human review is often the only thing that catches it
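One lightweight verification step for the first two mistakes above is to check that every sentence the model claims to quote actually appears in the source you gave it. The sketch below uses a naive normalized-substring match; the function name and example text are invented for illustration, and a production system would likely add fuzzy or semantic matching.

    def unsupported_citations(citations: list[str], source_text: str) -> list[str]:
        """Return the citations that cannot be found verbatim in the source text."""
        normalized_source = " ".join(source_text.lower().split())
        missing = []
        for citation in citations:
            normalized = " ".join(citation.lower().split())
            if normalized not in normalized_source:
                missing.append(citation)
        return missing

    # The model quoted two sentences, but only the first is actually in the source.
    source = "Refunds for digital goods are available within 14 days of purchase."
    citations = [
        "Refunds for digital goods are available within 14 days of purchase.",
        "Physical goods can be returned within 90 days.",  # hallucinated quote
    ]
    print(unsupported_citations(citations, source))
    # ['Physical goods can be returned within 90 days.']

Anything this check flags still goes to a human reviewer; it narrows the search rather than replacing review.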

Real-World Examples

  • ChatGPT lawyer cited fake court cases—led to sanctions
  • AI history tutor invented historical events that never happened
  • Code generation—AI creates functions that don't exist in libraries (see the check sketched after this list)
  • Customer support AI—makes up company policies that don't exist
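For the code-generation case above, a cheap guard is to confirm that every module attribute the generated code calls actually exists before running or shipping it. This sketch uses only the Python standard library; json.parse_fast is an invented name of the kind a model might hallucinate.

    import importlib

    def symbol_exists(module_name: str, attribute: str) -> bool:
        """Return True if module_name imports and exposes the given attribute."""
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            return False
        return hasattr(module, attribute)

    print(symbol_exists("json", "loads"))       # True: real function
    print(symbol_exists("json", "parse_fast"))  # False: plausible-sounding but made up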

Category

AI Vocabulary

Tags

hallucination, ai-reliability, llm, prompt-engineering, ai-safety
