Hallucination
Hallucination occurs when an AI model states something that sounds plausible but is wrong or fabricated, often because it has no ground truth to rely on.
In Simple Terms
Think of it as a very confident person who sometimes fills in gaps with plausible-sounding guesses.
Detailed Explanation
Models generate text by predicting the next token; they have no built-in store of verified facts, so they can invent names, dates, or citations. Retrieval-augmented generation (RAG), grounding, and careful prompting reduce hallucination but do not eliminate it. It matters most in legal, medical, or financial contexts, where errors have real consequences. Common mistakes include trusting long outputs without verification and assuming that newer models never hallucinate.
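The grounding idea mentioned above can be sketched in a few lines: instead of asking the model a question cold, the prompt bundles in retrieved passages and instructs the model to answer only from them, which narrows the space for invented facts. This is a minimal illustration, not a full RAG pipeline; the function name and the example passages are hypothetical.

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that restricts the model to the given passages.

    A minimal sketch of grounding: numbered sources are inlined and the
    model is told to refuse when the answer is not present in them.
    """
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered sources below. "
        'If the answer is not in the sources, reply "I don\'t know."\n\n'
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Hypothetical retrieved passages for illustration only.
passages = [
    "The Eiffel Tower was completed in 1889.",
    "It was built for the 1889 World's Fair in Paris.",
]
prompt = build_grounded_prompt("When was the Eiffel Tower completed?", passages)
print(prompt)
```

The refusal instruction is the important part: without an explicit "I don't know" escape hatch, a model prompted with irrelevant sources will often guess anyway.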
Related Terms
Artificial Intelligence
The simulation of human intelligence processes by machines, especially computer systems.
Machine Learning
A subset of AI that enables systems to learn and improve from experience without being explicitly programmed.
Bias in AI
Bias in AI is systematic error or unfairness in how a model treats individuals or groups, often reflecting skewed data or flawed design. It can worsen existing inequalities if left unchecked.