When an AI model generates plausible-sounding but factually incorrect information with unwarranted confidence.
Hallucination is the tendency of LLMs to generate convincing but false information. The model does not distinguish between what it "knows" and what it invents — everything comes from the same statistical process of predicting the next most likely token. This makes hallucinations particularly dangerous because they sound authoritative.
Common hallucination patterns include fabricating citations, inventing API methods that do not exist, generating plausible-sounding statistics, and confidently answering questions outside the model's knowledge. The problem is worse for rare or specialized topics where the model has less training data.
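One of these patterns, invented API methods, can be caught mechanically. The sketch below is a minimal, hypothetical check (names are illustrative, not from the original text): it verifies that a dotted attribute path an LLM suggests actually exists on a Python module before the suggestion is trusted.

```python
import importlib


def method_exists(module_name: str, attr_path: str) -> bool:
    """Return True only if the dotted attribute path exists on the module.

    Catches hallucinated APIs: an LLM may suggest `json.parse`
    (a JavaScript idiom) when Python's real method is `json.loads`.
    """
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return False
    for part in attr_path.split("."):
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True


print(method_exists("json", "loads"))  # True: real Python API
print(method_exists("json", "parse"))  # False: hallucinated method
```

A check like this only validates existence, not correct usage, but it cheaply filters the most common form of invented API.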
Mitigating hallucination requires systematic approaches, not just hoping the model gets it right. RAG (retrieval-augmented generation) grounds responses in real documents. Tool use lets agents verify facts (checking documentation, running code, querying APIs). Multi-agent review catches errors: one agent's output is verified by another. At Agentik {OS}, our quality pipeline includes automated verification: code is compiled and tested, facts are checked against source documents, and outputs are cross-validated before delivery. Eliminating hallucination entirely is impossible, but systematic mitigation makes it rare.
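Two of these verification layers can be sketched in a few lines. The code below is an illustrative toy, not the actual Agentik {OS} pipeline: a compile gate rejects generated code that does not parse, and a deliberately naive grounding check accepts a claim only if it literally appears in a source document (production systems would use retrieval and entailment models instead of substring matching).

```python
def code_compiles(source: str) -> bool:
    """First gate: reject generated code that does not even parse."""
    try:
        compile(source, "<generated>", "exec")
        return True
    except SyntaxError:
        return False


def grounded(claim: str, documents: list[str]) -> bool:
    """Naive grounding check: every sentence of the claim must appear
    (case-insensitively) in at least one source document."""
    docs = [d.lower() for d in documents]
    sentences = [s.strip().lower() for s in claim.split(".") if s.strip()]
    return all(any(s in d for d in docs) for s in sentences)


docs = ["The API rate limit is 100 requests per minute."]
print(code_compiles("x = 1 + 1"))                                     # True
print(code_compiles("def broken(:"))                                  # False
print(grounded("The API rate limit is 100 requests per minute", docs))  # True
print(grounded("The API rate limit is 500 requests per minute", docs))  # False
```

The value of chaining such gates is that each one is cheap and independent: a hallucinated claim must slip past every layer before it reaches the user.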
Want to see AI agents in action?
Book a Demo