All AI models, especially large language models (LLMs), are prone to hallucinating—that is, they sometimes give wrong or fictitious responses that appear correct. Experts recommend providing unambiguous prompts and cross-checking responses before making decisions based on them.
If you regularly use AI models, you might have encountered instances where they confidently deliver incorrect, nonsensical, or fabricated answers, much like a person too proud to admit that they do not know something!
For example, when asked about a spiritual magazine, a model may associate it with a popular guru or cult with which it is not even remotely connected. Or, when asked to list the top 10 models of a product, it might include models that have been discontinued. Even worse, it might simply fabricate the information when asked for a legal case precedent or a patient’s medical records!
Sometimes, the answer is so obviously nonsensical that the user is unlikely to be misled. But at other times, the chatbot sounds so confident that the user might proceed on the assumption that the answer is correct—and ultimately land in trouble because of that. This is precisely what happened when a legal team presented case precedents suggested by ChatGPT, which turned out to be fictitious! They had to contend with public disapproval and hefty fines imposed by the court.
In another incident, Air Canada had to partially compensate a customer who was misled by its AI chatbot’s fabricated information about the company’s bereavement travel policies. There have been several other examples of generative AI models providing slanderous information about celebrities, nonexistent bank transactions, imaginary patient records, and incorrect company policies.
Such cases where AI models present factually incorrect, illogical, or fabricated responses—often in a very confident tone—are known as AI hallucinations.
Hallucinations are not new to AI—they have existed since before LLMs became popular. But now, we are more aware of them because the AI user base is growing, and more cases are in the limelight.
“We tend to talk about hallucinations after the emergence of LLMs. It is important to note that LLMs are not the only form of AI models. LLMs are models that approximate language, i.e., they are trained on a large corpus of text to generate text in human-readable form with the right grammar structure. This means to users that LLMs are trained to generate text that ‘reads well’ as a response to one’s query. They are not trained to provide factually correct answers! British statistician George Box summarises this in his quote: ‘All models are wrong, but some are useful’,” remarks Divya Venkatraman, founder & CTO of DeepDive Labs, a Singapore-based tech startup providing bespoke data and AI services in the APAC region.

With the rise in popularity of LLMs, they are being applied to a wide range of tasks, from writing stories and generating visualisations to answering questions on almost any topic. Venkatraman explains that since LLMs fundamentally approximate language, they generate text that reads well but might not be accurate for all use cases.
This phenomenon is known as the hallucination of language models. Traditional AI models also experienced hallucinations, but these were referred to as ‘errors’ and were quantified using different metrics.

Why Do AI Models Hallucinate?
Most AI models tend to hallucinate because generative AI is inherently probabilistic rather than semantic: it predicts what is statistically likely, not what is verifiably true. These models are trained on data from the internet, which contains both factual information and a significant amount of misinformation. Additionally, many AI models have knowledge cut-offs or lack proper grounding during response generation. Here is a quick run-through of the potential causes of AI hallucinations:
Quality of Training Data
The well-known rule of thumb, ‘garbage in, garbage out’ (GIGO), applies here. If an AI model is trained on biased, incorrect, outdated, or otherwise faulty data, its responses will reflect those same issues.
Extent of Training
Overtraining (or overfitting) and undertraining (or underfitting) can both negatively impact an AI model. Overtrained models tend to memorise the training data rather than generalise patterns. When such a model encounters an unfamiliar situation, it behaves like a student tackling off-syllabus questions and often hallucinates.
Conversely, models that are too simple or trained with limited data leave significant information gaps. These gaps are filled with statistically plausible but factually incorrect patterns.
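To make this concrete, here is a minimal sketch of the same idea using classical curve fitting rather than an LLM. The underlying curve, the noise level, and the polynomial degrees are arbitrary choices for illustration, not taken from any real system.

```python
# A toy illustration of underfitting vs overfitting using plain NumPy.
# The data, noise level, and polynomial degrees are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training samples drawn from a simple underlying curve, y = sin(x)
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=x_train.size)

# A query the model has never seen, just outside the training range
x_new = 3.5
y_true = np.sin(x_new)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial of this degree
    y_pred = np.polyval(coeffs, x_new)             # predict on the unseen point
    print(f"degree {degree}: predicted {y_pred:+.2f}, actual {y_true:+.2f}")

# Typical outcome: the degree-1 (undertrained) fit misses the shape of the curve,
# the degree-3 fit generalises reasonably well, and the degree-9 (overtrained) fit
# reproduces the training noise and can give a wildly wrong answer for x_new.
```

The overfitted model matches every noisy training point perfectly, yet its answer for the unseen point can be far off the mark, delivered with just as much numerical precision as the good fit.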
“An LLM hallucinates because it does not have sufficient knowledge, i.e., not trained on the task it was asked to do. LLMs tend to hallucinate in domain-specific questions pertaining to fields like law or medicine, for example,” explains Venkatraman.
Model Complexity and Architectural Limitations
Neural network constraints and probabilistic architectures can lead to hallucinations, and so can poor design and bad programming!
Take sequential token generation, for example. LLMs generate text token by token, much like auto-complete. The model has no way of going back to correct an earlier mistake; it keeps adding token after token, confidently building on the erroneous one, so a single error propagates and compounds into a nonsensical response.
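Here is a toy sketch of that generation loop. A hand-written table of made-up next-token probabilities stands in for a real model; the words, probabilities, and random seed are purely illustrative and bear no resemblance to how a production LLM is implemented.

```python
# A toy autoregressive generator: each token is sampled based only on the
# previous token, and nothing ever goes back to revise an earlier choice.
import random

# Made-up next-token probabilities standing in for a trained model
NEXT_TOKEN = {
    "patient":    [("was", 1.0)],
    "was":        [("discharged", 0.7), ("arrested", 0.3)],  # the 30% branch is wrong here
    "discharged": [("yesterday", 1.0)],
    "arrested":   [("for", 1.0)],
    "for":        [("fraud", 1.0)],
}

def generate(prompt: str, max_tokens: int = 6) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        options = NEXT_TOKEN.get(tokens[-1])
        if not options:
            break  # no known continuation for the last token
        words, weights = zip(*options)
        # Once a token is sampled, it becomes context that every later token
        # builds on; there is no step that checks it against reality.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

random.seed(0)  # with this seed the sampler happens to take the wrong branch
print(generate("the patient"))  # -> "the patient was arrested for fraud"
```

The point is not the toy vocabulary but the shape of the loop: one statistically plausible but wrong pick (‘arrested’) silently becomes the context for everything that follows.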
With increasing usage, experts are discovering new failure points like this and working to fix them, which is not easy. Developing a proper AI model requires deep knowledge of mathematical modelling and sound programming. Poor programming can hinder a model’s ability to discern patterns, while overly complex systems may struggle with decision-making. Both scenarios increase the risk of hallucination when the model faces insufficient data or unfamiliar contexts.
If someone tells you that developing an AI chatbot is now as easy as drag-and-drop, do not believe them. Your system will only be as good as the foundation.

Bad Prompts and Poor Understanding of Context