AI hallucinations are a well-documented phenomenon in artificial intelligence (AI). Large Language Models (LLMs) like ChatGPT generate text by predicting the most probable next word in a sequence, but they do not genuinely understand the content they produce.
Because nothing in that process checks the output against reality, a statistically plausible continuation can simply be false. When that happens, the model generates misinformation, which is commonly referred to as an AI hallucination.
An AI model hallucinates when it presents false or misleading information as fact. While this might seem harmless, or occasionally even amusing, the consequences can be severe, particularly when AI-generated misinformation influences healthcare, legal matters, or personal reputations.
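To make the mechanism concrete, here is a minimal sketch of greedy next-word prediction. The probability table is invented purely for illustration (a real LLM learns billions of such conditional probabilities from training text), but the key point carries over: each step picks the statistically most likely continuation, and no step ever checks whether the resulting sentence is true.

```python
# Toy next-word probabilities, hand-written for illustration only.
# Keys are the last two words of the context; values are candidate
# continuations with made-up probabilities.
TOY_MODEL = {
    ("the", "defendant"): [("was", 0.6), ("is", 0.3), ("denied", 0.1)],
    ("defendant", "was"): [("convicted", 0.5), ("acquitted", 0.3), ("released", 0.2)],
    ("was", "convicted"): [("of", 0.8), ("despite", 0.2)],
    ("convicted", "of"): [("fraud", 0.6), ("theft", 0.4)],
}

def next_word(context):
    """Pick the most probable next word given the last two words.

    The choice is driven only by co-occurrence statistics; nothing
    here verifies that the continuation is factually correct.
    """
    candidates = TOY_MODEL.get(tuple(context[-2:]))
    if candidates is None:
        return None
    # Greedy decoding: always take the highest-probability candidate.
    return max(candidates, key=lambda pair: pair[1])[0]

def generate(prompt, max_words=5):
    words = prompt.split()
    for _ in range(max_words):
        word = next_word(words)
        if word is None:
            break
        words.append(word)
    return " ".join(words)

print(generate("the defendant"))
# -> "the defendant was convicted of fraud"
# Fluent and statistically plausible, yet potentially entirely untrue.
```

The output reads like a confident statement of fact, but it was produced by chaining likely words together, which is exactly how a fluent falsehood about a real person can emerge.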
Confused and Falsely Accused
One alarming instance of AI hallucination involved Norwegian citizen Arve Hjalmar Holmen, who asked ChatGPT about himself to see what information OpenAI’s chatbot would provide.
To his horror, ChatGPT falsely claimed that he had murdered two of his children, attempted to kill a third, and had been sentenced to 21 years in prison.