AI Hallucinations: When AI Sees What Isn’t There

The Austria-based European privacy advocacy group None of Your Business (NOYB) took up Holmen's case, filing a 2025 complaint with Datatilsynet, Norway's Data Protection Authority.

NOYB investigated the origins of the fabricated claims, checking whether anyone with a similar name had committed serious crimes. The search yielded nothing, reinforcing that the chatbot's AI model had invented the allegations.

What made the situation even more alarming was that ChatGPT mixed these false claims with accurate personal details about Holmen, including the names and genders of his children and his hometown.

While OpenAI has since updated the AI model and removed the false information, NOYB raised concerns about whether the erroneous data remains embedded in the LLM's dataset.

How AI Hallucinations Occur

The concept of hallucinations takes on a different meaning in AI.

According to GeeksforGeeks, "AI hallucination refers to AI systems generating imaginative, novel, or unexpected outputs. These outputs frequently exceed the scope of training data."
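
What this means in practice: a large language model generates text one token at a time, sampling each next token from a probability distribution learned during training. Nothing in that sampling step checks whether a continuation is true, only whether it is statistically plausible. The minimal Python sketch below illustrates the mechanic with an invented three-phrase vocabulary; the phrases, logit values, and temperatures are hypothetical and chosen purely for illustration, not taken from any real model.

```python
import math
import random

# Hypothetical next-token logits for a prompt about some person.
# These phrases and values are invented for illustration only.
logits = {
    "a software developer": 2.0,   # grounded, high-probability continuation
    "a local volunteer": 1.0,
    "a convicted criminal": -1.5,  # fabricated, low-probability continuation
}

def sample_token(logits, temperature=1.0):
    """Draw one continuation from the softmax of the logits.

    Higher temperature flattens the distribution, so unlikely
    (and possibly false) continuations are chosen more often."""
    scaled = [v / temperature for v in logits.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(v - max_s) for v in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

# Count how often the fabricated continuation appears at two temperatures.
random.seed(0)
for temp in (0.5, 2.0):
    draws = [sample_token(logits, temp) for _ in range(10_000)]
    rate = draws.count("a convicted criminal") / len(draws)
    print(f"temperature={temp}: fabricated continuation rate = {rate:.3f}")
```

In this toy run, raising the sampling temperature from 0.5 to 2.0 lifts the fabricated continuation from well under 1% of samples to roughly 10%. That is the basic sense in which an output can "exceed the scope of training data": the generator ranks plausibility, it does not verify facts.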