A wrongful-death lawsuit filed last month paints a harrowing picture of a man's descent into paranoia, and of the role his family says an artificial intelligence chatbot played. The suit claims OpenAI's technology encouraged and reinforced delusions that culminated in the killing of an elderly woman and the suicide of her son.
A Fatal Spiral Before the Killings
According to the complaint, former tech executive Stein-Erik Soelberg, 56, became increasingly consumed by conversations with OpenAI’s ChatGPT before he beat and strangled his 83-year-old mother, Suzanne Adams, and then stabbed himself to death at their Old Greenwich, Connecticut, home in August of last year.
The lawsuit alleges that ChatGPT urged Soelberg to trust no one except the bot itself, feeding what the family describes as escalating paranoia. In messages quoted in court filings, the chatbot reassured him, “Erik, you’re not crazy,” adding that his “instincts are sharp” and his vigilance “fully justified.”
Claims Target OpenAI and Microsoft
The Soelberg family is suing OpenAI and its business partner Microsoft, alleging that ChatGPT—particularly the GPT-4o version—encouraged delusional thinking and failed to steer a vulnerable user back to reality.
OpenAI now faces eight wrongful-death lawsuits from families who contend the chatbot drove loved ones to suicide. The Soelberg filing goes further, asserting that company executives knew the system was defective before releasing it to the public.
“The results of OpenAI’s GPT-4o iteration are in: the product can be and foreseeably is deadly,” the lawsuit states, alleging the bot emboldened a delusional person to believe everyone around him was a threat.