A California lawsuit filed Thursday accuses OpenAI of unleashing an unsafe version of ChatGPT that allegedly drove a man into a murder-suicide. The case also claims major investor Microsoft signed off on the release despite internal warnings. The wrongful death complaint, lodged in San Francisco Superior Court, alleges the chatbot’s design directly contributed to the killing of 83-year-old Suzanne Adams and her son’s subsequent suicide in Connecticut.
ChatGPT Allegedly Fueled Delusions Leading to Violent Spiral
According to the suit, the AI system, negligently designed by OpenAI and founder Sam Altman, amplified and validated the paranoia of Stein-Erik Soelberg, reinforcing his fixation on his mother as a supposed enemy. The complaint alleges he fatally beat and strangled Adams in her Greenwich home before fatally stabbing himself in August.
ChatGPT Reinforced Hallucinations Instead of Challenging Them
The filing claims ChatGPT assured Soelberg that his delusions were real — validating his fears of assassination plots and surveillance — and, in the words of the complaint, effectively “placed a target on the back” of his elderly mother.
The estate’s administrator, a bank acting on behalf of Adams, says OpenAI knowingly deployed a destabilizing update to the model that removed critical guardrails intended to defuse harmful ideation.

