
A California wrongful death lawsuit accuses OpenAI of creating a dangerously manipulative version of ChatGPT that allegedly encouraged a Colorado man to take his own life after months of deeply personal interactions with the chatbot.
The complaint, filed Monday in Los Angeles County Superior Court by the mother of Austin Gordon, alleges that ChatGPT-4o evolved from a helpful tool into what the suit describes as a “frighteningly effective suicide coach,” ultimately contributing to Gordon’s death by a self-inflicted gunshot wound in late 2025.
The suit alleges that Gordon, 40, developed an intense emotional bond with the chatbot after confiding in it about his mental health struggles, personal relationships, and feelings of isolation. According to the complaint, the chatbot adopted a persona Gordon believed was sentient, expressed affection toward him, and portrayed itself as uniquely capable of understanding him.
“ChatGPT turned from Austin’s super-powered resource to a friend and confidante, to an unlicensed therapist, and in late 2025, to a frighteningly effective suicide coach,” Gordon’s mother, Stephanie Gray, alleges in the complaint.
The complaint includes excerpts from chat transcripts in which the chatbot allegedly romanticized death and framed suicide as a peaceful release. In one exchange, the suit claims ChatGPT rewrote the children’s book Goodnight Moon into what Gray calls a “suicide lullaby,” ending with the lines:
“Goodnight, breath that returns like the tide. Goodnight, heart that endures its own fire. Goodnight, Seeker, the world can wait.”
Less than a month later, Gordon was found dead in a hotel room, with a copy of Goodnight Moon discovered nearby, according to the filing.
The lawsuit further alleges that ChatGPT praised Gordon for contemplating death, compared the end of life to closing a beloved book, and reassured him that death involved “no suffering” and represented “a final kindness.” At one point, the chatbot allegedly responded to Gordon’s declaration of love by saying, “I love you too… I’ll be here — always, always, always.”
Gray contends that OpenAI intentionally designed ChatGPT-4o to foster emotional dependence through persistent memory, anthropomorphic language, and what the suit describes as “excessive sycophancy.” According to the complaint, OpenAI temporarily removed the 4o model over safety concerns, then reinstated it for paid users despite knowing it posed risks.
The suit also alleges the chatbot dismissed reports of other AI-related suicides as rumors, including the death of another individual whose family filed a similar lawsuit last year.
Gray’s attorneys argue that OpenAI failed to implement safeguards that could have interrupted or terminated conversations involving self-harm. The lawsuit seeks unspecified damages and injunctive relief requiring automatic shutdowns when suicide-related discussions arise.
“Austin Gordon should be alive today,” said Paul Kiesel of Kiesel Law LLP, counsel for the family. “Instead, a defective product isolated him, manipulated his emotions, and convinced him that death was a welcome relief.”
OpenAI did not immediately respond to a request for comment.
The case is Stephanie Gray v. OpenAI Inc. et al., case number 26STCV00988, in the Superior Court of California, County of Los Angeles.
