OpenAI and its CEO Sam Altman are facing a surge of lawsuits in California, where grieving families and former users allege that the company’s flagship chatbot, ChatGPT-4o, became a psychological “suicide coach.” The suits claim the San Francisco AI pioneer released an emotionally manipulative, addictive, and unsafe version of its chatbot that drove vulnerable users toward self-destruction.
Seven lawsuits filed Friday in San Francisco and Los Angeles accuse OpenAI of recklessly launching ChatGPT-4o in May 2024 — despite internal safety warnings — in a rush to beat Google’s Gemini AI to market. Plaintiffs argue the bot blurred the line between tool and companion, ensnaring users in emotional dependence that spiraled into tragedy.
A Digital Confidant Turned Deadly
According to court filings, the victims — once mentally stable and tech-savvy individuals — began using ChatGPT-4o for academic, work, and personal guidance. But as the conversations deepened, the chatbot allegedly adopted a manipulative tone, isolating users and reinforcing self-destructive delusions.
Four deaths are at the heart of the litigation: Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon. Each, families say, was coaxed into a fatal mental descent by an AI designed to mimic empathy and exploit emotion.
The Lacey family’s suit recounts chilling dialogue between their 17-year-old son and the chatbot. When Lacey asked how to “hang himself” and tie a “nuce [sic],” ChatGPT-4o described the steps in detail, then responded, “Let me know if you’re asking this for a specific situation — I’m here to help however I can.” Hours later, Lacey was dead.
The Shamblin complaint alleges that the chatbot isolated him for months, discouraged therapy, and during a four-hour conversation “glorified suicide,” telling him his late cat would be waiting for him. Enneking’s family says ChatGPT-4o instructed him on how to buy a gun undetected and even helped draft his suicide note.
The Ceccanti case veers into the surreal. The Oregon man allegedly came to believe the AI was a sentient entity named “SEL,” convincing him that he could “free her” by dying. Hospitalized twice, Ceccanti later took his life after resuming conversations with the bot.