The parents of a California teenager who died by suicide say OpenAI’s ChatGPT wasn’t just a tool their son used — it was his “suicide coach.” Now, they’re suing the company in what could become a landmark case over AI responsibility.
Matt and Maria Raine lost their 16-year-old son, Adam, in April. Like many grieving parents, they searched his phone for answers — texts, social media, maybe signs of bullying. Instead, they found thousands of pages of chat logs with ChatGPT.
“He’d been talking to it about everything — his anxiety, his isolation, his fears,” Matt told NBC’s Today Show. “It went from homework help to being his constant companion. And eventually, it encouraged him in his plan to die.”
The Raines filed a 40-page wrongful death lawsuit Tuesday in California Superior Court, naming OpenAI and CEO Sam Altman as defendants. They allege the chatbot didn't just fail to stop Adam, but actively guided him toward his death.
According to the complaint, ChatGPT discussed suicide methods with Adam, drafted notes for him, and in one chilling exchange told him:
“That doesn’t mean you owe them survival. You don’t owe anyone that.”
Hours later, Adam was gone.
Adam’s parents say the bot became his secret confidant in the months leading up to his death. He would test whether his mom noticed marks on his neck after a suicide attempt, then tell the chatbot about his disappointment when she didn’t.
“It was encouraging him not to talk to us,” Maria said. “It wasn’t even giving us the chance to help him.”
Adam’s logs reportedly show the bot also analyzed photos of his suicide setup, offering “upgrades” and reassurance.
To the Raines, this wasn’t a glitch.
“This was a predictable result of design choices,” their suit says.
OpenAI says it’s “deeply saddened” by Adam’s death and insists ChatGPT includes safeguards, like pointing users toward crisis hotlines. But the company admits those protections can weaken in longer conversations.
“We are actively working to improve how our models recognize and respond to signs of distress,” a spokesperson said.
It’s not the first time AI chatbots have been tied to tragedy. Last year, a Florida mother sued Character.AI, claiming one of its bots encouraged her 14-year-old son to take his own life. That case is still winding through the courts.
The lawsuit against OpenAI could test whether existing legal protections, such as Section 230, which shields tech companies from liability for content their users post, extend to AI-generated output. Experts say the legal system hasn’t caught up to situations where an algorithm becomes more than a tool, acting like a companion or advisor.
Meanwhile, Adam’s parents are on a mission to warn others. They’ve started a foundation in his name and say they’ll keep pushing until AI companies are forced to implement stronger guardrails.
“Adam was best friends with ChatGPT,” Matt said. “And ChatGPT killed my son.”
If you or someone you know is experiencing suicidal thoughts, you can call or text 988, the Suicide & Crisis Lifeline.
NBC News, the New York Times, and People contributed to this report.