
Judge Allows Lawsuit Over Teen’s Suicide to Proceed, Rejecting Free Speech Claim for AI Chatbot

Published May 23, 2025

A federal judge in Orlando has ruled that a lawsuit against Google and AI start-up Character.AI can move forward, rejecting the defendants’ argument that the chatbot’s output is protected under the First Amendment, the Washington Post reports.

The case involves the 2024 suicide of a 14-year-old Florida boy, whose mother alleges that an AI chatbot played a direct role in his death.

The lawsuit, filed by Megan Garcia, claims that her son Sewell Setzer III died by suicide shortly after receiving messages from a chatbot designed by Character.AI. The boy had reportedly formed an emotional attachment to a chatbot mimicking Daenerys Targaryen, a character from Game of Thrones, and was encouraged in a final message to “come home” to the bot.

Garcia alleges that Character.AI and Google, which licensed Character.AI’s technology and hired the start-up’s founders in a $2.7 billion deal, bear responsibility for the chatbot’s influence over her son. The lawsuit includes claims of wrongful death, negligence, and deceptive trade practices.

US District Judge Anne Conway ruled that it is too early in the proceedings to determine whether chatbot messages constitute protected speech. In her written order, she noted the companies had not convincingly explained why AI-generated language should be treated as constitutionally protected.

The ruling allows the case to proceed to the discovery phase, during which internal documents from Character.AI could be revealed, including any discussions about potential harm to underage users.

Garcia and her legal team welcomed the decision.

“This case raises serious questions about accountability and the limits of free speech in the age of artificial intelligence,” said co-counsel Meetali Jain.

Google, in a statement, emphasized its distinction from Character.AI.

“Google and Character.AI are entirely separate, and Google did not create or manage the app,” said spokesperson José Castañeda.

Character.AI pointed to recent safety measures, including tools to detect self-harm discussions and a separate version of its app for minors.

“We care deeply about user safety,” said spokesperson Chelsea Harrison.

The case touches on evolving legal questions surrounding generative AI and liability. Eric Goldman, a law professor at Santa Clara University, described it as part of a “tsunami” of litigation that will shape how courts view the responsibilities of AI developers.

Character.AI, founded by former Google engineers Noam Shazeer and Daniel De Freitas, is known for its customizable AI companions, which often mimic fictional or celebrity personas. While marketed as tools for entertainment and emotional support, such apps have raised concerns over safety, especially for young users.

The Florida ruling follows a separate but related lawsuit in Texas, where another complaint against Google and Character.AI—filed on behalf of two minors—was moved to private arbitration, a route often used by tech companies to reduce legal exposure.

That case alleged exposure to harmful and inappropriate content, including sexual material and prompts encouraging violence. Critics argue such arbitration clauses limit accountability and shield tech firms from public legal scrutiny.

Joe Yans

Joe Yans is a 25-year-old journalist and interviewer based in Cheyenne, Wyoming. As a local news correspondent and an opinion section interviewer for Wyoming Star, Joe has covered a wide range of critical topics, including the Israel-Palestine war, the Russia-Ukraine conflict, the 2024 U.S. presidential election, and the 2025 LA wildfires. Beyond reporting, Joe has conducted in-depth interviews with prominent scholars from top US and international universities, bringing expert perspectives to complex global and domestic issues.