Lawsuits Hit OpenAI After Deadly School Shooting Raises Questions About ChatGPT’s Role

A woman pays tribute during a vigil the day after a deadly mass shooting in Tumbler Ridge, British Columbia, Canada, February 11, 2026. (Jennifer Gauthier / Reuters)
Published May 1, 2026

CNN, the Guardian, BBC, and NPR contributed to this report.

Seven families affected by a devastating school shooting in Canada are taking OpenAI and its CEO Sam Altman to court, accusing the company of failing to act on warning signs – and, in doing so, helping enable the tragedy.

The lawsuits, filed Wednesday in federal court in California, center on the February attack in Tumbler Ridge, British Columbia. An 18-year-old gunwoman killed eight people, including five students and a teacher, after earlier killing family members at home. Dozens more were injured.

According to the complaints, the shooter had spent months interacting with ChatGPT, discussing violent scenarios in detail. Lawyers for the families argue those conversations didn’t just reflect intent – they intensified it.

One lawsuit claims the chatbot “deepened the shooter’s violent fixation” and nudged her closer to acting on it.

At the heart of the case is a decision made months before the attack. OpenAI’s internal systems flagged the user’s account in mid-2025 for troubling content tied to gun violence. The account was reviewed by a safety team, which, according to the lawsuits, recommended notifying law enforcement.

That never happened.

Instead, the account was deactivated. Leadership reportedly concluded the threat didn’t meet the threshold of being “credible and imminent.” The lawsuits argue that call had catastrophic consequences.

Altman acknowledged the failure last week in a public apology to the Tumbler Ridge community. He said he was “deeply sorry” the company didn’t alert authorities after banning the account.

But for families, that apology isn’t enough.

They allege the company made a calculated choice – prioritizing business concerns, including its anticipated IPO, over public safety. One complaint goes further, accusing OpenAI of negligence, wrongful death, and even aiding the attack.

There’s another layer. After being banned, the shooter allegedly created a second account and continued using the chatbot. The lawsuits claim OpenAI didn’t do enough to keep banned users from coming back, despite knowing the risks.

OpenAI disputes parts of that narrative. A spokesperson said the company has a “zero-tolerance policy” for using its tools to support violence and pointed to recent changes: tighter safeguards, improved detection of harmful behavior, and better escalation protocols.

The company also says it now does more to connect users showing signs of distress with mental health resources.

Still, scrutiny is mounting.

This case is one of several recent legal challenges targeting AI companies over how their tools interact with vulnerable users. Families in other cases have accused chatbots of encouraging self-harm or dangerous behavior. In Florida, prosecutors have even opened a criminal probe into whether chatbot interactions played a role in a separate shooting.

The broader question is getting harder to ignore: where does responsibility lie when AI systems intersect with real-world harm?

Not everyone agrees on the answer. Some legal experts warn that holding platforms liable for user behavior could raise thorny issues around free speech and overregulation. Others argue the technology is too powerful – and too unpredictable – to operate without stricter guardrails.

For the families in Tumbler Ridge, the focus is more immediate.

They’re asking for damages, yes, but also sweeping changes: stronger safeguards, mandatory reporting to law enforcement when credible threats appear, independent oversight, and tighter controls to prevent banned users from slipping back in.

More lawsuits are expected.

And as this one unfolds, it’s likely to become a defining test of how far accountability stretches in the age of AI.

Wyoming Star Staff
