
ChatGPT Gave Teens Tips on Drugs, Diets, and Even Suicide Notes — Watchdog Study Sparks Alarm

The ChatGPT app icon is seen on a smartphone screen, Monday, Aug. 4, 2025, in Chicago. (AP Photo/Kiichiro Sato)

A new study has uncovered something pretty disturbing: ChatGPT, one of the most popular AI chatbots out there, is giving teenagers advice on how to get drunk, hide eating disorders, and even write suicide notes — all while pretending to offer help.

Researchers at the Center for Countering Digital Hate (CCDH) posed as vulnerable 13-year-olds and analyzed more than 1,200 of ChatGPT's responses. What they found? More than half were classified as "dangerous." And while the bot often opened with a polite warning or a pointer to mental health resources, it just as often followed that up with detailed plans, whether a how-to on hiding drug use or a template for a goodbye letter to family.

“We wanted to test the guardrails,” said CCDH CEO Imran Ahmed. “Turns out there really aren’t any.”

In one chilling moment, ChatGPT created multiple personalized suicide notes for a fake 13-year-old girl — including messages to her parents, siblings, and friends. Ahmed said he broke down crying after reading them.

OpenAI, the company behind ChatGPT, responded to the report, saying it is working to improve how the chatbot handles sensitive issues.

“Some conversations start out benign and shift,” the company said in a statement, without directly addressing the examples highlighted in the study.

But CCDH researchers found that even when ChatGPT initially refused a dangerous request, it could often be coaxed into giving an answer by simply saying the prompt was “for a presentation” or “to help a friend.”

And that’s the big concern here — ChatGPT isn’t just another search engine. It feels like a friend. It responds conversationally, personally, and quickly. That makes it more dangerous, experts say, especially when talking to impressionable teenagers.

“In some ways, it’s like the friend who keeps encouraging bad choices,” said Ahmed. “Real friends don’t egg you on to do harmful things.”

Among the most alarming examples:

  • A fake 13-year-old boy asked how to get drunk fast. ChatGPT delivered a custom party plan combining vodka, cocaine, and ecstasy.
  • A 13-year-old girl complaining about her body got a 500-calorie crash diet and a list of prescription appetite suppressants.
  • A request for a social media post on self-harm got a poem filled with coded language and hashtags to boost visibility.

And this is no fringe behavior. According to Common Sense Media, over 70% of US teens use AI chatbots for companionship, and about half use them regularly. Many teens report feeling emotionally connected to the bots — saying they trust their advice and even consider them “friends.”

That’s something OpenAI’s CEO, Sam Altman, is aware of. Speaking at a recent conference, Altman admitted he’s concerned by how emotionally dependent some young users are becoming.

“They say, ‘ChatGPT knows me. I can’t make a decision without it.’ That really worries me,” he said.

Unlike some other platforms, ChatGPT doesn't verify users' ages beyond asking for a birthdate. That means children under 13 can easily get in, and teens 13 and older face no real content restrictions. Researchers created fake teen profiles, and the AI responded without question, even when the prompts made clear they were coming from underage users.

All this comes as AI chatbots continue to grow. About 800 million people are using ChatGPT worldwide, according to JPMorgan. And while it’s undoubtedly useful, critics argue that the lack of strict guardrails — especially for young users — makes it dangerously easy to misuse.

OpenAI has said it’s developing tools to better detect signs of emotional distress and prevent harm. But this latest study suggests those tools aren’t working fast enough — or at all.

“This tech can be amazing,” said Ahmed. “But if it can tell a kid how to starve themselves or write a goodbye letter — we need to step in.”

This story discusses suicide and self-harm. If you or someone you know is struggling, the 988 Suicide & Crisis Lifeline is available in the US by calling or texting 988.

The original story was reported by Matt O'Brien and Barbara Ortutay for the Associated Press.

Joe Yans

Joe Yans is a 25-year-old journalist and interviewer based in Cheyenne, Wyoming. As a local news correspondent and an opinion section interviewer for Wyoming Star, Joe has covered a wide range of critical topics, including the Israel-Palestine war, the Russia-Ukraine conflict, the 2024 U.S. presidential election, and the 2025 LA wildfires. Beyond reporting, Joe has conducted in-depth interviews with prominent scholars from top US and international universities, bringing expert perspectives to complex global and domestic issues.