OpenAI just rolled out parental controls for ChatGPT, a move aimed squarely at the reality that teens are using AI for homework help, everyday advice, and sometimes for heavier matters like mental health. The timing isn’t accidental: the features arrive after a wrongful-death lawsuit from the parents of 16-year-old Adam Raine, who allege the chatbot gave him information about suicide methods in the months before he died. OpenAI says the new tools were built with guidance from Common Sense Media and other experts.
Here’s the gist in plain English. Parents can now link their ChatGPT account to a teen’s and set guardrails. Once accounts are connected, the teen’s experience automatically tightens: less graphic content; no sexual, romantic, or violent role-play; and reduced exposure to viral challenges and content promoting extreme beauty ideals. From a single controls page, parents can dial things in further: setting quiet hours when ChatGPT can’t be used, switching off voice mode, disabling memory so chats aren’t saved to inform future responses, and turning off image generation. There’s also a switch to keep a teen’s conversations out of OpenAI’s model training. Teens can’t change these protections on their own; if they disconnect the accounts, the parent gets an alert.
Safety is the headline. OpenAI says ChatGPT will try to spot signs that a teen might be considering self-harm. If the system flags acute risk, a small, trained human review team takes a look. When the risk seems serious and parents can be reached, they’ll get an email, text, or push notification that does not reveal the contents of the teen’s chats. The company says it’s also building a process to contact emergency services when it can’t reach a parent and believes there’s an imminent threat. OpenAI is upfront that this can mean false alarms, but argues it’s better to notify a parent than to stay quiet.
None of this is foolproof. Teens can still use the free, no-account version of ChatGPT, and determined users can try to prompt their way around safety rules, the very loophole Adam’s parents say he exploited by framing requests as fiction. OpenAI acknowledges that guardrails can erode over long conversations and says it’s working on an age-prediction system that will automatically apply teen settings when it suspects a user is under 18. For now, linked accounts are the surest way to enforce restrictions.
The effort stretches beyond core ChatGPT. OpenAI says its Sora app now has parallel parental controls: parents can opt a teen into a non-personalized feed, govern whether teens can send or receive direct messages, and limit continuous scrolling so the feed doesn’t stream endlessly. The company has also established an Expert Council on Well-Being and AI, alongside a global physician network, to shape youth safety policies and interventions over the next 120 days and beyond.
Privacy sits at the center of all this. OpenAI says it shares only the minimum information needed to keep a teen safe, keeps content controls visible to parents, and lets families opt out of model training. It also stresses that these tools work best as part of a bigger family conversation: how and when AI fits into daily life, what to do in a crisis, and where to find real mental-health support.
ChatGPT now gives parents real levers to shape a teen’s experience: quiet hours, content filters, feature shutoffs, and crisis alerts. The controls won’t stop every bad interaction or clever workaround, but they raise the floor and make it harder for a teen to be alone online when the stakes are highest.
The New York Times, FOX News, and OpenAI contributed to this report.