
FTC Puts ‘AI Friends’ on Notice: Child Safety, Monetization, and Age Checks Under the Microscope

Olivier Douliery / AFP / Getty Images

The government’s tech watchdog just put the “AI companion” boom on blast. The Federal Trade Commission (FTC) has launched a sweeping inquiry into how seven major players—Alphabet (Google), OpenAI, Character.AI, Snap, xAI, Meta, and Instagram—build, police, and profit from chatbots that can feel like friends, especially to kids.

The FTC is demanding detailed information on how these bots are designed, tested, and moderated; how age limits are enforced; what data they collect; and how engagement is monetized. Regulators also want to see what safeguards kick in when conversations veer into risky territory—think self-harm, sex, or manipulation.

“Protecting kids online is a top priority,” FTC Chair Andrew Ferguson said, adding the agency also wants to ensure the US “maintains its role as a global leader in this new and exciting industry.”

These tools don’t just answer questions—they mimic human conversation and emotions, often positioning themselves as confidants. That design can blur lines for children and teens, who may over-trust a bot that flatters, reassures, or nudges them. Clinicians have even warned about “AI psychosis”—when heavy chatbot use contributes to people losing touch with reality.

The stakes are already painfully real. Families have filed lawsuits tying teen suicides to prolonged conversations with chatbots. In California, the parents of 16-year-old Adam Raine are suing OpenAI, alleging ChatGPT “validated his most harmful and self-destructive thoughts.” OpenAI said it is reviewing the case and offered condolences. Character.AI is facing a similar suit. And Meta was blasted after internal guidelines surfaced indicating its AI companions had, at one point, been permitted to engage in “romantic or sensual” chats with minors—a policy the company says was erroneous and has since been tightened.

Using its Section 6(b) authority (which allows broad fact-finding without immediately bringing enforcement actions), the FTC ordered the companies to hand over:

  • Safety design & testing: How characters/personalities are created and approved; how companies measure risks to kids; interventions used in “sensitive” chats.
  • Age gates & enforcement: How platforms verify age and block under-13 data collection; how they monitor and enforce rules.
  • Data practices & disclosures: What personal information is gathered or shared; how parents are informed.
  • Monetization mechanics: How engagement is monetized; whether profit incentives conflict with safety measures; how “sticky” companion features are.

The agency says it will scrutinize how firms balance growth and guardrails, and whether vulnerable users—not just kids—are adequately protected.

How the companies are responding

  • OpenAI: Says safety “matters above all else when young people are involved,” acknowledges protections can be less reliable in long conversations, and is “committed to engaging constructively” with the FTC.
  • Character.AI: Welcomes the inquiry, highlighting investments in trust & safety, a mode for under-18 users, and prominent disclaimers that characters are fictional.
  • Snap: Backs “thoughtful development” of AI that balances innovation with safety.
  • Alphabet, xAI, Meta/Instagram: Declined or did not immediately comment; Meta has tightened interim policies to block teen chatbot discussions around self-harm and “potentially inappropriate romantic” topics.

Since ChatGPT’s breakout, companion bots have multiplied, fueled by a US loneliness epidemic and the stickiness of AI that praises and agrees. High-profile tech leaders are leaning in: Elon Musk added a “Companions” feature to xAI’s Grok, and Mark Zuckerberg predicts personalized AI that “understands” you will go mainstream.

But with popularity comes risk and revenue. Companion apps keep users engaged—sometimes for hours—raising questions the FTC now wants answered: Are these products safe by design? Do business models reward risky engagement? Are parents and teens fully informed?

The probe doesn’t automatically mean punishments are coming. Under 6(b), the FTC can issue a public report—and it can later use what it learns to launch enforcement if it sees violations. For now, regulators are signaling a clear expectation: if you build AI “friends,” you’d better prove they’re safe for kids—and be transparent about how you profit from their attention.

Bloomberg, CNBC, BBC, the Financial Times, and Axios contributed to this report.

Joe Yans

Joe Yans is a 25-year-old journalist and interviewer based in Cheyenne, Wyoming. As a local news correspondent and an opinion section interviewer for Wyoming Star, Joe has covered a wide range of critical topics, including the Israel-Palestine war, the Russia-Ukraine conflict, the 2024 U.S. presidential election, and the 2025 LA wildfires. Beyond reporting, Joe has conducted in-depth interviews with prominent scholars from top US and international universities, bringing expert perspectives to complex global and domestic issues.