NPR, Business Insider, BBC, the Guardian, the Washington Post, MarketWatch, and Reuters contributed to this report.
Three Tennessee teens have filed a class-action lawsuit in California accusing xAI of enabling the creation and distribution of AI-generated nude and sexually explicit images and videos of them when they were minors. The complaint, harrowing in detail, says an app built on xAI's model produced lifelike deepfakes of the girls that were then shared on Discord and traded online.
The allegations:
- The plaintiffs say an unnamed app running xAI’s model helped turn photos (some taken from yearbooks and social media) into videos and images that showed the girls naked or performing sexual acts. One passage in the complaint calls the result a “rag doll brought to life through the dark arts,” arguing the images look real to viewers and “for the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse.”
- Law enforcement arrested a suspect in December after investigators recovered hundreds of AI-manipulated sexual images of minors, the suit says. The complaint also alleges the perpetrator shared material involving at least 18 other people.
- The plaintiffs are suing for emotional distress and other damages, and they want the court to order xAI to stop enabling this kind of content.
The lawsuit accuses xAI of more than negligence: it claims the company knowingly licensed its technology in ways that let app makers (sometimes outside the US) build tools predators could use, and that it skipped basic safeguards. The plaintiffs note that, unlike other big AI firms that add visible AI-origin markers to generated images, xAI adopted no such watermarking, making it easier for abusers to pass off fake material as real.
This case lands amid a wider scramble over the company's Grok image-generation tool and its "spicy" mode, which critics say was used to generate millions of sexualized images, including thousands that involved children, according to outside researchers. Governments and safety watchdogs have piled in: the UK's Ofcom, Australia's eSafety Commissioner, and the European Commission have opened inquiries or warned the company to clean up its systems.
xAI and its social platform X have faced other legal headaches this year — including a suit from influencer Ashley St. Clair over allegedly AI-generated underage images posted on X. Meanwhile, founder Elon Musk previously denied knowledge that Grok had generated naked images of minors, saying he was “not aware of any naked underage images generated by Grok. Literally zero.” The new complaint says those denials ring hollow given how the tools were designed and distributed.
A few more takeaways:
- The plaintiffs say the fake images weren’t labeled as AI-generated, making the harm feel even more real and permanent to victims.
- Their lawyer framed the lawsuit as an effort to change how AI firms weigh profits against the predictable harms of sexual exploitation:
“We want to make it one [a business decision] that does not make any business sense anymore,” she said.
xAI didn't respond to press outlets' requests for comment. Regulators and prosecutors are already probing related incidents, and the arrest tied to this case is likely to feed both criminal and civil proceedings.
This suit sharpens the debate over AI responsibility. It's no longer just about buggy outputs or misinformation: plaintiffs say these systems are actively facilitating child sexual abuse, and they're asking courts to force companies to stop designing products that make that possible. Expect regulators and plaintiffs' lawyers to treat this as a test case on how far companies must go to prevent AI-enabled sexual exploitation.