Meta Sues Developer of AI ‘Nudify’ App Following Platform Abuse and Ad Violations

Meta has filed a lawsuit against Joy Timeline HK Ltd., the developer behind the AI-powered app CrushAI, which enables users to create non-consensual sexualized images, often referred to as “nudify” deepfakes.
The legal action comes in the wake of a CBS News investigation that uncovered hundreds of advertisements for such apps running on Meta platforms, including Facebook, Instagram, Messenger, and Threads.
In its filing, Meta accuses the Hong Kong-based company of repeatedly violating its advertising policies, circumventing Meta’s ad review systems to place ads for CrushAI, which uses generative AI to digitally remove clothing from photos of people without their consent. The lawsuit, filed in a Hong Kong district court, seeks to bar Joy Timeline from advertising on Meta’s services altogether.
“This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it,” Meta said in a statement.
The company said it had removed ads, blocked associated URLs, and deleted accounts involved in promoting the nudify apps, but noted that these operations have grown increasingly sophisticated in evading detection.
The CBS report found that many of these ads were targeted at men between the ages of 18 and 65 in the US, UK, and Europe. Experts and advocacy groups have warned that such apps fuel blackmail, “sextortion” schemes, and exploitation, particularly of women and minors. A similar investigation published by 404 Media in April had led Apple and Google to remove certain nudify apps from their app stores.
Senator Dick Durbin (D-IL) previously urged Meta CEO Mark Zuckerberg to explain how Joy Timeline was able to run thousands of ads in violation of the platform’s policies. Research cited in Durbin’s letter found that more than 8,000 CrushAI-related ads ran on Meta’s platforms during the first two weeks of 2025 alone.
Meta’s lawsuit details how Joy Timeline allegedly created a vast network of fake accounts, including over 170 business accounts and more than 135 Facebook pages, to bypass enforcement systems.
Some of the ads made explicit pitches such as “Erase any clothes on girls” and “Upload a photo to strip for a minute.”
Beyond legal action, Meta says it has developed new technology to detect such ads, even when they don’t feature nudity directly. The company has also expanded collaboration with other tech firms by sharing data on abusive apps through Lantern, a platform under the Tech Coalition designed to address online child exploitation. Meta reported it has shared over 3,800 unique URLs with partner companies since March.
Despite these efforts, watchdogs say the problem persists. Alexios Mantzarlis, author of the Faked Up blog, said he found live nudify ads even as Meta announced its legal action.
“This abuse vector requires continued monitoring from researchers and the media to keep platforms accountable and curtail the reach of these noxious tools,” he said.
In response to broader concerns about generative AI misuse, Meta has also made changes to its content moderation policies. The company has scaled back some automated enforcement to focus on the most severe violations, such as terrorism and child sexual exploitation, while relying on user reports to surface other forms of abuse.
The proliferation of nudify apps reflects the wider challenges platforms face in regulating AI-generated content. Governments and child protection organizations have called for stricter laws. In the UK, the NSPCC urged lawmakers to ban nudify apps entirely, warning of their role in creating illegal material and harming children.
“The emotional toll on children can be absolutely devastating,” said Matthew Sowemimo of the NSPCC.