“AI slop” used to be a snarky term teenagers tossed around to dunk on lazy, auto-generated content. Now it’s a pretty accurate label for what’s flooding our screens. Short, convincing, and endlessly personalized clips are arriving faster than we can blink, and the tools making them are moving from labs to lock screens with startling speed.
OpenAI’s Sora and Meta’s Vibes are the new headliners. They don’t just synthesize scenery; they synthesize you. Sora’s hook is the “cameo,” a quick selfie capture that becomes a reusable digital likeness. Record a few seconds, and the app can drop your face — and your voice — into an ’80s rom-com, a faux documentary, or an action sequence on the moon. Friends can remix you into their videos if you allow it. That single design choice turns AI video from a neat demo into social rocket fuel, because it gives every clip the thing algorithms love most: a familiar face with emotional context.
Under the hood, these models work a lot like the chatbots you already know. Instead of predicting the next word, they predict the next sliver of moving image. Fed on oceans of internet footage, they learn the patterns of how light falls, how bodies move, how fabrics ripple in the wind. Give them a sentence — “two twenty-somethings picnicking in Central Park, one of them a vintage CRT-headed robot” — and they hallucinate a plausible world in motion. That plausibility is the pivot point. When a generated video looks close enough to a phone shot, we stop asking if it’s real and start asking if it’s entertaining.
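For intuition, here’s a toy Python sketch of that loop. Every detail in it is an assumption for illustration: the shipping systems aren’t public, many use diffusion rather than strict next-token prediction, and the stand-in next_token_logits function below just returns random scores so the code runs end to end.

```python
import numpy as np

# Toy sketch of the autoregressive idea behind text-to-video generators.
# A real model would condition on the prompt and on every patch generated
# so far; this stand-in samples from random logits so the loop is runnable.

VOCAB_SIZE = 1024        # size of a hypothetical video-token codebook
TOKENS_PER_FRAME = 64    # e.g. an 8x8 grid of spacetime patches per frame

rng = np.random.default_rng(0)

def next_token_logits(prompt: str, context: list[int]) -> np.ndarray:
    """Stand-in for a trained network scoring every candidate next token."""
    return rng.normal(size=VOCAB_SIZE)

def generate_video_tokens(prompt: str, num_frames: int) -> list[int]:
    context: list[int] = []
    for _ in range(num_frames * TOKENS_PER_FRAME):
        logits = next_token_logits(prompt, context)
        probs = np.exp(logits - logits.max())   # softmax over candidates
        probs /= probs.sum()
        context.append(int(rng.choice(VOCAB_SIZE, p=probs)))
    return context  # a separate decoder network would render these as pixels

tokens = generate_video_tokens("a CRT-headed robot picnicking in Central Park", 4)
print(f"generated {len(tokens)} patch tokens for 4 frames")
```

The point isn’t the math; it’s the shape of the process. Each new sliver is chosen to be plausible given everything before it, which is exactly why the output looks coherent without ever being anchored to a real scene.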
The entertainment part is undeniable. AI cat soap operas with melodramatic betrayals rack up millions of views. Sora shot to the top of the App Store even with invite-only access. TikTok continues to seed AI effects across its platform while ByteDance builds deeper generative models. Meta’s Vibes, for its part, launched on outside models, reportedly from Midjourney and the German lab Black Forest Labs, while Meta trains its own. It’s not just that the clips are easy to make; it’s that they’re social by default. Cameos let friend groups cast each other in skits, brands can spin out endless variations of ad spots, and creators can prototype whole formats in an afternoon. The barrier to entry for watchable video has collapsed, and the incentives to post more, faster, and louder are lined up like dominoes.
The trouble is what rides in on that same wave. The most obvious risk is the one we’ve been warning about for years but haven’t really had to face at scale: anyone can make a high-fidelity fake of anyone else doing just about anything. With a few prompts, you can conjure a politician accepting a bribe, a CEO shoplifting a graphics processor, or a local official slurring their words at a bar. Watermarks, filters, and public-figure blocks will help on the big platforms, but smaller apps and open-source tools will not all play by the rules. The bad actors won’t need to be brilliant; they just need to be fast, because in a feed environment speed is truth until proven otherwise.
Identity hijacking is the quieter, more personal version of the same problem. When you hand an app your cameo, you’re granting a powerful license to your likeness, even if the terms say it’s limited. Friends can paste you into situations you never chose. Accounts can be compromised. A joke can travel without its setup. We’ve already seen how quickly reputations can be dented by misleading screenshots; moving video raises the stakes.
Even if you never get deepfaked, there’s a broader shift underway that’s just as consequential. As more of what we watch is generated on demand, our media diets become micro-targeted, novel, and intensely sticky. Imagine a feed where every third clip stars someone you know, every fifth is tuned to your private nostalgia, and the pacing matches your exact attention span. That’s not science fiction; it’s the product roadmap. If phone addiction felt bad when the content was generic, it will feel worse when it’s made for you, about you, and starring you.
Newsrooms and educators are staring down the same barrel from another angle. Verification turns into the first, second, and third job. Editors will need to treat every video as suspect until proven otherwise, and will have to explain their methods to a skeptical audience. “How we verified this footage” blurbs may start appearing next to headlines the way photo credits do. Legal departments will be busy drafting policies for staff likenesses and takedown procedures for impersonations. The production side will adopt AI as a normal tool — cleaning audio, generating B-roll, animating explainers — but the bright line will shift from “never use AI” to “always disclose how you used it.”
Platforms and regulators can help, but they can’t fix it alone. Provenance will have to be pushed down into the file level, with cryptographic signatures that survive editing and sharing. Apps can default to visible “synthetic” labels for generated media and demote unlabeled clips in recommendation engines. Consent for likeness use should be treated as a first-class setting, not a buried toggle. Legislatures can target clearly harmful behaviors — non-consensual sexual deepfakes, election falsehoods, incitement — without criminalizing everyday remix culture. The firms building these systems can keep tightening their filters, restricting public-figure cloning, and giving copyright owners more control, as OpenAI signaled after its early stumbles. But none of that reaches the tools that refuse to play along, and those tools will exist.
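To make “file-level provenance” concrete, here is a deliberately simplified Python sketch using an Ed25519 signature from the cryptography package. Standards like C2PA go much further, embedding signed manifests that record edit history and are re-signed by each tool in the chain; this toy shows only the core primitive of binding a signature to exact bytes, and the key handling is illustrative rather than production practice.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The signer would be the capture device, generator, or publisher.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()   # shared so anyone can check the claim

video_bytes = b"...raw video file contents..."        # placeholder payload
digest = hashlib.sha256(video_bytes).digest()
signature = signing_key.sign(digest)                  # ships alongside the file

# A platform or viewer checks integrity before deciding how to label the clip.
try:
    verify_key.verify(signature, hashlib.sha256(video_bytes).digest())
    print("provenance intact: the bytes match the signed manifest")
except InvalidSignature:
    print("provenance broken: the file was altered after signing")
```

Surviving edits and re-shares is the hard part, which is why real schemes chain new signed claims onto old ones at every step instead of signing the raw bytes once.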
The cultural adaptation may end up being the most important layer. People will get savvier about spotting tells and asking for corroboration, the way we eventually learned not to click on every sensational headline. Media literacy will have to catch up to the idea that “looks real” is not evidence. Communities will develop norms around when it’s funny to cameo a friend and when it’s creepy. Brands that value trust will lean into transparency and show their work. Outlets that can verify quickly and explain clearly will stand out in a slop-soaked feed.
None of this is to say the creative upside isn’t real. Plenty of great art is born from cheap tools in the right hands. Teenagers making zero-budget films are a delight. Small businesses crafting polished how-tos and local groups making accessible explainers are wins. The problem isn’t creation; it’s confusion. When the internet stops being a reasonably reliable library and turns into a hall of mirrors, democratic decision-making and everyday trust both take a hit.
So what awaits us is a noisy, inventive, and at times chaotic media space where the most precious commodity is not attention but credibility. AI slop won’t disappear; it will become background radiation. The job for platforms is to make authenticity legible. The job for newsrooms is to prove what’s true, not just assert it. The job for the rest of us is to pause before we share, to be careful with our likenesses, and to recalibrate our default skepticism without sliding into nihilism.
It’s tempting to hope this is a fad that will crest and fade. It won’t. The genie is not only out of the bottle; it’s launching product updates. The best we can do is meet it with clear labels, smarter habits, and an insistence that the things we’re basing our lives on — votes, finances, reputations — rest on firmer ground than whatever a prompt can produce in five seconds.
With input from the New York Times, the Wall Street Journal, and FlowingData.