A new study reveals a concerning trend: a significant number of American teenagers are being misled by fake content generated by artificial intelligence, CNN reports.
The research, published by Common Sense Media, a non-profit advocacy group, surveyed 1,000 teenagers between the ages of 13 and 18 about their experiences with AI-generated media.
The study found that 35% of teenagers reported being deceived by fake photos, videos, or other online content produced by generative AI tools. A larger portion, 41%, reported encountering real content that was presented in a misleading way. Alarmingly, 22% admitted to sharing information online that later turned out to be false.
These findings come as the adoption of artificial intelligence by teenagers is rapidly increasing. A previous Common Sense Media study from September showed that 70% of teenagers have at least tried using generative AI tools. The rise of AI has created an environment where it’s becoming increasingly difficult to discern fact from fiction online.
The AI landscape has become increasingly crowded, as the recent market entry of DeepSeek illustrates, yet even the most sophisticated AI models are not immune to “hallucinations,” according to a July 2024 study from Cornell, the University of Washington, and the University of Waterloo. In other words, even the top AI platforms can generate false information seemingly out of thin air.
Adding to concerns, teenagers who reported encountering fake online content were also more likely to believe that AI will make it even harder to verify information on the internet.
The study also delved into teenagers’ attitudes towards major technology corporations. Nearly half of the teenagers surveyed said they do not trust “Big Tech” companies like Google, Apple, Meta, TikTok, and Microsoft to make responsible decisions about their use of AI. This distrust mirrors a growing dissatisfaction with major tech companies among adults in the United States, who also grapple with the increasing prevalence of misleading and fake content online.
The study pointed to the erosion of digital safeguards as a contributing factor. Elon Musk’s 2022 acquisition of Twitter, since renamed X, has been cited as a prime example: moderation teams have been significantly reduced since the takeover, allowing misinformation and hate speech to spread, and previously banned accounts of conspiracy theorists have been reinstated. Meta has also recently moved to replace third-party fact-checkers with Community Notes, a change that CEO Mark Zuckerberg has acknowledged will likely mean more harmful content appears across Facebook, Instagram, and its other platforms.