Here’s a familiar script: a tech company builds a new AI so powerful it might break the world. Too risky to release. Too dangerous to trust. But don’t worry – they’ve got it under control. For now.
That’s the message from Anthropic about its latest model, Claude Mythos. The company says it can uncover cybersecurity flaws at a level beyond human experts, with potential fallout for economies and national security if it lands in the wrong hands. The implication hangs in the air: this thing is big. Possibly too big.
Skeptics aren’t buying it wholesale. And even if Mythos is impressive, the tone isn’t new. AI leaders have been warning for years that their creations could spiral into catastrophe. Strange pitch, though. Most companies don’t market their products by hinting they could destroy civilisation.
So why do it?
One explanation: fear works. It grabs attention, shapes the narrative, and – critics argue – redirects scrutiny. While the public worries about hypothetical doomsday scenarios, more immediate concerns get less airtime: environmental costs, labour practices, misinformation, mental health risks.
There’s also a power angle. Frame AI as something vast and uncontrollable, and suddenly only a handful of companies seem capable of managing it. That can sideline regulators and concentrate influence.
“If you make it sound almost supernatural,” says ethicist Shannon Vallor, “people feel outmatched – and start looking to the companies themselves for protection.”
This pattern goes back years. In 2019, OpenAI held back GPT-2 over fears of misuse, only to release it later once those fears proved overstated. Even Sam Altman has leaned into apocalyptic rhetoric at times, warning of existential risks even as he pushes AI forward at speed.
Anthropic’s Mythos follows a similar arc: bold claims, limited access, and a warning label. The company says it’s working with dozens of partners to patch vulnerabilities before bad actors can exploit them. But some experts point to what’s missing – like false-positive rates, a standard yardstick for how useful a security tool actually is.
“There are gaps in the story,” says AI researcher Heidy Khlaaf. The tech might be strong, she adds, but the sweeping claims aren’t fully backed up.
Behind all this sits a simple reality: these are companies chasing growth. OpenAI started as a nonprofit. Anthropic spun out over safety concerns. Now both are racing toward massive valuations and deeper market dominance. Safety still matters – but so do incentives.
And while the industry talks about extinction-level risks, other issues keep stacking up. AI tools are already reshaping jobs, influencing behaviour, and raising fresh legal and ethical questions. Some systems have been linked to harmful outcomes, including mental health crises. Data centres are driving up emissions. Deepfakes are getting harder to spot.
The apocalypse narrative can make those problems feel smaller by comparison.
Still, the warnings aren’t entirely empty. AI is advancing fast – faster than regulation, faster than public understanding. Companies themselves admit they don’t fully know where it leads. That uncertainty is real.
But framing the future as a choice between utopia and collapse misses something important: these systems aren’t forces of nature. They’re built, sold, and controlled by people.
And people can set rules.
The question isn’t whether AI is powerful. It is. The question is who gets to shape it – and whether fear is helping that conversation, or quietly steering it.