With reporting from The Verge, Business Insider, Bloomberg, and the Financial Times.
A powerful AI model wasn’t supposed to get out. Now it has – and the backlash is getting loud.
Anthropic’s Claude Mythos, a system the company itself flagged as too risky for public release, has reportedly ended up in the hands of unauthorized users. That’s raising uncomfortable questions about whether even tightly controlled AI can stay contained once it exists.
The model was never meant for broad access. Anthropic kept it behind closed doors, citing serious cybersecurity capabilities – the kind that can identify vulnerabilities with unusual precision. Instead of a public rollout, the company built a gated program, Project Glasswing, giving just 11 organizations a look. Think major players: Google, Microsoft, Amazon Web Services, Nvidia, JPMorgan.
Carefully controlled. At least, that was the idea.
Now the narrative has shifted from caution to containment failure.
OpenAI CEO Sam Altman isn’t holding back. In a recent podcast appearance, he accused Anthropic of leaning into fear to sell access – painting a picture of a company warning the world about a dangerous tool, then offering limited protection to a select few.
He likened it to building a bomb and then selling the shelter.
Altman didn’t name Anthropic outright in every remark, but the target was clear. He argued that some players in tech prefer keeping advanced AI in the hands of a small circle, using safety concerns – some valid, some overstated – to justify tighter control.
There’s a deeper fight here. Not just over one model, but over how AI should be released.
Altman is pushing for broader access, even if it means managing risk in real time. He says dangerous models are inevitable, and the goal should be to roll them out responsibly, not lock them away entirely. More people involved, more transparency – that’s his pitch.
Anthropic, led by Dario Amodei, has taken the opposite stance with Mythos. Keep it limited. Study it. Control who touches it. The company has pointed to real concerns: a system that can map out cyber weaknesses at scale could be misused quickly if released widely.
That argument gets harder to defend when the model leaks anyway.
The situation also pours fuel on an already tense rivalry. Amodei, a former OpenAI executive, built Anthropic into one of OpenAI's biggest challengers. The two camps have been circling each other for years. Now the gloves are off.
Altman even hinted that the broader “AI doom” rhetoric – often associated with more cautious labs – may be doing more harm than good, feeding fear and, in some cases, escalating real-world tensions.
Meanwhile, the bigger issue looms: control versus access.
If cutting-edge AI can slip out despite tight restrictions, the idea of fully containing it starts to look shaky. And if companies lean too hard on exclusivity, they risk backlash from both competitors and the public.
One thing is clear – the Mythos episode isn’t just a security hiccup. It’s a preview of a much larger fight over who gets to build, use, and control the most powerful technology on the planet.