With input from Axios, The Hill, Business Insider, and Forbes.
Anthropic is facing fresh heat from Washington – and this time it’s not just about AI hype, but how safely the company is building its tools.
Rep. Josh Gottheimer, a New Jersey Democrat who’s been vocal on tech and national security, is demanding answers after a messy week for the AI firm. His concerns center on two things: a recent source code leak tied to Anthropic’s Claude Code tool, and a quieter shift in the company’s own safety commitments.
Put together, it’s raising eyebrows.
The code leak came first. Reports surfaced that more than 500,000 lines of internal code tied to Claude Code were accidentally exposed. Anthropic insists it wasn’t a hack – just human error during a release process – and says no sensitive customer data or credentials were compromised.
Still, the internet moved fast. Copies spread across GitHub and private channels before the company scrambled to contain them. Thousands of versions popped up, some briefly taken down, others still circulating. Within hours, developers were picking through the code, dissecting features, and even rebuilding versions of the tool in other programming languages.
For a company that markets safety as a core advantage, it wasn’t a great look.
Gottheimer didn’t wait long to respond. In a letter to CEO Dario Amodei, he questioned how something like this could happen – especially given that Claude has already been targeted by foreign actors.
“Given that we know Claude has been a repeated target of malign Chinese Communist Party actors… I don’t understand why Anthropic would risk walking back any of its security measures,” he wrote.
That last part points to the second issue.
Back in February, Anthropic quietly updated its internal safety policy. The company had previously committed to halting development of its AI models if they outpaced its ability to manage risks. That language is now gone. In its place, Anthropic says it will track progress against “nonbinding but publicly-declared” goals.
It’s a subtle shift, but in Washington, subtle shifts matter.
To critics, it sounds like moving from a hard stop to more of a self-assessment system. Less obligation, more discretion. For lawmakers already worried about how quickly AI is advancing, that’s not exactly reassuring.
Gottheimer is walking a careful line. He’s openly criticized the Trump administration’s decision to block Anthropic from government contracts, arguing it could slow the US in the global AI race. At the same time, he’s pressing the company to tighten its defenses, not loosen them.
The concern isn’t abstract.
Anthropic disclosed last year that Chinese-linked hackers had used its coding tools in a large-scale cyberattack with limited human input. More recently, the company flagged what it called “industrial-scale” efforts by China-based labs to extract and replicate its AI capabilities – a process known as distillation.
That’s where things get sensitive.
If rivals can reverse-engineer how Claude works – or worse, how it’s used in national security contexts – the implications go beyond business competition. Gottheimer’s warning is blunt: losing that edge could undermine US security advantages.
“We cannot allow the CCP to reverse engineer and exploit American AI,” he wrote.
The congressman is now asking Anthropic to spell out how it plans to prevent that from happening. That includes questions about future risks, how upcoming models might be misused, and whether the company expects attacks to scale as its technology improves.
Anthropic, for its part, is trying to contain the fallout.
The company says the leak was a simple mistake, not a systemic failure. It’s rolling out additional safeguards and tightening its release processes. Internally, executives have emphasized that the exposed code didn’t touch core AI models or sensitive data.
But outside the company, the reaction has been mixed.
Some developers see the leak as a rare window into how cutting-edge AI tools are built. Others see it as a sign of how fragile these systems can be when speed becomes the priority. In an industry racing to ship new models, even small errors can spiral quickly.
That tension – move fast or lock things down – is becoming harder to ignore.
There’s also a broader shift underway. As AI systems write more of their own code and handle increasingly complex tasks, fewer people fully understand how everything fits together. That makes debugging harder. It also makes mistakes – like accidental leaks – more difficult to predict.
For policymakers, that’s a problem.
Gottheimer’s letter is part of a growing push in Washington to get ahead of those risks before they turn into something bigger. AI tools are no longer just experimental – they’re being used in defense, intelligence, and critical infrastructure. That raises the stakes.
Anthropic isn’t the only company under scrutiny, but right now it’s in the spotlight.
A leaked codebase, a softened safety pledge, and rising geopolitical tensions – it’s the kind of combination that gets attention fast in Washington. And it’s unlikely to fade anytime soon.