The Trump administration is drawing a firm line in its legal fight with AI company Anthropic, arguing that the Pentagon’s decision to blacklist the firm is about national security and contracts — not censorship.
In a court filing submitted on Tuesday, US officials rejected Anthropic’s claim that the move violated its First Amendment rights. Instead, the administration framed the dispute as a breakdown in negotiations tied to how the company’s technology could be used, particularly in military and surveillance contexts.
At the centre of the case is a decision made earlier this month, when Defense Secretary Pete Hegseth designated Anthropic — the developer of the AI assistant Claude — as a national security supply chain risk. The move followed the company’s refusal to remove safeguards that prevent its systems from being used in autonomous weapons or domestic surveillance.
The administration’s legal argument is direct: the issue is conduct, not speech.
“It was only when Anthropic refused to release the restrictions on the use of its products — which refusal is conduct, not protected speech — that the President directed all federal agencies to terminate their business relationships with Anthropic,” the filing said.
It also emphasised that “no one has purported to restrict Anthropic’s expressive activity”.
That distinction matters. By framing the case around procurement decisions rather than expression, the government is attempting to move the dispute out of constitutional territory and into the more flexible space of national security and federal contracting.
Anthropic is pushing back on that framing. In its lawsuit, filed in California, the company argues that the designation is both “unprecedented and unlawful”, violating its rights to free speech and due process. It is seeking to block the Pentagon’s decision while the case proceeds.
In a statement responding to the latest filing, Anthropic signalled it would continue the legal fight.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”
The dispute reflects a deeper tension in how AI companies position themselves — and how governments expect them to operate. Anthropic has taken a clear stance against the use of its technology in autonomous weapons and domestic surveillance, arguing that current systems are not safe enough for those applications.
The administration sees that position differently. Officials have accused the company of undermining national security by limiting how its tools can be deployed, particularly in defence settings.
Beyond the legal arguments, the stakes are also financial and reputational. While the current designation applies to a limited set of military contracts, Anthropic executives have warned it could trigger broader consequences, including significant losses and damage to the company's standing with partners.