A legal fight between Anthropic and the Trump administration is heading to court, with broader implications for how far the US government can push private AI companies on military use.
The case opens Tuesday in San Francisco, where Anthropic is asking a federal judge to block a Defense Department ban imposed after the company refused to remove safety guardrails from its Claude AI model. Those guardrails are designed to prevent the model's use in fully autonomous weapons and mass domestic surveillance.
At the centre of the dispute is a March 3 decision by Defense Secretary Pete Hegseth to designate Anthropic a national security supply chain risk. The designation effectively bars the Pentagon and its contractors from using the company's technology.
Anthropic moved quickly to challenge the decision, filing a lawsuit on March 9 and calling the designation “unprecedented and unlawful”. The company argues the move violates constitutional protections, including free speech and due process, and bypasses required procedures for such determinations.
The hearing will be overseen by US District Judge Rita Lin.
Supporters of Anthropic’s position frame the case as a test of whether the government can penalise companies for setting limits on how their technology is used. “AI-powered surveillance poses immense dangers to our democracy. Anthropic’s public advocacy for AI guardrails is laudable and protected by the First Amendment — not something the Pentagon should be punishing,” said Patrick Toomey of the American Civil Liberties Union.
The administration, however, rejects the idea that the decision was retaliatory. In court filings, the White House argues the dispute is rooted in national security concerns and contract considerations, not the company’s public stance on AI safety.
“Anthropic is not likely to succeed on the merits. Anthropic is not likely to succeed in showing that the Presidential Directive, the Secretary’s social media post, and the Secretarial Determination were retaliation for Anthropic’s expressions about the safety of its model and the responsible use of AI,” the filing said.
“The record reflects that the President and the Secretary were motivated by concerns about Anthropic’s potential future conduct if it retained access to the Government’s IT infrastructure. Those concerns are unrelated to Anthropic’s speech, and no one has purported to restrict Anthropic’s expressive activity,” it added.
Still, criticism of the Pentagon’s move has been building. Democratic Senator Elizabeth Warren has warned that the government may be pressuring companies to loosen safeguards around surveillance and autonomous weapons.
“I am particularly concerned that DoD [the US Department of Defense] is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards,” she said.
Legal analysts are also scrutinising how the designation was made. Some point to a February 27 social media post in which Hegseth announced the move, arguing it may have sidestepped formal procedures required under the relevant statute.
“That [the X post] went far beyond what the law allows him to say. He also said the Pentagon hadn’t done any of the things required before declaring a supply chain risk under the statute,” said Charlie Bullock of the Institute for Law & AI.
“That was clearly illegal, and now the government, in its filings, is admitting that and instead saying everyone should have ignored it and that the real supply chain designation came several days later.”
The immediate question before the court is whether to grant a preliminary injunction that would pause the ban. But the underlying issue runs deeper: whether the government can effectively sideline a US tech company for refusing to adapt its products to military demands.