Anthropic has filed a lawsuit to block the US Department of Defense from placing the artificial intelligence company on a national security blacklist, escalating a dispute with the administration of US President Donald Trump over limits on how its technology can be used.
In a lawsuit filed Monday in federal court in California, the company argued that the designation was unlawful and violated its constitutional rights, including protections for free speech and due process. Anthropic asked the court to reverse the designation and prevent federal agencies from enforcing it.
“These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic said in its filing.
The legal challenge follows a decision last week by the Pentagon to classify the company as a supply-chain risk, a move that restricts the use of Anthropic’s technology in defense-related projects.
The designation came after Anthropic refused to remove certain safeguards from its AI chatbot, Claude. Those safeguards prevent the system from being used for fully autonomous weapons or domestic mass surveillance.
US Defense Secretary Pete Hegseth authorized the designation after months of negotiations between the government and the company over the restrictions. The Trump administration has said it plans to phase out the use of Anthropic’s technology in Pentagon systems within six months.
The company is also challenging a separate directive from Trump instructing federal employees to stop using Claude.
The confrontation reflects a broader debate unfolding across the technology industry about how artificial intelligence should be deployed in military operations and surveillance systems.
Anthropic said its policies are intended to prevent high-risk uses of AI. The company has argued that current AI systems are not reliable enough to safely operate fully autonomous weapons and that deploying them in that role could be dangerous.
Pentagon officials have taken a different view. They have insisted that the government must retain the ability to use AI for “any lawful use”, arguing that restrictions imposed by a private company could interfere with national defense.
The supply-chain risk designation is typically used to prevent foreign adversaries from accessing critical systems. Anthropic’s lawsuit notes that this is the first known instance in which the US government has applied the designation to an American company.
The dispute has also drawn in other major players in the AI industry. Shortly after the Pentagon penalized Anthropic, its rival OpenAI announced a separate agreement to work with the Defense Department.
Anthropic filed two lawsuits on Monday — one in federal court in California and another in the federal appeals court in Washington, DC — challenging different elements of the government’s actions.
Despite the legal fight, the company said it remains open to further discussions with the government and does not want a prolonged confrontation with federal agencies.
The Pentagon declined to comment on the lawsuits, though a defense official said last week that negotiations between the department and Anthropic were no longer active.
The designation could have significant implications for Anthropic’s defense-related work, though the company’s chief executive, Dario Amodei, said last week that the order has “a narrow scope”.
According to Amodei, businesses and government agencies can still use Claude in projects unrelated to the Department of Defense.
That distinction matters for the privately held company, which expects the majority of its projected $14bn in revenue this year to come from commercial and government customers using Claude for tasks such as computer programming.
More than 500 organizations are already paying at least $1m annually for access to the system, according to a recent investment announcement that valued Anthropic at $380bn.