A legal confrontation is brewing between artificial intelligence firm Anthropic and the United States Department of Defense after the Pentagon moved to block the company from certain military contracts. Legal analysts say the company may have a solid argument that the government stretched the limits of a little-known security law.
The dispute centers on a lawsuit filed by Anthropic after defense officials classified the company as a “supply chain risk,” effectively shutting it out of some Pentagon procurement opportunities. According to the company, the designation not only threatens billions in projected revenue but also damages its reputation across the rapidly expanding AI sector.
At the heart of the case is a rarely invoked federal provision, Section 3252 of Title 10 of the U.S. Code, designed to shield military technology systems from sabotage or infiltration by hostile actors. While the statute grants the defense secretary authority to exclude vendors from contracts if national security systems might be compromised, legal researchers note that the law has almost never been used and, until now, had not been applied against a U.S.-based company.
Anthropic argues the move crosses constitutional boundaries. In its complaint, the company claims the designation violates free speech protections and due-process rights, asserting that the action was motivated by disagreements over the company’s stance on artificial intelligence in warfare rather than genuine security concerns.
The company’s AI model, Claude, has already been used in limited military contexts, including operational support during strikes involving Iran. Yet tensions escalated after Anthropic refused to remove internal safeguards that prohibit its technology from being used in autonomous weapons or domestic surveillance.
Shortly afterward, Defense Secretary Pete Hegseth formally labeled the company a supply chain risk. Anthropic’s lawsuit says the order provided no detailed explanation of how the AI system threatened military networks, despite the Pentagon having previously expressed interest in working with the technology.
Under U.S. law, supply-chain restrictions are meant to prevent adversaries such as China, Russia, Iran, and North Korea from infiltrating federal information systems. Critics of the Pentagon’s decision argue that Anthropic’s usage policies — which focus on limiting potentially harmful applications of AI — do not resemble the type of sabotage risk the statute was built to address.
Legal observers also point to sharp public criticism of the company from President Donald Trump and defense officials, suggesting those statements could undercut the government’s case by implying the decision was politically motivated rather than grounded in security concerns.
Still, the government holds a powerful advantage in court: judges traditionally give wide latitude to national-security decisions made by the executive branch. Federal attorneys are expected to argue that the president and defense leadership have broad authority to decide which companies can supply sensitive military technology.
Anthropic’s legal strategy also invokes the Administrative Procedure Act, which allows courts to set aside agency actions that are arbitrary, capricious, or an abuse of discretion. The company contends the Pentagon’s position is internally inconsistent, praising its technology while simultaneously declaring it too risky for contracts.
If the courts take up the challenge, the case could become a landmark test of how far national-security powers extend when applied to private AI developers — and how the government navigates the uneasy intersection of emerging technology, military strategy, and constitutional rights.