Anthropic Takes On the U.S. Government Over Pentagon ‘Supply-Chain Risk’ Designation

Anthropic PBC is taking its fight with the U.S. government to court after the Pentagon labeled the AI company a national security supply-chain risk. The lawsuit, filed on March 9 in federal court in California, challenges the Defense Department’s designation as unlawful.

In its court filing, the San Francisco-based AI startup is asking a judge to block federal agencies from enforcing any directives tied to the supply-chain risk label. The case marks a major escalation in a growing dispute over how Anthropic’s AI technology can be used—particularly when it comes to military applications.

At the center of the conflict is Anthropic’s stance on restrictions around defense and military use of its tools, including its Claude chatbot. The company has reportedly refused to lift certain limits on how its AI can be deployed in military contexts, a position that has now collided with Pentagon procurement and security policies.

The fallout has been immediate. Since the designation, the U.S. Department of Defense has shifted its AI work away from Anthropic and Claude, signaling that the company’s relationship with federal agencies could suffer lasting damage unless the dispute is resolved.

The lawsuit now puts a spotlight on the growing tension between AI safety rules set by private companies and the government’s push to integrate artificial intelligence into national defense. As federal agencies lean more heavily on AI for analysis, logistics, and decision-support systems, classifications like “supply-chain risk” can dramatically reshape which vendors are considered eligible—and which are effectively sidelined.

For Anthropic, the legal challenge is about more than reversing a label. It’s about protecting the company’s ability to operate in government-adjacent markets without being constrained by a designation that can ripple across contracts, partnerships, and public trust. The court’s ruling could set an important precedent for how AI companies contest federal security assessments—and how much control they retain over the permitted uses of their own technology.