Anthropic Becomes First U.S. Firm to Receive “Supply Chain Risk” Designation After Pentagon Clash

Anthropic has become the first American company to receive a “supply chain risk to national security” designation from the U.S. Department of Defense, following a high-profile clash over AI safety safeguards.

The move comes after Anthropic refused a Pentagon request to remove protections that block its AI systems from being used for domestic surveillance and automated weapon systems. In the wake of that refusal, the DoD issued the supply chain risk label—an unprecedented step for a U.S.-based AI company and one that immediately raised alarms across the tech and national security communities.

Anthropic CEO Dario Amodei confirmed the designation in a statement and said the company plans to challenge the decision in court, arguing that the action is “not legally sound.” At the same time, Anthropic emphasized that the practical impact on its broader business should be limited.

According to Amodei, the government’s letter is narrow in scope because the underlying statute—10 U.S.C. § 3252—is narrowly written. He said the purpose of the law is to protect government supply chains rather than punish suppliers, and it requires the government to use the least restrictive approach needed to address supply chain concerns. In other words, even for defense contractors, the designation does not—and cannot—restrict the use of Anthropic’s Claude AI or business dealings with Anthropic when those activities are unrelated to specific Defense Department contracts.

That interpretation has been echoed by major partners, including Microsoft, which has indicated that non-defense projects using Anthropic technology are expected to remain unaffected. While the controversy has triggered significant public fallout, including a six-month, government-wide phaseout ordered by the president, Anthropic says it will continue supporting the military at a nominal cost during the transition.

The designation has also sparked intense backlash. Former intelligence officials, technology trade groups, and bipartisan U.S. lawmakers have criticized the decision, warning that penalizing an American AI company over ethical safeguards could set a dangerous precedent—one that may discourage responsible AI development and weaken the country’s technology leadership at a critical moment for national security and global competitiveness.

As Anthropic prepares its legal challenge, the dispute is shaping up to be a defining test of how far federal agencies can go in pressuring AI providers over model safeguards—and how the U.S. will balance defense interests with the push for ethical, responsible artificial intelligence.