Pentagon Threatens to Cut Ties with Anthropic Over AI Supply Chain Risk

Defense Secretary Pete Hegseth is reportedly furious with Anthropic, as the Pentagon nears a decision to cut business ties and label the AI company a "supply chain risk." According to an Axios report, this designation would require any firm doing business with the U.S. military to sever connections with Anthropic, a move typically reserved for foreign adversaries.

Implications of the Supply Chain Risk Designation

If designated, companies working with the Pentagon would have to certify that they do not use Claude, Anthropic's AI model, in their workflows. Given Anthropic's claim that eight of the top ten U.S. companies use Claude, the requirement could affect numerous businesses and disrupt AI integration across the defense sector.

Breakdown in Negotiations and Pentagon Frustration

Talks between Anthropic and the Pentagon have collapsed after months of disputes over terms for military use of Claude. CEO Dario Amodei's public concerns about AI risks reportedly angered Pentagon officials. A source indicated that senior defense leaders have long been frustrated with Anthropic, seizing this opportunity for a public confrontation.

Claude, the first and currently the only AI model accessible in U.S. military classified systems, is also a leader in business applications, and Pentagon officials have praised its capabilities despite the dispute. Its early integration into these networks underscores its strategic importance.

Official Statements from the Pentagon and Anthropic

Pentagon spokesman Sean Parnell stated, "The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people." A senior official added, "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

An Anthropic spokesperson countered, "We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right." The company reaffirmed its commitment to using frontier AI for national security, noting Claude's pioneering role in classified networks.

Another official highlighted legal gaps, stating, "There are laws against domestic mass surveillance, but they have not in any way caught up to what AI can do." They explained that AI can analyze public information at scale, such as social media data, which the military is legally allowed to collect but was previously limited by human capacity.

Broader Impact and Future Negotiations

Anthropic secured a two-year agreement with the Pentagon last year covering Claude Gov models and Claude for Enterprise prototypes. Analysts suggest the dispute could set a precedent for the Pentagon's talks with other AI firms such as OpenAI, Google, and xAI, whose models are not yet used in classified work. The Pentagon is negotiating with those companies for classified access and insisting on an "all lawful purposes" standard for both classified and unclassified uses.