Pentagon Escalates Standoff with Anthropic Over AI Military Use
US Military Pressures Anthropic Over AI Model Restrictions

The United States Department of Defense has sharply escalated its confrontation with artificial intelligence company Anthropic by reportedly contacting major defense contractors, including industry giants Boeing and Lockheed Martin. The Pentagon has asked these contractors to assess how they use Anthropic's Claude AI model within their operations and systems.

Ultimatum and Supply Chain Risk Designation

This development comes just hours before the Pentagon's deadline for Anthropic regarding the military application of its AI technology. According to a report by Axios, the Defense Department contacted two of the nation's largest defense and aerospace corporations, an action widely interpreted as a first step toward declaring Anthropic a supply chain risk.

Such a designation, typically reserved for companies like China's Huawei, carries far-reaching consequences, including restrictions on government contracts and heightened regulatory scrutiny. The move underscores the Pentagon's growing concern about its dependence on outside AI providers for sensitive military operations.

Responses from Defense Contractors

Boeing confirmed in official statements that it has no active contracts with Anthropic. A Boeing executive said, "We sought their partnership in the past but ultimately could not reach an agreement. They exhibited reluctance to engage with the defense industry."

Meanwhile, Lockheed Martin acknowledged that the Defense Department had asked it to analyze its exposure to and reliance on Anthropic's technology, an assessment that would precede any formal supply chain risk declaration. The report further indicates that the Pentagon plans to extend these inquiries to all major defense contractors, commonly known as "the primes," to build a comprehensive picture of their current use of Claude.

Significance for Anthropic and Military Operations

Anthropic's Claude is currently the only AI model operating within the United States military's classified systems. That privileged position means Claude has already been deployed in highly sensitive operations, including the mission to capture Venezuelan President Nicolás Maduro, facilitated through Anthropic's partnership with data analytics firm Palantir.

The Pentagon has reportedly been impressed by Claude's capabilities but has grown increasingly frustrated with Anthropic's steadfast refusal to remove the model's built-in ethical safeguards. Because of those safeguards, the military cannot use the model for "all lawful purposes"; instead, it must seek explicit approval from Anthropic for each individual use case.

Anthropic has remained firm on two specific and non-negotiable restrictions:

  • No use of Claude for the mass surveillance of American citizens.
  • No development of fully autonomous weapons systems capable of firing without direct human involvement.

Deadline and Potential Consequences

Tensions came to a head during a high-stakes meeting this week, when Defense Secretary Pete Hegseth presented Anthropic CEO Dario Amodei with an ultimatum: agree to the Pentagon's terms by 5:00 PM on Friday or face serious repercussions.

The Defense Secretary warned that the administration is prepared to take one of two drastic measures:

  1. Invoke the Defense Production Act, which would legally compel Anthropic to modify Claude AI to meet the military's specific operational requirements.
  2. Formally declare Anthropic a supply chain risk, a move that would severely impact the company's business and standing.

The standoff highlights the escalating conflict between national security imperatives and the ethical boundaries set by leading AI developers, and it may set a precedent for future government-corporate relations in the technology sector.