Former Pentagon Official Warns of US Military Disadvantage if Anthropic AI Access is Cut
A former Pentagon official has issued a stark warning that severing the US military's access to artificial intelligence tools developed by Anthropic could significantly undermine America's strategic military advantage over China. This caution comes as artificial intelligence becomes increasingly critical to modern defense operations, particularly in high-stakes environments.
Pentagon Designates Anthropic as National Security Risk
The US Department of Defense has recently restricted Anthropic's involvement in military projects due to an ongoing dispute concerning the implementation and safeguards of its AI systems. In a significant move, the Pentagon has formally labeled Anthropic, the creator of the Claude AI platform, as a "national security risk"—marking the first time a domestic US company has received this designation.
Michael Horowitz, a former Pentagon official who served in defense policy roles, emphasized that operational experience with advanced technologies represents a crucial differentiator between the US military and potential adversaries like China. "One of the biggest differences between the United States military and China's military is America's extensive operational experience. This just adds to the ledger," Horowitz explained in an interview with Axios.
Potential Impact on Military Strategy and Operations
Horowitz expressed concern that the unresolved conflict between Anthropic and Pentagon leadership could obstruct American access to cutting-edge AI technology, potentially affecting military planning and tactical execution. He further cautioned that prohibiting Anthropic's AI tools might erode the strategic value the US military derives from hands-on experience with these technologies in actual combat scenarios.
This warning follows reports detailing how Anthropic's AI systems have been deployed in two distinct military contexts:
- Operations connected to Nicolás Maduro's government in Venezuela
- Military activities involving Iran
These real-world deployments provided the US military with invaluable operational experience implementing AI under genuine combat conditions, creating what experts describe as a technological feedback loop that enhances both system development and tactical understanding.
Core Dispute: Implementation Restrictions and Safety Concerns
The fundamental disagreement between Anthropic and the Pentagon centers on when and how the company's AI tools would be integrated into military operations. Defense Department officials maintain that the proposed restrictions could impede critical military functions and potentially endanger personnel safety, arguing that operational delays and reduced system functionality might place troops at unnecessary risk.
During the American Dynamism Summit in Washington this week, Pentagon Chief Technology Officer Emil Michael revealed his concerns about existing AI contracts: "As I started to look at the contracts that had been written during the last administration for the use of AI, I had a whole 'holy cow' moment. [There were] dozens of restrictions, and yet these AI models were baked into some of the most sensitive and important places in the US military, where we do exercise combat power."
Irreversible Decision and Alternative Perspectives
Pentagon leadership has characterized the decision to blacklist Anthropic as "final," with officials stating that the company's relationships with government and military entities are "permanently altered." The Department of Defense has historically utilized AI, automation, and autonomous systems for diverse functions including:
- Intelligence analysis and pattern recognition
- Image and signal processing
- Drone warfare and unmanned systems
- Administrative decision support systems
Even with this history of use, military experts note that simulated testing cannot fully replicate the value of deploying technology in actual combat environments.
However, some analysts suggest that losing access to Claude AI might not directly translate to a diminished AI advantage. Steven Feldstein of the Carnegie Endowment for International Peace offered a contrasting perspective: "While ripping out Anthropic and replacing it with a comparable model ... brings some disruption, I think the technology is enough in its infancy that putting in place alternative systems will be sufficient to support DOD's overall military AI objectives."
Broader Context of US-China Technological Competition
This development occurs against the backdrop of escalating technological competition between the United States and China, with both nations investing heavily in artificial intelligence for defense applications. The warning from former Pentagon officials highlights how internal policy disputes might inadvertently affect this global technological race, particularly as AI becomes more integrated into intelligence gathering, decision-making processes, and weapons systems.
The ongoing tensions between the US government and Anthropic primarily revolve around safeguard mechanisms that would limit the company's AI applications in surveillance operations and autonomous weapons platforms. These restrictions reflect broader ethical debates within the defense technology community about appropriate boundaries for artificial intelligence in military contexts.