Pentagon CTO Reveals Why Anthropic's Claude AI Is a National Security Risk


In a groundbreaking interview, the Defense Department's chief technology officer, Emil Michael, has publicly outlined for the first time the reasons behind the designation of Anthropic's Claude AI models as a national security risk. Speaking on CNBC's Squawk Box, Michael emphasized that the AI's embedded policy preferences pose a significant threat to military operations.

Policy Preferences That Could 'Pollute' the Supply Chain

Michael explained that Claude's policy preferences, which are ingrained in its constitution during training, could "pollute" the Pentagon's supply chain, potentially leaving warfighters with ineffective weapons, body armor, and protection systems. "We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection," Michael stated.

Designation Not Meant to Punish Anthropic

Michael further clarified that labeling Anthropic as a national security risk was not intended as a punitive measure, noting that the company maintains a large commercial business, with only a small fraction of its revenue derived from U.S. government contracts. He also dismissed as rumor the reports that the Pentagon has actively discouraged companies from using Anthropic's models outside of defense supply chains.


Anthropic's Unprecedented Designation and Legal Response

Anthropic recently became the first American company to be classified as a supply chain risk, a designation typically reserved for foreign adversaries. This move mandates that defense contractors and vendors certify they are not utilizing Claude in Pentagon-related work. In response, Anthropic has filed a lawsuit against the Trump administration, calling the designation unprecedented and unlawful and warning that hundreds of millions of dollars in contracts are at stake.

Breakdown in Negotiations Over AI Limits

The conflict escalated when the Pentagon asked Anthropic to remove hard limits on deploying its AI for fully autonomous weapons and for domestic surveillance of American citizens. Anthropic refused, arguing that current AI models are not reliable enough for autonomous weapons and that such use would be dangerous. The company also condemned domestic surveillance as a violation of fundamental rights.

Following the collapse of these negotiations, Defense Secretary Pete Hegseth formally designated Anthropic a national security supply-chain risk. President Donald Trump then directed the government to cease all collaboration with Anthropic, announcing a six-month phase-out for existing contracts.

Pentagon's Stance on AI Flexibility and Safety

The Defense Department has maintained a firm position, asserting that U.S. law—not a private company—should dictate how America defends itself. The military requires full flexibility to use AI for any lawful purpose. The Pentagon has warned that Anthropic's self-imposed restrictions could endanger American lives, underscoring the need for unrestricted AI deployment in defense scenarios.

Transition Plan for Replacing AI Systems

Michael acknowledged that the Pentagon cannot simply rip out Anthropic's technology overnight. A comprehensive transition plan is in place to manage the complexity of replacing AI systems integrated into defense operations. "This is not just Outlook where you could delete it from your desktop," he remarked, highlighting the intricate nature of the process.
