US Department of Justice Intensifies Legal Confrontation with Anthropic Over AI Military Access
The United States Department of Justice has filed a 40-page legal reply in Anthropic's ongoing lawsuit, arguing forcefully that the artificial intelligence startup's refusal to permit unrestricted military use of its Claude models is a straightforward commercial disagreement rather than a constitutional free speech issue. The filing, submitted on Tuesday, asserts that the Pentagon acted entirely within its legal authority when it severed ties with the company.
Government Positions Anthropic as Direct National Security Concern
In a significant escalation of rhetoric, the federal government has characterized Anthropic as an "unacceptable" and "substantial" national security risk. The Department of Justice has formally requested that the presiding federal judge reject the company's petition for a preliminary injunction, which would temporarily suspend the Pentagon's supply chain risk designation while litigation proceeds.
The government's core legal argument extends beyond conventional procurement principles. Department of Justice attorneys contend that because artificial intelligence systems remain "acutely vulnerable to manipulation," maintaining Anthropic's access to Department of Defense warfighting infrastructure presents serious operational dangers. Specifically, the filing warns that the company could "sabotage, maliciously introduce unwanted function, or otherwise subvert" its own AI models during active combat operations if Anthropic believed its internal ethical boundaries were being violated.
Pentagon Reveals Critical Dependence on Claude AI Systems
The government's filing discloses a crucial operational reality: Claude is currently the only AI model authorized for use on the Department of Defense's classified systems, and high-intensity combat operations are actively underway. As a result, the Department of Justice argues, the Pentagon cannot "simply flip a switch" and replace it immediately, which adds urgency to the government's position.
This framing represents a substantial strategic escalation, effectively repositioning Anthropic from merely an inflexible commercial vendor to a potential active threat within military contexts. The government suggests that Anthropic's continuing technical capacity to update and refine its own AI models renders the company inherently untrustworthy during wartime operations.
Contract Dispute Origins and Legal Proceedings
The legal confrontation originated from the collapse of Anthropic's $200 million contract with the Pentagon, following failed negotiations regarding usage terms. Anthropic had sought explicit contractual guarantees that its Claude AI would not be deployed for mass domestic surveillance or fully autonomous lethal weapons systems. The Pentagon countered that private companies should not dictate military tool utilization, demanding instead "all lawful use" access without restrictions.
When negotiations reached an impasse, Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk—a classification previously reserved primarily for foreign adversaries—effectively prohibiting the company from federal contracting opportunities. Anthropic initiated legal action on March 9, filing parallel cases in both the Northern District of California and the DC Circuit Court of Appeals.
The AI startup contends that the supply chain risk designation is unconstitutional retaliation for its safety policies, warning that more than 100 enterprise customers might abandon the company as a result, potentially costing it billions of dollars.
Government Counters Financial Claims and Reveals Alternative AI Plans
The Department of Justice has challenged Anthropic's financial concerns, describing the projected losses as "speculative" and arguing they could be addressed through standard contractual remedies rather than requiring emergency judicial intervention.
In a revealing disclosure, the Pentagon confirmed in its filing that it is actively pursuing alternative artificial intelligence solutions from Google, OpenAI, and xAI as potential replacements for Claude systems. According to the Department of Defense's chief digital and AI officer, engineering work on these alternative platforms has already commenced.
Upcoming Legal Deadlines and Proceedings
Anthropic faces a Friday deadline to submit its formal counter-response to the government's latest filing. The critical preliminary injunction hearing remains scheduled for March 24 in federal court in San Francisco, where both parties will present their arguments before the judge rules.
