Trump Orders Federal Agencies to Halt Use of Anthropic AI Technology
In a dramatic escalation of tensions, US President Donald Trump on Friday issued a directive ordering every federal agency to immediately cease using technology from artificial intelligence startup Anthropic. This move intensifies a bitter standoff over the Pentagon's demand for unrestricted access to the company's advanced Claude AI model.
Six-Month Phase-Out and National Security Designation
President Trump gave federal agencies a six-month deadline to completely phase out Anthropic's tools from critical military and intelligence operations. Following this order, Defense Secretary Pete Hegseth took the unprecedented step of designating Anthropic as a "supply chain risk to national security." This label has historically been reserved for foreign adversaries such as China's Huawei, marking the first time it has been publicly applied to an American company.
The designation carries significant consequences, prohibiting any contractor, supplier, or partner doing business with the US military from engaging in commercial activity with Anthropic. This creates substantial barriers to the AI company's government-related operations.
Origins of the Crisis: Pentagon's Ultimatum and Anthropic's Refusal
The current crisis began earlier this week when Defense Secretary Hegseth presented Anthropic CEO Dario Amodei with a Friday deadline. The Pentagon demanded that Anthropic allow unrestricted use of its Claude AI model or face termination of the company's $200 million government contract.
Anthropic firmly rejected the ultimatum, maintaining that its AI tools should not be deployed for mass surveillance of American citizens or in fully autonomous weapons systems capable of lethal action without human oversight. While the Pentagon has publicly stated it has "no interest" in using AI for these purposes, Hegseth's own communications told a different story, explicitly demanding "full, unrestricted access" to Anthropic's models.
In a pointed statement, Hegseth accused Anthropic of placing "Silicon Valley ideology above American lives," highlighting the philosophical divide between the military establishment and the AI company's ethical framework.
Anthropic's Legal Challenge and Commercial Reassurances
Anthropic responded with a detailed statement announcing its intention to challenge the "supply chain risk" designation in court. The company declared, "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
The AI startup characterized the designation as both "unprecedented" and "legally unsound," emphasizing that this regulatory mechanism had never before been applied to an American company in such a manner.
Anthropic moved quickly to reassure its commercial customers, clarifying that the designation, if formally adopted, would affect only the use of Claude AI on Department of War contract work under specific legal provisions. The company stated that individual users and commercial API customers would remain "completely unaffected" by the regulatory action.
Legal Experts Question the Government's Approach
Legal authorities have expressed skepticism about the government's application of the supply chain risk designation. University of Minnesota law professor Alan Rozenshtein observed that the label "clearly was not designed for an American company that has a contract dispute with the government."
Former Trump AI adviser Dean Ball offered even stronger criticism, characterizing the government's action as "attempted corporate murder" that could have devastating consequences for Anthropic's business operations and reputation.
Industry Solidarity and Competing AI Deployments
In a rare display of industry unity, rival OpenAI CEO Sam Altman expressed solidarity with Anthropic's ethical "red lines" and indicated a desire to help de-escalate the conflict. Approximately 70 OpenAI employees and 175 Google staffers signed an open letter supporting Anthropic's stance, warning that the Pentagon appeared to be employing divide-and-conquer tactics against AI companies.
Hours after the 5:01 pm deadline passed, Altman announced that OpenAI had reached its own agreement with the Pentagon to deploy AI models on classified networks while maintaining safety guardrails. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman stated in a social media post.
The Pentagon is also preparing to move forward with Elon Musk's Grok AI on its classified systems, though current and former government officials reportedly consider it an inferior product compared to Anthropic's Claude model.
Potential Operational Consequences
The fallout from this conflict could have far-reaching implications for US intelligence and military operations. Anthropic's Claude AI was the first frontier AI model deployed on the US government's classified networks back in June 2024 and is actively used by both the CIA and NSA for sophisticated intelligence analysis.
Forcing Claude AI off government systems could disrupt ongoing intelligence operations and create significant gaps in analytical capabilities that competing AI systems may not immediately fill. The situation represents a critical test of how the US government balances national security imperatives with ethical considerations in artificial intelligence development and deployment.