US Military Deployed Anthropic AI in Iran Operation Despite Presidential Ban
The United States military reportedly used artificial intelligence tools developed by Anthropic in a major strike operation against Iran, just hours after President Donald Trump issued a directive ordering all federal agencies to immediately stop using that same technology. The revelation, detailed in a Wall Street Journal report, exposes a stark contradiction at the highest levels of U.S. national security command.
Claude AI Deeply Integrated in Military Operations
According to sources familiar with the matter, U.S. Central Command and other military commands worldwide have been using Anthropic's Claude AI tool for a range of critical functions, including intelligence assessments, target identification, and simulations of potential battle scenarios. The integration continued even as tensions between the AI firm and U.S. government officials escalated over the appropriate military applications of the technology.
The strike against Iran, which involved air operations, was executed using systems supported by the Claude AI platform, starkly illustrating how deeply artificial intelligence had already been woven into military planning and real-time execution. Military officials have not publicly disclosed the full extent of Claude's involvement in the operational chain of command.
Trump Declares Anthropic a National Security Threat
The backdrop to this event was a formal declaration by President Trump on Friday, February 27. Amid an ongoing dispute over military access to Anthropic's technology, the President directed all federal agencies to halt their use of Anthropic's AI tools and officially labeled the company a national security risk.
The presidential move was the culmination of weeks of intense disagreement between Anthropic's leadership and Pentagon officials. The defense establishment had pushed for broader, less restricted deployment of Claude across military applications, but Anthropic refused to agree to certain terms and conditions proposed by the Pentagon, setting the stage for the confrontation.
Roots of the Broader AI Ethics Feud
The core dispute stems from Anthropic's refusal to permit unrestricted military use of its AI models. The company has cited ethical concerns, particularly over applications involving fully autonomous weapon systems or capabilities enabling mass surveillance. The Pentagon had reportedly presented Anthropic with an ultimatum: agree to its terms or face severe consequences, including potential removal from lucrative defense contracts.
In response, Anthropic has stated its intention to legally challenge the government's designation of the company as a supply chain risk. The firm argues that its internal ethical safeguards and usage policies are not only reasonable but necessary for responsible AI development.
As the Department of Defense begins a transition plan to phase out its reliance on Claude over the coming months, officials are simultaneously evaluating alternative AI providers, including OpenAI, to meet the military's ongoing and future artificial intelligence requirements for defense and strategic planning.
Implications for AI in National Security
The episode brings into sharp focus the escalating global tensions surrounding the ethical deployment of artificial intelligence in national security and military operations. It raises critical, unanswered questions about governance, oversight, and the future strategic role of AI in defense planning, autonomous systems, and international warfare doctrines. The standoff between the U.S. government and Anthropic serves as a pivotal case study in the intersection of cutting-edge technology, corporate ethics, and state power.