Anthropic Challenges Pentagon's National Security Allegations in Federal Court
AI giant Anthropic has submitted sworn declarations in a California federal court, directly disputing the Pentagon's claim that the company poses an "unacceptable risk to national security." According to a report by TechCrunch, the filings, which accompany Anthropic's reply brief, assert that the government's case is built on technical misunderstandings and mischaracterizations of the company's position during negotiations. A hearing is now scheduled for March 24 before Judge Rita Lin in San Francisco.
Background of the Anthropic vs. Pentagon Dispute
The conflict between Anthropic and the Pentagon originated in late February, when US President Donald Trump and Defense Secretary Pete Hegseth announced they were severing ties with the company. The decision followed Anthropic's refusal to permit unrestricted military use of its AI technology. The Pentagon subsequently designated Anthropic a supply-chain risk, the first time such a designation has been applied to an American AI company.
Anthropic's Policy Chief Rejects Pentagon's Assertions
In her declaration, Sarah Heck, Anthropic's head of policy and a former White House official, denied the government's allegation that the company demanded approval over military operations. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote. Heck also noted that concerns about Anthropic disabling its technology mid-operation were never raised during talks and appeared for the first time in the Pentagon's court filings. She highlighted a March 4 email from Pentagon Under Secretary Emil Michael to Anthropic CEO Dario Amodei, in which Michael stated the two sides were "very close" on issues such as autonomous weapons and mass surveillance—the same issues the government later cited as evidence of a security threat.
Technical and Security Arguments Presented by Anthropic
Alongside Heck's declaration, Anthropic's head of public sector, Thiyagu Ramasamy, filed a declaration disputing claims that the company could interfere with military operations. He explained that once Anthropic's Claude AI is deployed within government-secured, "air-gapped" systems, the company has no access, no kill switch, and no backdoor. Any updates require Pentagon approval and installation. Ramasamy further argued that Anthropic's cleared personnel—vetted through U.S. government security processes—make it unique among AI firms operating in classified environments.
Legal and Constitutional Dimensions of the Case
Anthropic's lawsuit contends that the Pentagon's designation amounts to retaliation for the company's public stance on AI safety, violating its First Amendment rights. The government, in a 40-page filing earlier this week, rejected this framing, insisting the designation was a straightforward national security decision and not punishment for Anthropic's views.