Palantir CEO Defends Pentagon's AI Use, Anthropic Conflict Escalates to Court

Palantir CEO Addresses AI Surveillance Concerns Amid Pentagon-Anthropic Dispute

In a recent interview with Fortune, Palantir CEO Alex Karp delivered a pointed message to Anthropic and to supporters of its CEO, Dario Amodei: the U.S. Department of Defense is not using artificial intelligence for domestic mass surveillance of American citizens. To his knowledge, Karp said, the Pentagon has no such plans, and he sought to dispel misconceptions surrounding military AI applications.

Palantir's Role as Key Defense Software Provider

Palantir, the Denver-based data analytics and AI software company, is one of America's foremost defense software firms and a primary software provider for the Department of Defense. Its technology stack is the main conduit through which the Pentagon accesses Anthropic's large language model, Claude. Speaking at Palantir's biannual AIP conference, Karp said, "We are legitimately still in the middle of all this. It's our stack that runs the LLMs." Under a partnership established in 2024, Anthropic's AI is offered to the DoD via Palantir, and Anthropic also works directly with the Pentagon to develop a tailored version of its technology for defense purposes.

Karp sided with the Pentagon in its ongoing conflict with Anthropic, saying, "Without commenting on internal dialogs, there was never a sense that these products would be used domestically. The Department of War is not planning to use these products domestically. That's a completely different kettle of fish... The terms the Department of War wants are completely focused on non-American citizens in a war context."

Anthropic vs. Pentagon: Escalating Tensions and Legal Battle

For over a year, Anthropic's Claude has been the preferred AI model of the U.S. government, reportedly the first frontier system cleared for classified use, with applications including intelligence operations such as the capture of Venezuelan President Nicolás Maduro in January. In recent weeks, however, relations deteriorated, culminating in the Trump administration designating Anthropic a supply-chain risk to national security on February 27, an unprecedented move against an American company.

The dispute originated from guardrails Anthropic sought to impose on the military's use of Claude, the sole AI model authorized on classified networks. The company demanded assurances that Claude would not be employed for mass surveillance of U.S. citizens or in lethal autonomous weapons, while the Pentagon insisted on availability for "all lawful use." In a blog post, Anthropic justified its stance: "We held to our exceptions for two reasons. First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Anthropic's Apology and Subsequent Lawsuit

Following the designation, Anthropic apologized over a leaked internal memo that criticized OpenAI and its CEO for accepting a Pentagon deal shortly after Anthropic's refusal. The company clarified that it had not leaked the memo and expressed regret for its tone, calling it an outdated assessment. Despite the apology, Anthropic escalated the conflict on March 9 by filing a lawsuit against the U.S. government in the U.S. District Court for the Northern District of California.

The 48-page lawsuit challenges the Trump administration's actions as "unprecedented and unlawful," arguing that the Constitution bars the government from punishing a company for protected speech. The filing asserts, "No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation." The suit marks a major escalation in the fight over AI ethics and national security, underscoring the growing tension between tech firms and government agencies over how artificial intelligence should be deployed in defense contexts.