Anduril Founder Palmer Luckey Advocates for Government Control Over AI in National Security

In a recent interview with the New York Post, Palmer Luckey, founder of defense technology company Anduril, articulated a clear stance on the contentious question of who should control the use of artificial intelligence in national security contexts. According to Fortune's report on the interview, Luckey emphasized that governments, rather than corporations, must be the primary decision-makers in how AI is deployed for defense purposes.

Luckey's Argument Against Corporate Control

Luckey argued that allowing tech executives to determine who they sell AI systems to risks undermining democratic principles. "We need to stick to a position that this is in the hands of the people," he stated. He further elaborated that any defense company operating beyond the directives of legislators and elected leaders effectively rejects the democratic experiment, advocating instead for what he termed a "corporatocracy."

He added, "In all cases, whoever the United States government tells me that I can and cannot sell to — to have any other position is to fall further into ... basically corporate executives having de facto control over US foreign policy." This perspective highlights a growing divide in Silicon Valley, where tech firms grapple with whether to retain the right to refuse government contracts on ethical grounds or to defer to elected officials.

Anthropic's Clash with the Pentagon

Luckey's comments come amid escalating tensions between AI company Anthropic and the US Department of Defense. Recently, Anthropic CEO Dario Amodei refused to allow the Pentagon unrestricted use of the company's AI systems for mass surveillance or fully autonomous weapons, prompting the department to label Anthropic a "supply-chain risk." This designation, typically reserved for foreign adversarial firms like Huawei, has sparked controversy.

Amodei asserted that Anthropic would challenge the move in court, insisting that the Pentagon's requests crossed ethical lines. "We cannot in good conscience accede to their request," he said in a press release. The dispute originated in Anthropic's refusal to lift safeguards and permit military use for "all lawful purposes," even though its AI model, Claude, is the only one running on the military's classified systems.

Background on Anthropic's AI Usage

For context, Anthropic's Claude was reportedly used in operations such as the capture of Venezuela's Nicolás Maduro, through a partnership with Palantir, and during recent strikes on Iran. The Pentagon issued an ultimatum to Anthropic last week before declaring it a supply-chain risk, a move Amodei called "retaliatory" and "punitive." He noted that former US President Donald Trump disliked the company for not offering "dictator-style praise." Anthropic remains in talks with the Department of Defense over the use of its AI models by the US military.

This ongoing debate marks a critical juncture for the tech industry, where ethical considerations clash with national security imperatives, and leaders like Luckey advocate a government-led approach to ensure democratic accountability.