OpenAI CEO Sam Altman Admits Pentagon Deal Was 'Rushed' But Defends It as Necessary
OpenAI CEO Sam Altman has publicly acknowledged that the company's decision to enter into a deal with the Pentagon was "rushed," but he insists it was a necessary move to de-escalate growing tensions between the U.S. military and rival AI firm Anthropic. The agreement, announced last week, allows OpenAI's models to be used on classified military networks; it came just hours after Anthropic rejected a similar deal and was labeled a "supply-chain risk" by the Trump administration.
Altman's Defense and Broader Implications
During an AMA session on X (formerly Twitter), Altman admitted that the optics of the deal "don't look good," but he believes OpenAI acted swiftly to prevent a broader confrontation that could have harmed the entire AI industry. "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses... If not, we will continue to be characterized as rushed and uncareful," he stated. Altman emphasized that a constructive relationship between the government and AI companies will be critical over the next couple of years, and he criticized the Pentagon's designation of Anthropic as a supply-chain risk, calling it "a very bad decision" that he hopes will be reversed.
Employee and Public Backlash
The deal between ChatGPT maker OpenAI and the Pentagon has sparked significant backlash among OpenAI employees. Many had signed a letter supporting Anthropic's refusal to accept the Pentagon's terms, and protesters chalked messages outside OpenAI's San Francisco offices condemning the move; Anthropic's headquarters, by contrast, were marked with messages praising its stance. OpenAI staffer Leo Gao publicly questioned whether the contract provided real safeguards, criticizing its "all lawful purposes" clause as little more than "window dressing."
Safeguards and Legal Language Under Scrutiny
OpenAI has stated that its contract binds the Pentagon to existing U.S. laws and Department of War policies, which limit the surveillance of citizens and regulate autonomous weapons. Katrina Mulligan, OpenAI's head of national security partnerships, argued that codifying these laws in the contract provides stronger protections than policy alone. The company has also promised technical safeguards, including classifiers that block prompts violating its red lines and fine-tuning to make its models resist unsafe instructions.
However, legal experts have warned that Pentagon policies can be changed at will, raising doubts about how durable these safeguards really are. Critics have also questioned how OpenAI defines "mass surveillance," noting that U.S. intelligence agencies already purchase commercially available datasets, such as cell phone location data, that could be used to monitor citizens at scale. Mulligan acknowledged that while the contract prohibits mass domestic surveillance, OpenAI cannot prevent agencies from buying such data independently.
Clash of Philosophies Between AI Giants
OpenAI executives have argued that their layered safeguards, which combine technical systems, deployment limits, and expert oversight, are more robust than Anthropic's reliance on contractual language. Boaz Barak, an OpenAI researcher, said Anthropic had "unrealistic expectations" about what contract terms could achieve. Altman raised a broader question: who should decide how AI is used? He said he is "terrified of a world where AI companies act like they have more power than the government," but equally fearful of a government that normalizes mass surveillance.