OpenAI CEO Sam Altman Tells Staff: No Vote on US Military Operations

OpenAI CEO Sam Altman Delivers Blunt Message to Employees on Military Role

In a stark address to his team this week, OpenAI CEO Sam Altman made it unequivocally clear that employees do not have a say in United States military operations. "So maybe you think the Iran strike was good and the Venezuela invasion was bad," Altman stated during an all-hands meeting on Tuesday, according to a partial transcript reviewed by CNBC. "You don't get to weigh in on that." This directive came just days after OpenAI announced a controversial deal with the Pentagon to deploy its artificial intelligence models on classified networks.

Timing and Backlash Surround OpenAI's Pentagon Agreement

The announcement of OpenAI's partnership with the Department of Defense arrived under highly charged circumstances. It was issued late on a Friday evening, mere hours after rival firm Anthropic was formally blacklisted by the Pentagon as a "supply chain risk to national security" and shortly before US and Israeli strikes on Iran. Defense Secretary Pete Hegseth had applied that unprecedented designation to Anthropic after the company refused to remove guardrails against AI applications for mass domestic surveillance or fully autonomous weapons.

OpenAI stepped into the void almost immediately, securing its own classified deployment deal before the situation could stabilize. Altman later acknowledged the poor optics, admitting in a post on X over the weekend, "We shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication." During the all-hands meeting, he conceded it appeared "opportunistic and sloppy," as reported by the Wall Street Journal.

The backlash was swift and significant:

  • Some OpenAI employees publicly criticized the move.
  • Dozens had recently signed an open letter supporting Anthropic's ethical red lines.
  • The AI safety community expressed alarm over the implications.
  • Critics highlighted that OpenAI's contract, while including prohibitions on domestic surveillance and autonomous weapons in principle, ultimately defers to existing legal frameworks—frameworks that have been criticized for enabling programs like PRISM.

Altman Draws Clear Lines on Military Decision-Making

Despite the controversy, Altman outlined specific boundaries at the Tuesday meeting. According to a source familiar with the matter who spoke anonymously to CNBC, he informed staff that the Pentagon respects OpenAI's technical expertise and has agreed to let the company build and maintain its preferred safety protocols. Cleared OpenAI engineers will be embedded with government teams, and safety researchers will remain involved in the process.

However, Altman was equally firm that day-to-day military decisions are not within OpenAI's purview; those calls rest with Secretary Hegseth, not with Altman. He also addressed a competitive reality, noting, "I believe we will hopefully have the best models that will encourage the government to be willing to work with us, even if our safety stack annoys them. But there will be at least one other actor, which I assume will be xAI, which effectively will say 'We'll do whatever you want.'"

Expansion to NATO and Ongoing Developments

OpenAI is already looking beyond the Pentagon, with the Wall Street Journal reporting that Altman told staff the company is exploring a contract to deploy on all NATO classified networks. This move would position OpenAI as a foundational AI provider for the Western military alliance, a significant step beyond Apple's recent NATO clearance for consumer devices.

Meanwhile, reports indicate that Anthropic's Claude AI was used in recent military operations, including the Iran strikes over the weekend and the January capture of ousted Venezuelan leader Nicolás Maduro. This suggests the transition from Anthropic to OpenAI and xAI for classified applications is still evolving.

Altman has reiterated to the Pentagon that Anthropic should not be labeled a supply chain risk and that similar deal terms should be available to all AI companies. Whether this leads to resolution or legal confrontation remains uncertain as OpenAI doubles down on its defense sector involvement.