OpenAI CEO Sam Altman Criticizes Anthropic's Stance in Government AI Dispute
In an internal memo, OpenAI CEO Sam Altman reportedly told employees that rival AI company Anthropic was "overreacting" in its ongoing dispute with the US Department of War. The memo was sent just hours before OpenAI finalized a significant agreement with the Pentagon to deploy its artificial intelligence models on classified government networks.
Details of the Internal Memo and Pentagon Agreement
According to documents seen by the Wall Street Journal, Altman's message said OpenAI was working with the US Department of War to determine whether its AI models could operate in classified environments while maintaining established safety protocols. That same issue had previously stalled Anthropic's engagement with government agencies, creating a broader industry standoff over military applications of artificial intelligence.
The timing of this memo proved particularly significant as it preceded OpenAI's official announcement of its Pentagon partnership. This development occurred amidst heightened tensions following President Donald Trump's directive for federal agencies to cease using Anthropic's technology, with Defense Secretary Pete Hegseth labeling the company a "supply-chain risk to national security."
Altman's Specific Statements and Strategic Approach
In his detailed communication to staff, Altman emphasized that OpenAI was pursuing a contract "that allows our models to be deployed in classified environments and that fits with our principles." He specifically noted that the company would request the agreement to exclude any applications that are unlawful or unsuitable for cloud deployments, explicitly mentioning domestic surveillance and autonomous offensive weapons as prohibited uses.
Altman expressed his intention to help de-escalate tensions between the government and AI companies, hoping to prevent outcomes that might establish challenging precedents for the entire industry. "We would like to try to help de-escalate things," he wrote, acknowledging both Anthropic's legitimate concerns about AI safety and the government's position on national security oversight.
OpenAI's Core Principles and Technical Safeguards
The OpenAI CEO reiterated the company's longstanding position that artificial intelligence should never be used for mass surveillance or autonomous lethal weapons, and that humans must remain involved in high-stakes automated decisions. "These are our main red lines," Altman stated in the memo.
He further explained that the fundamental disagreement with Anthropic centered on governance and control rather than on how AI systems would ultimately be deployed. "We believe this dispute isn't about how AI will be used, but about control," Altman wrote. "We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence. Democracy is messy, but we are committed to it."
Implementation Strategy and Industry Implications
According to the internal document, OpenAI believes these ethical boundaries can be maintained through comprehensive technical safeguards. These include restricting AI models to cloud-based environments rather than edge deployments, which could substantially reduce risks associated with potential autonomous weapons applications.
The company also plans to help researchers obtain security clearances so they can advise government agencies on system limitations and potential risks. "We would also build technical safeguards and deploy personnel to partner with the government to ensure things are working correctly," Altman wrote, adding that OpenAI would offer similar services to allied nations and hoped this approach might serve as a viable path for other AI laboratories.
Employee Reactions and Industry Context
Before Altman circulated his memo, some OpenAI employees had publicly expressed solidarity with Anthropic on social media platforms. Approximately 70 current staff members signed an open letter titled "We Will Not Be Divided," which sought to build "shared understanding and solidarity in the face of this pressure" from the Pentagon.
In a CNBC interview conducted before announcing the Pentagon agreement, Altman acknowledged his differences with Anthropic while expressing some trust in their intentions. "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety," he stated.
Background and Previous Government Contracts
This development follows a $200 million contract OpenAI received from the Department of War last year, which allowed the agency to use OpenAI's models for non-classified purposes. Anthropic, meanwhile, had previously become the first AI laboratory to integrate its systems into mission workflows on classified government networks, establishing its position in this sensitive sector before the current dispute emerged.
The contrasting approaches of these two leading AI companies highlight the ethical, technical, and political considerations surrounding artificial intelligence deployment in national security contexts, with significant implications for the technology industry's broader relationship with government agencies.
