Pentagon Accessed OpenAI AI Models Before Official Deal, Report Reveals

A recent report has revealed that the United States Department of Defense, previously known as the Department of War, gained access to OpenAI's artificial intelligence models well before signing an official deal with the company. According to Wired, the access came through Microsoft's Azure OpenAI service in 2023, when OpenAI's usage policies still explicitly prohibited military applications of its technology.

Early Access Through Microsoft's Platform

Sources indicate that some OpenAI employees discovered the Pentagon was experimenting with Azure OpenAI, a version of OpenAI's models available on Microsoft's cloud platform, prior to the formal agreement signed last week. Microsoft, as OpenAI's largest investor and a long-time contractor with the Pentagon, holds broad rights to commercialize the startup's technology, which facilitated this early access.

The development has sparked internal confusion and criticism within OpenAI, particularly as CEO Sam Altman faces employee scrutiny over the recent military deal. OpenAI pursued the agreement after a roughly $200 million Pentagon contract with Anthropic collapsed, and staff have since demanded more details. Altman later described the situation as "sloppy" in a social media post.

Policy Conflicts and Corporate Statements

In response to inquiries, Microsoft spokesperson Frank Shaw stated that Azure OpenAI services became available to the US government in 2023 under Microsoft's own terms of service, not OpenAI's usage policies. Microsoft declined to specify when the Department of Defense first accessed the service, noting that it was not approved for "top secret" government workloads until 2025.

OpenAI spokesperson Liz Bourgeois emphasized the company's belief in participating in national security discussions to ensure AI is deployed safely and responsibly. She added that OpenAI has been transparent with employees, providing regular updates and channels for questions.

Shifting Stances and Internal Divisions

OpenAI's approach to military collaboration has evolved over time. In January 2024, employees learned from a news article, rather than from internal communication, that the company had removed its general prohibition on military use. By December 2024, OpenAI had announced a partnership with Anduril on unclassified national security work, while declining to join Palantir's "FedStart" program over risk concerns, though it now works with Palantir in other capacities.

The latest Pentagon deal has created internal divisions at OpenAI. Some employees question the reliability of models for battlefield use, while others view the Anduril partnership as a responsible approach. A current researcher noted that OpenAI's strategy involves "measure twice, cut once" for classified deployments, with ongoing employee engagement on national security alignment.

External Concerns and Ethical Implications

Outside observers have raised alarms about the agreement's scope. Charlie Bullock of the Institute for Law and AI warned that the deal's original terms could permit legally questionable surveillance uses, such as analyzing Americans' data, prompting OpenAI to amend them. Researcher Noam Brown acknowledged that the original language left "legitimate questions unanswered" regarding AI-enabled surveillance.

Former OpenAI geopolitics head Sarah Shoker warned that civilians in conflict zones are the biggest losers, as opacity in military AI design and policy hinders understanding of its effects in war, describing it as "black boxes all the way down."

At a recent all-hands meeting, Altman told employees that OpenAI does not control how the Defense Department uses its AI and expressed interest in selling models to NATO, underscoring the complex ethical and commercial landscape of AI in national security.