OpenAI Revises Pentagon Contract to Explicitly Ban Domestic AI Surveillance
OpenAI has amended its recently signed agreement with the US Department of Defense following intense public backlash over potential AI-powered surveillance risks. The revised contract now includes explicit language prohibiting the use of OpenAI's tools for domestic surveillance of US persons and nationals.
CEO Sam Altman Takes Personal Stand on Constitutional Principles
OpenAI CEO Sam Altman confirmed the contractual changes on Monday, revealing that the company had been collaborating with Pentagon officials to incorporate more precise language into the agreement. In one of his most personal statements to date, Altman declared his willingness to face legal consequences rather than comply with unconstitutional orders.
"If I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it," Altman wrote in an internal communication that he subsequently made public. This strong ethical stance comes amid growing concerns about the potential misuse of artificial intelligence technologies for surveillance purposes.
Specific Prohibitions and Intelligence Agency Restrictions
The updated contract now explicitly states that OpenAI's tools "shall not be intentionally used for domestic surveillance of US persons and nationals"—including through the acquisition of commercially available personal data such as location history or browsing records. Additionally, the Pentagon has confirmed that OpenAI's services will not be utilized by intelligence agencies like the National Security Agency (NSA) under the current agreement.
Any future use by these agencies would require a separate contract modification, establishing an additional layer of oversight and accountability for potential intelligence applications of OpenAI's technology.
Criticism of Original Agreement's Vague Language
The original agreement, announced just last Friday, had already attracted significant scrutiny from technology policy experts and civil liberties advocates. According to detailed reporting by The Verge, the initial contract did not actually prohibit mass surveillance—it merely required OpenAI to comply with existing laws, many of which have historically been interpreted broadly to authorize extensive domestic spying programs.
Critics pointed out that the NSA's controversial PRISM program and other bulk data collection initiatives had all operated under the same legal framework that OpenAI was citing as a protective measure. OpenAI's former head of policy research, Miles Brundage, expressed skepticism about the company's position, stating bluntly on social media platform X that employees should assume OpenAI had compromised its principles while presenting the agreement as a victory.
Competitive Dynamics with Anthropic and Contractual Differences
The Pentagon deal emerged just hours after the Department of Defense designated Anthropic—OpenAI's primary competitor in the AI sector—as a "supply chain risk to national security." This designation has historically been reserved for foreign adversaries rather than domestic technology companies.
Anthropic had reportedly refused to remove two crucial restrictions from its proposed contract: a prohibition against mass domestic surveillance and a ban on fully autonomous weapons systems capable of killing without human oversight. In contrast, OpenAI agreed to the Pentagon's core requirement of "all lawful use," a concession that Anthropic would not make.
The New York Times reported that Altman and Department of Defense Chief Technology Officer Emil Michael had been engaged in negotiations since Wednesday, reaching a framework agreement within days. This rapid progress was reportedly facilitated by the strong personal relationship between the two executives, which contrasted sharply with Michael's relationship with Anthropic CEO Dario Amodei.
Altman Advocates for Industry-Wide Standards and Democratic Oversight
Despite the competitive advantage of securing the Pentagon contract after Anthropic's negotiations collapsed, Altman has been vocal about his desire to prevent a permanent fracture within the AI industry. In his internal communication, he revealed that he had advised the Department of Defense against designating Anthropic as a supply chain risk and requested that the amended contract terms be made available to all AI companies.
"We do not want the ability to opine on a specific (and legal) military action," Altman wrote separately. "But we do really want the ability to use our expertise to design a safe system."
The OpenAI CEO also acknowledged procedural missteps in the initial announcement, describing the rushed Friday release as appearing "opportunistic and sloppy"—a rare moment of self-criticism from an executive who has navigated some of the most politically sensitive technology agreements in recent Silicon Valley history.
Broader Implications for AI Governance and Military Applications
This contract revision represents a critical moment in the ongoing debate about artificial intelligence governance, particularly regarding military and surveillance applications. The explicit prohibition against domestic surveillance establishes an important precedent for how AI companies can engage with government agencies while maintaining ethical boundaries.
The contrasting approaches of OpenAI and Anthropic highlight divergent strategies emerging within the AI industry over government partnerships and ethical constraints. As AI capabilities continue to advance, these negotiations and public debates will likely shape the regulatory landscape for years to come, determining how emerging technologies are integrated into national security frameworks while civil liberties and constitutional principles are protected.