Sam Altman's Alleged China AGI Claims Were Unfounded Sales Pitch: Investigation
A major New Yorker investigation published this week has uncovered startling revelations about Sam Altman's efforts to secure government funding for OpenAI. According to the report, Altman repeatedly told US intelligence officials in 2017 that China had launched an "AGI Manhattan Project" and that OpenAI needed billions of dollars to keep pace with this perceived threat.
The Unsubstantiated China Claim That Opened Wallets
During a summer 2017 meeting with a room full of US intelligence officials, Sam Altman made a dramatic claim that would typically set off alarms in Washington's national security circles. He asserted that China had initiated an "AGI Manhattan Project" - a reference to the secret World War II nuclear weapons program - and argued that OpenAI required substantial government funding to compete.
When pressed for evidence supporting this alarming claim, Altman reportedly responded, "I've heard things." He promised to follow up with proof, but that evidence never materialized. The official who investigated the claim told The New Yorker there was no evidence that such a Chinese project actually existed. In his assessment, the China threat narrative was "just being used as a sales pitch" to unlock government funding.
The Shifting Manhattan Project Analogy
This was not an isolated incident. The investigation, based on interviews with more than 100 people and a collection of internal documents, reveals that Altman made similar claims about China's AGI ambitions across multiple meetings with intelligence officials. He consistently employed the same nuclear-age analogy he had been using since co-founding OpenAI in 2015 - that building Artificial General Intelligence represented the new Manhattan Project, with civilization-level stakes.
The New Yorker's reporting indicates that Altman adapted this analogy depending on his audience. With national security officials, it served as a scare tactic to secure funding. With safety-conscious researchers, it became a call for caution and international coordination. While the framing shifted according to the audience, the underlying request for financial support remained constant.
From Washington Briefings to International Funding Pursuits
The pattern extended well beyond Washington intelligence briefings. According to the investigation, Altman pursued funding from the Saudi sovereign wealth fund shortly after the murder of journalist Jamal Khashoggi. He reportedly asked advisers whether he could "get away with" accepting Saudi money despite the negative optics. Eventually, he partnered with Sheikh Tahnoon of the United Arab Emirates - described by The New Yorker as the nation's spymaster - to develop a massive data center campus in Abu Dhabi.
This Abu Dhabi project is reportedly seven times the size of New York's Central Park, representing a significant international expansion of OpenAI's infrastructure. The investigation also details the theories Altman's former colleagues have developed about his approach to communication and fundraising.
Internal Concerns from Former Colleagues
Dario Amodei, who left OpenAI to found competing AI company Anthropic, documented his observations across 200 pages of private notes. Ilya Sutskever, OpenAI's former chief scientist, reached similar conclusions in secret memos compiled before Altman's brief firing in 2023. Both men arrived at nearly identical assessments: Altman tells people what they want to hear, and the consequences emerge later.
The investigation paints a complex picture of Altman's fundraising strategies, suggesting a pattern of leveraging geopolitical anxieties to secure financial support while maintaining flexible narratives about AI development risks and opportunities. These revelations come at a time when global competition in artificial intelligence development has intensified, with nations and corporations investing billions in what many consider the defining technological race of the 21st century.