Prominent technology investor David Sacks has issued a stark warning that the United States risks losing the global artificial intelligence competition to China, primarily due to excessive negativity and overregulation surrounding the technology. During a high-profile conversation with Salesforce CEO Marc Benioff at the World Economic Forum in Davos, Sacks, who was appointed by US President Donald Trump as his AI and crypto czar, cautioned against what he termed the "AI doomer mindset."
The AI Doomer Mindset and Its Consequences
The doomer mindset, a pervasive belief that unconstrained AI development will inevitably harm humanity or lead to societal collapse, represents what Sacks described as a "self-inflicted injury" for America. He expressed deep concern that a collective "fit of pessimism" could result in overly restrictive policies that stifle innovation. As a prime example, he pointed to Senator Bernie Sanders' recent call for a moratorium on data centre construction, which Sacks views as counterproductive to maintaining technological leadership.
"If we have 1,200 different AI laws in the states, you know, clamping down on innovation, I worry that we could lose the AI race," Sacks told Benioff during their Davos discussion. He highlighted that "we generally see that in Western countries, the AI optimism is a lot lower," particularly in the United States, compared to other regions worldwide. This observation is supported by data from the 2025 Edelman Trust Barometer, which reveals significant disparities in public sentiment toward artificial intelligence.
Trump Administration's Deregulatory Stance
The debate underscores growing tensions between Silicon Valley leaders who advocate for rapid AI advancement and policymakers calling for stronger safety measures. President Trump has adopted a distinctly deregulatory approach that his administration's technology advisor argues is essential for maintaining American competitiveness against Chinese rivals. Since taking office, Trump has championed a hands-off philosophy toward AI development.
In an AI Action Plan released last summer, the administration eliminated numerous rules governing AI research, marking a significant departure from Biden-era policies that mandated federal oversight of AI governance. Trump pushed this agenda further in December with an executive order that sought to rein in state-level AI regulations. The order explicitly stated that global AI dominance would require American companies to be "free to innovate without cumbersome regulation."
State-Level Regulatory Battles
Sacks has repeatedly echoed the administration's opposition to state-level regulation, including during his Davos appearances. He has also taken aim at other state policies: in a recent interview with CNBC, he criticized California's proposed billionaire wealth tax, a one-time 5% levy on the total wealth of residents worth more than $1 billion that is set to appear on the ballot in November.
"It's not a one-time, it's a first time. And if they get away with it, there'll be a second time and a third time. And this will be the beginning of something new and different in this country," warned Sacks, who recently relocated from California to Texas. He is among several wealthy California residents who have criticized the proposal and decided to leave the state, including Google founders Larry Page and Sergey Brin. Speaking to CNBC, Sacks characterized the plan as a potentially "scary direction" of state overreach.
Diverging Perspectives on AI Governance
While some Silicon Valley leaders have left California and some AI companies have welcomed the Trump administration's lighter regulatory touch, the hands-off approach to AI development has faced substantial criticism as research advances at an unprecedented pace. Concerns about automation-driven job losses, a potential financial market collapse, and the proliferation of unsafe AI models have tempered some of the stock market's initial AI enthusiasm.
Even within the AI industry itself, leaders have expressed reservations. In November, Anthropic CEO Dario Amodei said on 60 Minutes that he felt "deeply uncomfortable" with how AI companies were being tasked with self-governance, expressing a preference for "responsible and thoughtful regulation of the technology."
The China Factor in AI Competition
Proponents of reduced regulation often justify their position as necessary to keep pace with AI competitors in China. Chinese AI research is rapidly closing the gap with the United States, with some models—particularly those developed by Hangzhou-based startup DeepSeek—matching or even exceeding Western models' performance in specific reasoning tasks.
During his conversation with Benioff, Sacks cited recent research from Stanford University's Institute for Human-Centered Artificial Intelligence, published in 2025, which examined global AI optimism rates. The study found remarkably high optimism in China, where 83% of survey respondents viewed AI as more beneficial than harmful. By comparison, only 39% of Americans shared this optimistic outlook.
Bipartisan Concerns About AI Risks
While figures like Trump and Sacks advocate a largely unrestrained approach to AI, pessimism about artificial intelligence is not strictly a partisan issue in the United States. In December, Florida Governor Ron DeSantis, a former Republican presidential hopeful, also called for more limits on data centre construction. Last week, a bipartisan House committee heard testimony on the impact of AI in K–12 education.
Although some Republican committee members cautioned against hindering innovation through additional regulation, there was broad consensus about the potential risks of exposing children to artificial intelligence. The hearing underscored that concerns about AI's societal impact transcend traditional political divisions, creating complex challenges for policymakers trying to balance innovation with precaution.
The ongoing debate highlights fundamental questions about how nations can foster technological advancement while addressing legitimate safety concerns—a balancing act that will likely determine which country emerges as the global leader in artificial intelligence.