In a world rapidly being reshaped by artificial intelligence, the two nations at the forefront of this technological revolution—the United States and China—are locked in a fierce competition for supremacy. Yet, amidst this race, a former top American security official is issuing a stark warning: the two superpowers must urgently engage in serious, sustained diplomacy to manage the profound risks posed by AI, even as they vie for the lead.
The Foundation: A Hard-Won Diplomatic Start
The call for dialogue comes from Jake Sullivan, who served as US National Security Adviser under President Joe Biden from 2021 to 2025. Sullivan points to a critical milestone: in November 2024, Biden and Chinese President Xi Jinping issued their first substantive joint statement specifically addressing AI-related national security threats. The core of that agreement was a shared belief in “the need to maintain human control over the decision to use nuclear weapons.”
While this principle might seem obvious, Sullivan emphasizes that achieving it was far from simple. It required more than a year of negotiations. China's deep skepticism toward US proposals on risk reduction, coupled with Russia's opposition to similar language in other forums, made progress uncertain. The final statement, however, proved that constructive risk management is possible even amid intense rivalry.
This built on an earlier meeting in Geneva in 2024, where diplomats and experts from both nations held extended talks dedicated to AI risks. Though no major agreements emerged, the meeting itself was a significant step, allowing both sides to identify critical risk areas needing further work.
The Mounting Risks That Demand Cooperation
Why is this diplomacy so urgent? Sullivan outlines a spectrum of escalating dangers that extend far beyond traditional state-to-state competition. As AI capabilities become more advanced and accessible, the threats multiply:
- Non-State Actors: Terrorist organizations could harness AI for devastating cyberattacks on critical infrastructure, create novel bioweapons, launch destabilizing disinformation campaigns, or deploy AI-powered lethal drones.
- Military Escalation: As the US and Chinese militaries integrate AI to speed up decision-making, the risk of AI systems inadvertently triggering a conflict or causing catastrophic escalation grows.
- Financial System Vulnerability: AI-driven trading, now central to global financial markets, could trigger a market crash without proper safeguards.
- Existential Threats: Looking ahead, a powerful, misaligned AI system pursuing unintended goals could pose a grave threat to humanity itself.
“As the world’s only AI superpowers, the US and China need to engage one another directly to address these and other dangers,” Sullivan asserts.
Managed Competition: The Path Forward
Sullivan is clear that engagement does not mean an end to competition. He cites China's sweeping export controls on rare earth minerals in late 2025—vital inputs for AI chip production—as evidence of how sharp the rivalry has become. As National Security Adviser, Sullivan focused on ensuring US leadership so that the technology works for, not against, American interests.
Yet, he argues, it is precisely because of this intensity that diplomacy is essential. It would be “deeply irresponsible” to race ahead without discussing risks or the opportunities AI presents for global challenges like climate change and public health.
While informal “Track 2” dialogues involving academics and business leaders are valuable, Sullivan stresses there is no substitute for direct government-to-government engagement. The breathtaking speed of AI advancement means this cannot wait.
Why AI is Different from Nuclear Arms Control
Many compare the challenge to Cold War-era nuclear arms control, but Sullivan highlights key differences that demand more innovative approaches:
- Verification is Extremely Difficult: Counting missiles is one thing; counting and understanding the capabilities of algorithms is entirely another.
- The Dual-Use Dilemma is Acute: The same AI model that accelerates scientific discovery can also be weaponized, blurring the line between civilian and military use.
- Threats are Broader: Risks come not just from states, but from non-state actors and the fundamental problem of AI misalignment.
- Private Sector Drives Development: In the US, AI advancement is led by multiple competing companies, requiring a wider range of actors in risk-mitigation talks.
- Uncertain Trajectory: Experts disagree wildly on how fast AI will evolve, unlike the more predictable physics of nuclear detonations.
Sullivan concludes that managing AI risks is uncharted territory where progress will be neither swift nor easy. “The US and China need to get started,” he urges. The nuclear arms control framework took decades to build. With AI, the world is at the opening stages of a similarly ambitious but far more complex endeavor, making immediate risk-reduction efforts all the more critical.