
US AI Giants Unite to Combat Chinese Model Theft via Distillation

In a significant move, three of America's most competitive artificial intelligence companies—OpenAI, Anthropic, and Google—have begun quietly coordinating to share intelligence. Their focus is not on improving their own models but on identifying and blocking attempts by Chinese AI firms to steal their proprietary technologies through a method known as distillation. This collaboration, reported by Bloomberg, operates through the Frontier Model Forum, an industry nonprofit established in 2023 by these companies alongside Microsoft.

An Unusual Alliance Amidst Fierce Rivalry

This arrangement is highly unusual, as these firms typically compete aggressively for the same customers, top engineering talent, and government funding. However, the threat posed by unauthorized model extraction has grown so substantial that they have set aside their rivalries. The effort aims to detect and prevent Chinese entities from siphoning AI capabilities, which could undermine billions in annual profits and compromise national security.

Understanding Distillation and Its Misuse

Distillation is a standard practice in AI development, used to create smaller, more cost-effective versions of existing models. However, it becomes problematic when competitors employ it at an industrial scale to clone models without investing in the underlying research and development. Chinese labs have allegedly used fake accounts and proxy networks to execute these extraction campaigns, blending malicious traffic with ordinary requests to evade detection.

Recent disclosures give a sense of the scale of these campaigns:
  • Anthropic reported that three Chinese labs—DeepSeek, Moonshot, and MiniMax—conducted coordinated attacks against its Claude model, generating over 16 million exchanges through approximately 24,000 fraudulent accounts.
  • MiniMax alone was responsible for more than 13 million of these exchanges, routing traffic through proxy services to bypass bans on commercial access from China.
  • OpenAI informed the US House Select Committee on China that DeepSeek continued similar efforts against American labs using increasingly obfuscated methods.
  • Google's threat intelligence team noted a surge in model extraction attacks on its Gemini model, with one campaign producing over 100,000 prompts designed to replicate its reasoning capabilities.
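To make the underlying technique concrete: in its legitimate form, distillation trains a small "student" model to match the softened output distribution of a larger "teacher" model. The sketch below is an illustrative, self-contained version of the standard distillation objective (a KL divergence between temperature-softened distributions); it is not taken from any lab's actual pipeline, and the function names are placeholders.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's temperature-softened output
    distribution (the 'soft labels') and the student's distribution.
    A higher temperature exposes more of the teacher's relative rankings."""
    p = softmax(teacher_logits, temperature)  # teacher soft labels
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Industrial-scale extraction abuses the same idea: instead of having white-box access to a teacher's logits, attackers query a commercial API millions of times and train on the sampled outputs, which is why the defenders' telemetry focuses on query volume and prompt patterns rather than on the training math itself.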

Broader Implications Beyond Business Losses

The issue extends beyond mere financial damage. Models created through unauthorized distillation often lack the safety guardrails that US labs meticulously implement, such as restrictions on generating instructions for bioweapons or large-scale cyberattacks. If these stripped-down models are integrated into military or intelligence systems, the risks escalate rapidly. US officials, speaking anonymously to Bloomberg, estimate that this practice costs Silicon Valley labs billions in annual profits.

Washington has taken notice, with the Trump administration's AI Action Plan proposing a formal information-sharing center to address these threats. This indicates a growing willingness to structure what is currently an informal arrangement among rivals, highlighting the urgency of safeguarding AI advancements.

The Challenge of Controlling AI's Future

While the information-sharing initiative is a positive step, it underscores a deeper structural problem. Proxy networks can generate thousands of fake accounts faster than any single company can shut them down, putting defenders at a disadvantage. OpenAI advocates for an ecosystem security approach, which involves hardening not just individual labs but also API routers, cloud providers, and payment infrastructure to eliminate weak links.
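One way defenders can spot coordinated extraction is by looking for the signature the article describes: many fresh accounts issuing near-identical, high-volume prompt streams. The following heuristic is purely illustrative, under the assumption that a provider logs (account, prompt) pairs; real abuse-detection systems combine far richer signals (IP reputation, payment data, timing), and these thresholds and names are hypothetical.

```python
def flag_extraction_suspects(request_log, volume_threshold=10_000,
                             overlap_threshold=0.8):
    """Flag accounts whose prompt volume is anomalously high, plus pairs of
    accounts sending near-identical prompt sets (a coordinated-farm pattern).
    request_log is a list of (account_id, prompt) tuples."""
    by_account = {}
    for account, prompt in request_log:
        by_account.setdefault(account, set()).add(prompt)

    # Signal 1: raw volume per account
    suspects = {a for a, prompts in by_account.items()
                if len(prompts) >= volume_threshold}

    # Signal 2: prompt-set overlap between account pairs
    accounts = list(by_account)
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            pa, pb = by_account[a], by_account[b]
            overlap = len(pa & pb) / max(1, min(len(pa), len(pb)))
            if overlap >= overlap_threshold:
                suspects.update({a, b})
    return suspects
```

The pairwise comparison above is quadratic in the number of accounts, which hints at the structural problem the article raises: proxy farms can mint fake accounts faster than any single provider can correlate them, which is the argument for sharing indicators across labs rather than detecting in isolation.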

Moreover, the sophistication of these attacks is increasing. Chinese actors have evolved from simple output scraping to multi-stage pipelines that combine synthetic data generation with reinforcement learning, essentially automating the creation of rival models. For instance, Anthropic observed MiniMax pivoting within 24 hours of a new Claude release to capture capabilities from the latest version. This operational speed necessitates a level of coordination that currently does not exist on a large scale, making the fight to control the next decade of AI increasingly complex.
