Anthropic Accuses Chinese AI Labs of Illicit Claude Model Use
In a significant development within the artificial intelligence sector, leading AI firm Anthropic has formally accused three of China's largest AI laboratories of illicitly utilizing the outputs from its advanced Claude model to train their own competing systems. The companies named in the allegation are DeepSeek, MiniMax, and Moonshot AI.
Industrial-Scale Distillation Campaigns Alleged
According to a detailed report by Business Insider, Anthropic issued a statement claiming these Chinese AI entities orchestrated what it describes as "industrial-scale" distillation campaigns. The company asserts that approximately 24,000 fraudulent Claude accounts were created specifically to generate more than 16 million exchanges with its model. This activity reportedly violated both Anthropic's terms of service and specific regional restrictions that govern access to its technology.
For those unfamiliar with the technical process, distillation in AI development refers to a method where a smaller, more efficient model is trained using the outputs generated by a larger, more powerful model. While this practice is legitimate and widely adopted within the industry, Anthropic contends that its Chinese rivals are exploiting the technique to effectively steal sophisticated capabilities. The company argues this allows them to achieve advanced functionality at a fraction of the time and financial cost that would be required for independent, from-scratch development.
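The core mechanism can be illustrated with a short sketch. In standard knowledge distillation, the student model is trained to match the teacher's temperature-softened output distribution, typically by minimizing a KL-divergence loss. The snippet below is a minimal, self-contained illustration of that loss using NumPy; the function names, the temperature value, and the toy logits are illustrative assumptions, not details from any lab's actual pipeline:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution
    (the 'soft targets') and the student's. Minimizing this trains the
    smaller student to imitate the larger teacher."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy logits: a student that matches the teacher incurs zero loss;
# a diverging student incurs a positive loss.
teacher = np.array([[4.0, 1.0, 0.5]])
aligned_loss = distillation_loss(teacher, np.array([[4.0, 1.0, 0.5]]))
diverged_loss = distillation_loss(teacher, np.array([[0.5, 4.0, 1.0]]))
```

In API-based distillation of the kind alleged here, the "teacher outputs" are not logits but generated text harvested through the model's public interface, which is why the campaigns required so many accounts and exchanges.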
Growing Concerns Echo Across U.S. Tech Industry
Anthropic's warning is not an isolated incident but rather part of a growing pattern of concern among American AI leaders. Earlier in January 2025, OpenAI—the creator of ChatGPT—raised similar allegations, suggesting that DeepSeek may have improperly used its model outputs. Furthermore, earlier this month, Google reported a noticeable increase in what it terms distillation attacks targeting its own AI systems.
Anthropic emphasized that these campaigns are becoming increasingly sophisticated and intense over time. The company is now urging rapid, coordinated action among various stakeholders, including industry players, policymakers, and the broader global AI community, to address this escalating challenge.
Security Risks and Calls for Export Controls
Beyond the immediate competitive implications, Anthropic has highlighted serious security risks associated with improperly distilled models. The company warned that models trained through such illicit means may lack the crucial safeguards and ethical constraints built into original systems like Claude. This deficiency could potentially enable dangerous misuse scenarios, including, as a stark example, assistance in the development of bioweapons.
Anthropic's CEO, Dario Amodei, has been particularly vocal on this front. He has consistently advocated for stricter U.S. export controls on advanced semiconductor chips. Amodei argues that restricting access to these critical components could serve a dual purpose: limiting both direct, authorized model training by competitors and, importantly, constraining the scale at which illicit distillation operations can be conducted.
Countermeasures and the Evolving Tactics of Rivals
In response to these threats, Anthropic has implemented several defensive strategies. The company has deployed behavioral fingerprinting systems designed to detect anomalous usage patterns indicative of distillation campaigns. Furthermore, Anthropic actively shares intelligence with other AI companies to foster a collaborative defense network and continues to develop new technological countermeasures.
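Anthropic has not published how its fingerprinting works, but the general idea behind usage-pattern detection can be sketched. A toy heuristic might flag accounts whose traffic combines unusually high volume with near-zero prompt repetition, a pattern more consistent with systematic output harvesting than with ordinary use. Everything below, including the thresholds and the function name, is a hypothetical illustration of the concept, not Anthropic's actual system:

```python
def flag_suspicious_accounts(request_log, volume_threshold=1000, diversity_threshold=0.9):
    """Toy anomaly heuristic for distillation-style harvesting.

    request_log: list of (account_id, prompt) tuples.
    Flags accounts that are both high-volume and highly diverse
    (almost every prompt unique), since normal users tend to be
    lower-volume or more repetitive.
    """
    by_account = {}
    for account, prompt in request_log:
        by_account.setdefault(account, []).append(prompt)

    flagged = []
    for account, prompts in by_account.items():
        volume = len(prompts)
        diversity = len(set(prompts)) / volume  # fraction of unique prompts
        if volume >= volume_threshold and diversity >= diversity_threshold:
            flagged.append(account)
    return flagged

# Example: a repetitive everyday user vs. a systematic harvester.
log = [("normal_user", "summarize my notes")] * 50
log += [("harvester", f"question #{i}") for i in range(1000)]
suspicious = flag_suspicious_accounts(log)
```

Real detection systems would of course combine many more signals (timing, prompt structure, account creation patterns) and adapt as adversaries change tactics, as the MiniMax episode described below suggests they do.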
The sophistication of the challenge was underscored by a specific disclosure from Anthropic: during one campaign attributed to MiniMax, the rival lab pivoted its operations within 24 hours of a new Claude model release, redirecting nearly half of its illicit traffic to begin capturing the updated model's capabilities.
As of the publication of the Business Insider report, representatives for DeepSeek, MiniMax, and Moonshot AI had not responded to requests for comment on these serious allegations. The situation underscores the intensifying technological and geopolitical tensions within the global race for artificial intelligence supremacy.
