Anthropic Accuses Chinese AI Labs Of Illicitly Mining Its Claude Model


Anthropic, a leading AI safety and research company, has accused three prominent Chinese AI firms—DeepSeek, Moonshot AI, and MiniMax—of creating over 24,000 fake accounts to illicitly extract capabilities from its advanced Claude AI model. The activity was allegedly designed to improve the performance of their own competing models.

The San Francisco-based company stated that the Chinese labs generated more than 16 million exchanges with Claude using a technique known as “distillation.” This method allows a company to essentially copy the underlying capabilities of a competitor’s model and use them to train its own, often smaller and more efficient, versions.
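In its classical form, distillation trains a “student” model to imitate a “teacher” model’s softened output distribution. The toy sketch below illustrates only that core idea; in the API-based scenario the article describes, the teacher’s outputs would be harvested as chat responses rather than raw probabilities. All function names and numbers here are illustrative, not drawn from any lab’s actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences across all outputs ("dark knowledge").
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions.
    # Minimizing this over many examples trains the student to mimic
    # the teacher's behavior without access to its weights.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that exactly matches the teacher incurs zero loss;
# any mismatch yields a positive loss to train against.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))
print(distillation_loss([0.5, 0.5, 0.5], teacher) > 0)
```

The key point for the article: all the student needs from the teacher are its outputs, which is why ordinary API access, at sufficient scale, can serve as training data for a competing model.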

The Distillation Dilemma

Distillation is a common, legitimate practice when AI labs refine their own models, but applying it to a competitor’s product is a contentious issue. OpenAI recently sent a memo to US lawmakers accusing DeepSeek of using the same technique to mimic its products, highlighting a growing trend of what some are calling AI espionage.

According to Anthropic, the attacks specifically targeted Claude’s most advanced features, including agentic reasoning, tool use, and complex coding capabilities. The scale of the alleged data mining varied, with MiniMax alone accounting for 13 million exchanges.

Fueling The US-China AI Chip Debate

These accusations emerge at a critical juncture in the global tech landscape, as the United States continues to debate the stringency of its export controls on advanced AI chips to China. The policy is intended to slow China’s progress in artificial intelligence.

Anthropic argues that the large-scale distillation attacks performed by the Chinese labs would require access to advanced AI chips, reinforcing the case for stricter export controls.

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, commented on the situation, stating, “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of U.S. frontier models. Now we know this for a fact.”

Anthropic also warned that such actions pose significant national security risks. Models trained through illicit distillation are unlikely to retain the safety guardrails built into the original systems, potentially allowing dangerous AI capabilities to proliferate without protections against misuse for activities like developing bioweapons or executing cyberattacks.

Implications For The MENA AI Ecosystem

While this story centres on the US and China, the repercussions are global and carry significant weight for the MENA region’s burgeoning AI sector. For MENA startups and enterprises building on or integrating with large language models, this incident serves as a crucial reminder of the geopolitical tensions underpinning AI development.

The potential for stricter controls on AI technology and chips could impact access to cutting-edge models for developers across the Middle East. Furthermore, it underscores the importance of data security and intellectual property protection, prompting regional companies to scrutinize the origins and training methods of the AI models they employ. This development may also accelerate the push for sovereign AI capabilities and homegrown models within the MENA region to reduce dependency on foreign technology and mitigate supply chain risks.

About Anthropic

Anthropic is an AI safety and research company dedicated to building reliable, interpretable, and steerable AI systems. Founded by former members of OpenAI, the company is focused on large-scale AI models and conducting research on the safety and societal impacts of artificial intelligence. Its flagship product is Claude, a family of large language models.

Source: TechCrunch
