BravenNow
Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports
| USA | technology | ✓ Verified - techcrunch.com


#Anthropic #ChineseAI #Claude #Distillation #AIChipExports #DeepSeek #MoonshotAI #MiniMax

📌 Key Takeaways

  • Anthropic accused three Chinese AI labs of using 24,000+ fake accounts to extract Claude's capabilities
  • The companies used 'distillation' techniques to improve their own models through millions of exchanges
  • The accusations come amid debates over U.S. AI chip export controls to China
  • Anthropic warns that such attacks could create national security risks by removing safety safeguards

📖 Full Retelling

Anthropic accused three Chinese AI companies, DeepSeek, Moonshot AI, and MiniMax, of using over 24,000 fake accounts to mine its Claude AI model's capabilities, just as U.S. officials debate stricter export controls on advanced AI chips aimed at slowing China's artificial intelligence progress. The American AI firm alleges that the Chinese labs generated more than 16 million exchanges with Claude through these accounts using a technique called "distillation," extracting Claude's most differentiated capabilities: agentic reasoning, tool use, and coding.

Each company operated at a different scale: DeepSeek accounted for over 150,000 exchanges targeting foundational logic and alignment; Moonshot AI conducted 3.4 million exchanges focused on agentic reasoning and coding; and MiniMax performed 13 million exchanges concentrating on agentic coding and tool use.

The accusations follow similar concerns raised by OpenAI, which sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products. Anthropic has called for a coordinated response across the AI industry, cloud providers, and policymakers to address what it describes as both a competitive threat and a national security concern, noting that models built through illicit distillation may lack safeguards against dangerous applications such as bioweapons development or cyberattacks.
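The "distillation" at the center of the accusations refers to knowledge distillation: training a smaller "student" model to reproduce the output distribution of a larger "teacher." A minimal NumPy sketch of the standard teacher-student loss is below; the logits, the `temperature` value, and the variable names are illustrative assumptions, not any lab's actual training code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature: higher T yields softer distributions."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The T^2 factor keeps gradient magnitudes comparable across
    temperatures, following the common convention.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (temperature ** 2) * kl.mean()

# Hypothetical logits for a 4-class toy problem.
teacher = np.array([[4.0, 1.0, 0.5, 0.1]])
aligned_student = np.array([[3.9, 1.1, 0.4, 0.2]])
random_student = np.array([[0.1, 0.5, 1.0, 4.0]])

# A student whose outputs track the teacher incurs a far smaller loss,
# so minimizing this loss pulls the student toward the teacher's behavior.
print(distillation_loss(teacher, aligned_student) <
      distillation_loss(teacher, random_student))  # True
```

Labs normally apply this to their own models to produce smaller, cheaper versions; the accusation here is that the same mechanism was driven by mass-harvested API responses from a competitor's model.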

🏷️ Themes

AI Competition, Technology Security, International Trade, National Security

📚 Related People & Topics

Distillation

Machine learning model compression technique

Knowledge distillation, in machine learning, is the process of transferring knowledge from a large "teacher" model to a smaller "student" model by training the student to reproduce the teacher's outputs; competitors can also apply it to a rival model's responses to replicate its capabilities.

View Profile → Wikipedia ↗
Anthropic

American artificial intelligence research company

Anthropic PBC is an American artificial intelligence (AI) safety and research company headquartered in San Francisco, California. Established as a public-benefit corporation, the organization focuses on the development of frontier artificial intelligence systems with a primary e...

View Profile → Wikipedia ↗

Claude

Topics referred to by the same term

Claude most commonly refers to: Claude (language model), a family of large language models developed by Anthropic; Claude Lorrain (c...

View Profile → Wikipedia ↗


Original Source
Anthropic is accusing three Chinese AI companies of setting up more than 24,000 fake accounts with its Claude AI model to improve their own models. The labs, DeepSeek, Moonshot AI, and MiniMax, allegedly generated more than 16 million exchanges with Claude through those accounts using a technique called "distillation." Anthropic said the labs "targeted Claude's most differentiated capabilities: agentic reasoning, tool use, and coding." The accusations come amid debates over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China's AI development.

Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy the homework of other labs. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products. DeepSeek first made waves a year ago when it released its open-source R1 reasoning model, which nearly matched American frontier labs in performance at a fraction of the cost. DeepSeek is expected to soon release DeepSeek V4, its latest model, which reportedly can outperform Anthropic's Claude and OpenAI's ChatGPT in coding.

The scale of each attack differed. Anthropic tracked more than 150,000 exchanges from DeepSeek that seemed aimed at improving foundational logic and alignment, specifically around censorship-safe alternatives to policy-sensitive queries. Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, the firm released a new open-source model, Kimi K2.5, and a coding agent.
Read full article at source

Source

techcrunch.com
