
Source: Fortune
Summary
Anthropic, a San Francisco-based AI company, has accused three Chinese AI firms of using its Claude chatbot to secretly train rival models. The alleged theft occurred at massive scale, with fake accounts and proxy services used to bypass geofencing and commercial restrictions. Anthropic claims the Chinese firms used a technique called “distillation” to extract Claude’s capabilities and improve their own models, and it has urged “rapid, coordinated action” among industry players, policymakers, and the global AI community.
Our Reading
The announcement sounds familiar. Anthropic accuses Chinese firms of using its Claude chatbot to train rival models, even as it faces its own accusations of overreach in data collection. The allegations have sparked a broader debate over who sets the rules for the AI industry: Chinese labs are racing to close the performance gap with Western rivals, and Anthropic’s claims may feed calls for new guardrails.
The allegations come shortly after Anthropic settled a $1.5 billion lawsuit with authors over copyright violations. The company has not announced lawsuits against the three Chinese firms, but it says it has cut off known access points and is urging Washington to tighten export controls.
The irony is not lost on critics, who point out that Anthropic itself has been accused of similar behavior. As one commenter noted, “How the turn tables.”
The situation highlights an uncomfortable symmetry at the heart of modern AI: companies race to build proprietary systems while defending data collection practices that draw similar criticism.
Author: Evan Null