AI Hustle: Make Money from AI and ChatGPT, Midjourney, NVIDIA, Anthropic, OpenAI
Episode: Anthropic Accuses Chinese AI Labs of Claude Mining
Hosts: Jaeden Schafer & Jamie McCauley
Date: March 3, 2026
Overview
This episode centers on recent accusations from Anthropic, the AI company behind Claude, that several Chinese AI labs have been "mining" Claude by creating tens of thousands of accounts to distill its responses and enhance their own AI models. The hosts break down what distillation is, the broader implications for intellectual property in AI, trends toward open-source/local AI models, and the significant “AI drama” these accusations have stirred. The episode explores the technical, business, and ethical layers of this international spat, while also tying it all back to opportunities for indie entrepreneurs and the future of the AI model landscape.
Key Discussion Points & Insights
1. The Nature of Anthropic's Accusations
- Anthropic has publicly accused three Chinese AI labs—DeepSeek, Moonshot AI, and MiniMax—of creating over 24,000 fake Claude accounts to gather data from more than 16 million interactions and train their own models.
- The main technique in question is model distillation: using an established model’s output to train a new model to mimic the original’s style, tone, and capabilities.
- There’s a sense of irony, as Anthropic itself trained on vast, copyright-infringing datasets in the past, leading some to accuse it of hypocrisy.
- Notable Quote:
"Call the kettle black all you want, Anthropic. Like you stole the data to make your model and now they’re stealing your data. I don’t know, whatever. I’m not really trying to apologize for the, the Chinese AI companies but at the, I don’t know, it’s just, it’s pretty funny." (Host 2, 04:12)
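The distillation technique the hosts describe is essentially black-box imitation: query the teacher model at scale, log the prompt/response pairs, and fine-tune a student model on them. A minimal sketch of the data-harvesting step, with a stubbed-out teacher call standing in for a real chat API (all function and file names here are hypothetical, not from the episode):

```python
import json

def query_teacher(prompt: str) -> str:
    """Placeholder for a chat-completion API call to the teacher model."""
    return f"Teacher answer to: {prompt}"

def build_distillation_dataset(prompts, path="distill.jsonl"):
    """Collect teacher responses and write them as JSONL fine-tuning records."""
    records = [{"prompt": p, "completion": query_teacher(p)} for p in prompts]
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    return records

pairs = build_distillation_dataset(["Explain recursion.", "Write a haiku."])
print(len(pairs))  # one record per prompt
```

At the scale alleged in the episode (millions of interactions), the same loop would simply run across thousands of accounts, which is why it is detectable mainly through traffic patterns rather than any single request.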
2. Legal, Ethical, and Business Implications
- Model distillation of this kind isn't clearly illegal: it likely breaches Anthropic's terms of service, but such violations are difficult to pursue across international borders.
- Publicly “shaming” competitors is currently the only recourse for companies like Anthropic, but it highlights how thin the moat is for maintaining unique AI capabilities.
- The hosts observe that such distillation and imitation are commonplace across the AI industry, citing how OpenAI, Google, and others have been accused (and perhaps have themselves utilized) similar practices.
- Notable Quote:
"Doesn't everyone kind of do this? ...There’s a whole bunch of copycats. ...I feel like they’re all just kind of building off each other and I would even think training off each other. That would be my guess. But maybe that’s just an unspoken thing." (Host 1, 06:33)
3. Deep Dive: What Each Chinese AI Lab is Mining For
DeepSeek
- Ran ~150,000 conversations with Claude.
- Focus: Improving foundational logic and alignment, especially around "censorship" and providing "safe alternatives to policy sensitive queries."
- In essence, they're learning from Anthropic's subtler, less visible censorship methods to improve the acceptability of their models outside China.
- Example from Host:
"Nobody wants to use a censored model… you can’t have it blatantly censoring negative things about the, you know, president of China. Maybe that works in China, doesn’t work in America." (Host 2, 11:40)
Moonshot AI
- Ran over 3.4 million conversations.
- Focus: Agentic reasoning, tool use, coding, data analysis, agent development, and computer vision.
- They’re especially interested in Claude’s abilities to interact with computers and perform operations on user devices.
MiniMax
- The most aggressive, with 13 million exchanges.
- Focus: Coding, tool use, orchestration, “siphoning capabilities” from the latest Claude model.
- At one point, redirected almost half its traffic to gather responses from Claude when a new version launched.
4. The Move Toward Open Source AI Models
- The hosts see these developments accelerating the trend toward smaller, open-source AI models that can be run locally.
- These models democratize access and dramatically lower costs for solo entrepreneurs and small startups.
- Real-world example: One host replaced an expensive ElevenLabs subscription for voice cloning with an open-source model (Qwen 3 TTS by Alibaba) running locally, saving over $1,300 a month.
- Notable Quote:
"Then I found that there’s a model called Qwen 3 TTS that just came out. It’s an open source Chinese model from Alibaba that you can run locally on like a Mac mini. It can clone your voice with three seconds of audio. It sounds incredible. The quality is amazing." (Host 2, 08:22)
- Implication: If all major models can be replicated this way, the dominance (and revenue) of closed-source giants like OpenAI and Anthropic is at risk.
5. Risks and Opportunities for Entrepreneurs
- Local/open AI models are a huge boon for indie builders—reducing recurring costs, offering more privacy, and avoiding big-tech censorship.
- However, large enterprise customers may still value centralized, maintained models and affiliated support.
6. A Dose of AI Industry Irony and Internet Reactions
- The hosts close with a humorous, biting post satirizing Anthropic’s public posture and perceived hypocrisy:
"Be me, name company Anthropic, literally Greek for human centered. Hire a bunch of doomers who secretly think humanity is the disease. Raise billions from big tech to build the world’s most anxious, heavily censored chatbot. Write a 50 page constitutional AI manifesto so it can lecture users about microaggressions. Realize open source developers are building better models for free. Dario starts crying to the government that AI is unmanageable power and open source is going down very dangerous path. Translation, please regulate our competitors out of existence so we can protect our $380 billion closed source monopoly." (Host 2, 16:00)
- Even Elon Musk chimed in on X with “accurate,” highlighting the PR struggle Anthropic faces.
Timestamps for Key Segments
- 01:24 — Introduction of the Anthropic vs. China AI Labs controversy
- 03:00 — Explanation of model distillation and the scale of Chinese labs' activities
- 06:33 — Discussion on industry norms for copying/distillation and whether this is truly abnormal
- 07:07 — Historical context: Similar accusations among US firms; open-source implications
- 08:00 — Importance of cheap, local AI models for entrepreneurs
- 10:56 — Options for small players vs. big corporations with AI deployment
- 11:40 — Detailed breakdown of mining tactics by DeepSeek, Moonshot AI, and MiniMax
- 16:00 — Reading and analysis of satirical industry commentary from X
- 17:07 — Wrap-up and call to listeners to harness these AI trends for their own hustles
Tone & Style Highlights
- Conversational and irreverent: “I always love to see the AI drama when this happens.” (Host 2)
- Critical yet pragmatic: “I don’t really know whose side I’m on. … It’s pretty funny.”
- Entrepreneur-focused: “You can still use these tools to make money, which is what we’re all about here at AI Hustle.” (Host 1, 17:07)
- Nuanced & self-aware: Both hosts acknowledge the hypocrisy in the AI industry and the inevitability of copying in current model-centric AI development.
Memorable Moments & Quotes
- On challenges to IP in AI:
"If DeepSeek can do this, anyone can do this and if everyone’s like, well, Anthropic has the best tone and their models sounds the best, blah blah blah, okay, well if you could just do a model distillation method, you just make a bunch of fake accounts. Like you can kind of clone them too, you know?" (Host 2, 05:40)
- On open source as industry disruptor:
"And I think that this same thing could happen with text models, image models, and a lot of other things where there’s all sorts of creative endeavors, interesting projects, things people could test out, experiment with and try, but it’s just too expensive sometimes for all of these models. So like if you ran it locally just on your computer, that’d be amazing." (Host 2, 08:37)
- On the future of AI model economics:
"Making it a little more democratized, if you will, and giving people options is always a good thing. You know, supply and demand." (Host 1, 10:56)
Conclusion
The episode offers a lively, in-depth examination of the latest AI industry skirmish between US and Chinese labs, the evolving norms (and ironies) of model training practices, and the exciting outlook for entrepreneurs leveraging rapidly advancing open-source AI tools. The hosts’ energetic, skeptical tone injects humor and realism into what is both a highly technical and deeply relevant topic for anyone hustling in the AI space.
