Podcast Summary
Podcast: Future of Life Institute Podcast
Episode: Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
Date: June 20, 2025
Host: Gus Docker
Guest: Ed Newton-Rex
Overview
This episode features Ed Newton-Rex, a classical composer and long-time AI entrepreneur, discussing one of the most heated debates in AI: whether companies building AI systems are, or should be, respecting creators' copyrights. Having resigned from Stability AI over copyright issues, Ed provides an insider’s perspective on how industry norms have evolved and why creator rights are under threat. The episode covers Ed’s background, the evolution of AI-generated music, the legal and ethical landscape, his new initiative (Fairly Trained), and the broader implications for the future of creative industries and culture.
Key Discussion Points & Insights
1. Ed Newton-Rex's Background and Early Work in AI Music
- Ed's background: Classical composer; started Jukedeck (AI music startup) in 2010: "We were probably about 12 years too early to the generative AI trend." [00:58–02:23]
- Tech evolution: Early efforts relied on rule-based systems and symbolic AI; now, models generate raw audio samples, allowing for much more variety and convincing outputs. [02:28–04:18]
- AI-generated music quality: Today’s AI music can be indistinguishable from human-composed pop music, but classical music remains a challenge for AI. [04:39–06:05]
- "Pop music, including with vocals…you can generate really convincing stuff now, which is already out there in the market competing with human musicians, which I think is a big problem." – Ed [04:39]
2. Impact of AI on Musicians and the Creative Economy
- Why hasn't AI replaced more musicians yet? Public awareness and reporting lag behind reality; displacement is already happening behind the scenes, especially among lesser-known, "long tail" musicians. [06:05–09:28]
- "Taylor Swift will emerge from the AI age relatively unscathed. But the problem is most people are not Taylor Swift." – Ed [06:21]
- Invisible job losses: AI-generated music is already replacing background music in retail and similar spaces worldwide. [06:21–08:51]
- Human connection: Top artists survive due to the unique value of live performance and human relationship—but the vast majority of working musicians are at risk as "hidden" industry jobs are automated away. [09:33–12:15]
3. Resignation from Stability AI and Copyright Controversies
- The trigger: Ed resigned from Stability AI (November 2023) after the company, like others, told the US Copyright Office it considered unlicensed use of copyrighted work for AI training to be “fair use.” [12:22–16:20]
- "It said, we think that training on people's copyrighted work without a license is fair use. And that just goes against everything I stand for." – Ed [00:00, 12:22]
- Industry norm shift: In the 2010s, everyone agreed commercial use required licensing. The precedent changed after big AI companies released research models without licensing, sparking a “gold rush” of unlicensed data use. [16:20–21:03]
- "Everyone copied them. ... Immediately it became the standard approach. And it has massive issues for people." – Ed [16:33]
4. Systemic Issues and the ‘Fair Use’ Defense
- Cycle of unfair competition: Training on licensed data is more expensive, so companies doing things “right” struggle to raise money and compete; investors prefer companies that train on free, unlicensed data. [16:33–21:03]
- Open-source dilemma: Many so-called "open source" models obscure their training data to avoid legal risks; openness alone does not solve or mitigate copyright abuse, and risks are potentially irreversible if the models proliferate. [21:44–26:22]
- "Truly open models ... there is no downstream limitation to how the model can be used. ... Open models are irreversible." – Ed [21:44]
5. The ‘Fairly Trained’ Certification Initiative
- Purpose: To certify AI models that do not train on copyrighted work without a license, demonstrating that viable, ethical AI development is possible. [26:22–31:10]
- "We should highlight ... that there are these companies ... you can go and use models that are built fairly." – Ed [26:35]
- Certification Process: Relies on documentation and trust; companies submit lists of their training data, which are checked for licensing, though true technical verification remains difficult (a minimal sketch of such a check appears below). [31:10–32:56]
- "There is at the moment no way of actually scanning ... and just reverse engineering. ... So we have to have some trust based mechanism." – Ed [31:18]
6. Transparency, Scaling, and Regulatory Challenges
- Barriers to certification at scale: Major players (OpenAI, Google, Anthropic) don’t disclose training data; transparency is the obstacle, not auditing capability. [33:06–37:16]
- Legislation: Proposals in the UK to require disclosure of training-data sources have been blocked; Ed attributes this to the government's closeness to Big Tech rather than to genuine trade-secret concerns. [33:06–37:16]
- Synthetic data issues: Laws requiring transparency must also account for synthetic data (AI-generated data based on copyrighted works), as this can be a form of “laundering copyright.” [37:54–41:24]
- "Synthetic data itself ... can be a way of laundering copyright." – Ed [37:54]
7. The Problem of Enforcement and Model “Escape”
- Detecting hidden infringement: Ed suggests audits and red-teaming (testing for traces of copyrighted works in model outputs) can help, while acknowledging these methods aren't perfect (a toy sketch of the output-overlap idea follows this section). [41:24–42:32]
- Is it too late? Despite the proliferation of open models, even partial enforcement can still curb much of the damage ("Most people don't want to break the law."). [42:59–45:51]
- "You can’t take them back, but you can forbid people from using them.… It’s not going to stop all use, but honestly, it’s going to stop a lot of the use." – Ed [42:59]
8. Wider Reflections: Future of Work, Culture, and Humanism
- Potential for mass labor displacement: Creative work is the “canary in the coal mine”; companies are openly aiming to automate all labor, with funding and talent aligning in this direction. [51:02–56:10]
- "There is a real possibility that we can automate, if not all work ... maybe a huge amount of work." – Ed [53:03]
- Difference from past technological changes: General-purpose AI is different because it’s designed to be broadly applicable, raising unique social risks. [56:42–59:40]
- Political and ethical worries: Many AI leaders appear "willing to trample on people's rights" in pursuit of profit; redistribution and respect for creators’ rights are not priorities. [56:42–59:40]
9. Future of Culture and Authenticity
- Cultural effects: Much creative “entry-level” work is vanishing; remix culture and personalization will increase, but Ed predicts that fixed (“concrete”) works—songs, books—will maintain cultural primacy due to shared experience. [60:06–63:11]
- Recommendation engine effects: The structure of recommendation systems shapes what people hear, usually narrowing diversity rather than broadening it (a toy simulation of this feedback loop follows this section). [66:45–69:23]
- Backlash and the birth of new humanism: Ed anticipates a growing movement in artistic communities emphasizing “authentic” human creation—a new kind of humanism, potentially favoring live, acoustic performance and direct human creativity. [69:49–76:00]
- "I think people in tech still underestimate...the huge strength of feeling against generative AI. … I think what that's going to lead to is what I kind of think of as like a new kind of humanist movement in the arts." – Ed [69:49]
10. Competitive Pressures, Politics, and the Future
- Efficiency ≠ Creativity: Unlike in most industries, efficiency is not the core value in creative work; listeners do not care whether music was made quickly or cheaply. [79:25–81:56]
- Soft power and culture: Political fears about AI competition tend to focus on AGI, not the creative industries; Ed argues that gutting creative industries for an edge in AI would be a strategic mistake. [81:56–84:07]
- Top priority: Ed's ongoing work is to shift public and legislative opinion to ensure fairer outcomes for creators. Fairly Trained and similar efforts are about showing that ethical AI is feasible and advocating for broader change. [84:18–87:10]
- "I think what [creators] face right now is like an existential threat to their industries, to people's ability to make money from being creative and therefore to the art that we all consume." – Ed [84:18]
Notable Quotes
- On the standard industry approach: "Yeah, that's absolutely the standard industry attitude right now. ... No one trained on copyrighted work without a license for a very long time. ... Then two or three companies ... thought, let’s just release this, let’s see what happened. ... Everyone copied them." – Ed [16:33]
- On open-source AI models: "Truly open models ... there is no downstream limitation to how the model can be used. ... open models are irreversible. You can't take [them] back." – Ed [21:44]
- On creators’ existential threat: "[What] the creative industries ... face right now is like an existential threat to their industries, to people's ability to make money from being creative and therefore to the art that we all consume." – Ed [84:18]
- On humanism and the reaction to AI art: "I think what that's going to lead to is ... a new kind of humanist movement in the arts ... a movement towards the authentic and towards the natural and ultimately towards the human." – Ed [69:49–75:48]
Timestamps for Important Segments
- Ed’s AI and music background: [00:58–04:18]
- How AI music threatens most musicians (except a few stars): [06:05–12:15]
- Why Ed resigned from Stability AI over copyright: [12:22–16:20]
- How “fair use” became the industry default: [16:20–21:03]
- Open-source AI and copyright risks: [21:44–26:22]
- Fairly Trained’s certification process: [26:35–32:56]
- Regulatory/legislative battles for transparency: [33:06–41:24]
- Synthetic data as “laundered” copyright: [37:54–41:24]
- Is it too late? Enforcement & model “escape”: [42:59–46:25]
- Mass labor displacement concerns: [51:02–56:10]
- Difference from past tech revolutions: [56:42–59:40]
- Rise of a new “humanist” art movement: [69:49–76:00]
- The case for prioritizing creators’ rights: [84:18–87:10]
Overall Tone and Style
Ed is passionate, principled, and pragmatic. He’s deeply concerned about the direction of AI and creative rights, but maintains optimism that advocacy, legal reform, and ethical entrepreneurship can help shape a better future. The conversation is frank about industry pressures, yet appreciates both technological potential and cultural risks.
For more in-depth resources on policy, advocacy, or the Fairly Trained certification, see Ed’s organization or related literature on creators’ rights and AI transparency.
