The Think Media Podcast, Ep. 493: These AI Mistakes Could Destroy Your Channel (Attorney Explains the Legal Risks)
Host: Sean Cannell
Guest: Autumn Witt Boyd—attorney for content creators and online business owners
Date: March 5, 2026
Episode Overview
This episode dives deep into the rapidly changing legal landscape for content creators using AI—highlighting critical legal mistakes around copyright, data privacy, and team management. Featuring attorney Autumn Witt Boyd, it offers actionable advice, cautionary tales, and a checklist to keep your online video business protected and profitable.
Key Discussion Points & Insights
1. Biggest Legal Mistake with AI Right Now
- Overreliance on AI as fact ([01:21])
- “They are relying on it as if it's Google and it's giving them the right answer… but if a law has changed, if you're throwing a new scenario at it that it is not in its library, it is just going to make things up, it's going to hallucinate.” — Autumn ([01:21])
- Practical Insight: AI-generated answers, especially on legal or compliance questions, can be outdated, wrong, or entirely made up (“hallucinations”).
2. Ownership and Copyright Issues
- Do you own AI-generated scripts and content? ([02:48])
- "If you are just typing in a prompt and taking what the AI gives you and using it as a script for your video… that is not protected by copyright law." — Autumn ([02:48])
- Purely AI-generated work lacks copyright protection. Adding your “human touch” (input, editing, creativity) is crucial for ownership.
- On evolving law: “There's a case that's working its way up to the Supreme Court that's challenging that. So stay tuned. That may change.” — Autumn ([02:50])
- Music & Iterative Prompting ([04:37])
- AI music platforms like Suno can create fully AI-generated music, but under current rulings, even creative iterative prompting does not grant copyright protection.
3. Monetization and Platform Risk
- Can you monetize AI-generated content? ([06:09])
- “‘Can I use it?’ is going to depend on YouTube's terms of service… And that probably will continue to evolve.” — Autumn ([06:09])
- YouTube and other platforms are actively demonetizing “generic AI slop” and repetitive content.
- The “risk” is inadvertently copying others—AI tools generate outputs based on pre-existing works and could infringe trademarks, logos, or copyrighted work.
4. Accidental Infringement and Being Copied
- What if AI copies someone else's work, or someone uses AI to copy yours?
- “You could accidentally copy someone else's work because, again, AI is pulling from its library…” — Autumn ([00:55])
- For creators whose work is copied: options include using AI-powered bots to crawl for infringement, sending takedown notices, or lawsuits—but “it’s like whack-a-mole.” ([08:56])
- “Most of them are making no impact at all on your sales... No one’s confused about whether it's Sean's channel or Joe Bob's.” — Autumn ([09:41])
5. Managing Legal Anxiety and Scaling Risks
- “How much we need to worry about legal is kind of a spectrum depending on how much risk there is in your business.” — Autumn ([10:55])
- Early-stage creators: focus more on building; your legal exposure is smaller.
- As you grow—audience, revenue, team—increase your legal protection and resource allocation.
6. Major Copyright Lawsuits & Content Training
- Landmark Cases ([12:08])
- The New York Times v. OpenAI/Microsoft: The Times claims its paywalled content was used to train AI models, a potential copyright violation.
- "The New York Times wouldn't have licensed it. So they kind of decided to ask for forgiveness rather than permission." — Autumn ([13:37])
- Similar lawsuits from book authors.
- The outcome will set foundational precedents for AI training and content use.
Notable Quotes & Moments
- “Using a picture in a thumbnail is usually going to be infringement.” — Autumn ([15:23])
- “If you are relying on fair use, you are going to get sued maybe, and then you have to go to court and say, no, it fits in this category, so it's okay.” — Autumn ([15:56])
- “Are you transforming the original work into something new? …That has become a really important thing that people are looking at.” ([18:14])
- “If everyone else is doing it, that's probably a good indication that maybe the copyright holder… doesn't mind. Now, you can still get burned...” — Autumn ([19:06])
Important Segments & Timestamps
| Segment | Description | Timestamp |
|------------------------------|----------------------------------------------------------------------|---------------|
| Opening & Guest Intro | Context on AI use/copyright | 00:00–01:21 |
| Legal risks of AI-as-Google | Why generative AI can hallucinate | 01:21–02:27 |
| Copyright on AI content | US law status, “human touch,” emerging Supreme Court case | 02:27–05:15 |
| Monetization & Platform Risk | YouTube rules, demonetizations, accidental infringement | 06:09–07:13 |
| Risk of being copied | Copyright infringement, DMCA takedowns, practical realities | 08:13–10:07 |
| Anxiety, scale, & risk | Decision-making frameworks, business stage considerations | 10:55–12:08 |
| Landmark lawsuits | NYT v OpenAI, content scraping, future of copyright in AI | 12:08–14:29 |
| Thumbnails, fair use, risk | Use of famous faces, IP, parody/critique, “petals” of fair use | 15:23–19:46 |
| Data privacy & client data | Client consent, risk with free AI tools, laws like GDPR/CCPA | 20:08–23:58 |
| Platform security, policies | Tool settings, client communication, privacy practice checklist | 25:14–27:17 |
| Team risk & contractor use | Oversight, contractor issues, proprietary info, AI policy | 27:21–29:44 |
| Group programs & AI bots | Zoom calls, note-taker bots, confidentiality, kicking out bots | 29:44–30:49 |
| The “Human in the Middle” Rule | Use AI to brainstorm/finish, but human creativity is key | 33:44–35:09 |
| Rapid-fire Q&A | AI for voiceovers, thumbnails, cease and desist, trademarks, hiring | 35:09–37:41 |
| Final checklist/action items | Review contracts, check AI tool privacy, audit team AI usage | 38:16–39:40 |
Actionable Checklist: What Should Creators Do Next? ([38:16])
- Review (and update) your contracts
  - Add language disclosing AI use if you work with clients/coaching ([38:16])
  - Make sure contracts reflect how you actually do business
- Check your AI tool privacy settings
  - See what data is shared, and update as new options become available
- Communicate with your team (if you have one)
  - Ask what tools they're using
  - Decide on a reasonable, practical AI policy for usage and security
Best Practices for AI and Legal Safety
- Use AI for brainstorming, outlines, and final proofing—never publish unedited, AI-only scripts
- Keep “human in the middle” for unique value and copyright protection ([33:44])
- Don’t upload sensitive client info to free/public AI tools without consent
- If you have a team, make sure you know what AI tools they’re using and set policy
- For thumbnails or trending content: get a license if possible, or use only for clear commentary/parody, being mindful of risk
Resources & Links
- Fine Print Academy: For creators/businesses to lock down contracts and legal protections.
- AWB Firm: Autumn’s law practice—@AWBFirm everywhere ([40:23])
- Legal tools mentioned: Lexis, Perplexity, Fathom
Language & Tone
Throughout the episode, both Sean and Autumn are pragmatic yet friendly, blending lawyerly caution (“That’s copyright infringement. Full stop.”) with encouragement to “keep creating” and not get paralyzed by fear.
Memorable Soundbites
- “Ideally, I don't think we want to be reading a script verbatim so that you got the human at the beginning and... the end, but that human in the middle is where, like, all the magic happens.”
- “AI isn't illegal, but using it blindly can be dangerous.” — Sean ([39:40])
Conclusion
AI offers amazing tools for content creation, but the legal landscape is changing rapidly—most creators need to be more careful about copyright, data privacy, and what their teams are doing with AI. The “human in the middle” rule is a practical framework: let AI help, but keep creative control and oversight. Review your contracts, monitor your tools, and communicate clearly to keep your channel (and business) safe and scalable.
