The Joe Rogan Experience Fan
Episode: TikTok Updates Safety Tools With AI Opt-Out
Date: November 21, 2025
Host: The Joe Rogan Experience of AI
Episode Overview
In this episode, the host explores TikTok’s latest safety and content management updates—specifically, new tools that let users control how much AI-generated content appears in their feed. Drawing on trends Joe Rogan covers (like AI and tech), the show analyzes TikTok's strategy, the rapid evolution of synthetic media across social platforms, the challenges of distinguishing real from AI content, and why labeling and education are now key to digital literacy.
Key Discussion Points & Insights
1. The Rise of AI-Generated Content on Social Media
- Background: TikTok, once driven entirely by user-generated content, now faces a growing influx of AI-generated material—also called AIGC (Artificial Intelligence Generated Content).
- Industry trend: Not only TikTok, but platforms like Meta (with the Vibes app), OpenAI (Sora), and others are embracing AI-generated media.
- “[Gone] are the days of these really polished videos. TikTok really pushed that forward—all of a sudden we have AI generated content that is taking over the for you page.” (03:02)
- Impact: There is a divide between people who celebrate this creative evolution and those who worry about feeds turning into “AI slop generators.”
- “[Some people] feel like social media feeds are going to become AI slop generators and some people think it is a way to express your creativity... I've heard both... A lot of people complain.” (01:44)
2. TikTok’s New AIGC Controls
- Manage Topics Tool: Users can now adjust the amount of AI-generated content they see via a slider in their Content Preferences (a rough sketch of how such a slider could feed into ranking follows this list).
- “This is what they said about it. They said Manage Topic already enables people to adjust how often they see content related to over 10 categories like dance, sports, food and drinks... the AIGC setting is intended to help people tailor the diverse range of content in their feed rather than removing or replacing content in feeds entirely.” (04:02)
- Personalization: The host praises the feature for empowering users to “hard code” their content preferences—regardless of historical engagement.
- “What I do like is that they're actually giving you the ability to hard code this regardless of what you've engaged with in the past, and just shut down certain categories and one of those will be AI generated.” (05:33)
- Example: The host jokes about wanting to filter out dance videos but see more content on tech and AI. (04:45)
- Implementation Timeline: The feature is rolling out over the next few weeks. (09:01)
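The episode doesn't describe TikTok's internals, but the slider idea is easy to picture as a per-category weight applied on top of whatever engagement score the ranking model produces. The sketch below is a minimal, purely hypothetical Python model (all names, categories, and weights are invented for illustration); a weight of 0 acts like the "hard coded" opt-out the host describes.

```python
# Hypothetical sketch: how a per-category preference slider (e.g. "aigc")
# might be layered on top of an engagement-based ranking score.
# All names and weights are illustrative, not TikTok's actual system.

from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    category: str            # e.g. "dance", "sports", "aigc"
    engagement_score: float  # score from the usual ranking model

def apply_topic_preferences(candidates, slider):
    """slider maps category -> 0.0 (never show) .. 1.0 (default amount)."""
    ranked = []
    for c in candidates:
        weight = slider.get(c.category, 1.0)
        if weight == 0.0:        # "hard coded" opt-out for a category
            continue
        ranked.append((c.engagement_score * weight, c))
    return [c for _, c in sorted(ranked, key=lambda t: t[0], reverse=True)]

feed = apply_topic_preferences(
    [Candidate("v1", "aigc", 0.9), Candidate("v2", "tech", 0.7),
     Candidate("v3", "dance", 0.8)],
    slider={"aigc": 0.2, "dance": 0.0},  # dial AIGC way down, block dance
)
print([c.video_id for c in feed])  # ['v2', 'v1']
```

The point of the multiplier-plus-filter design in this toy is that it "tailors the diverse range of content" rather than removing everything, which matches how the host describes the setting.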
3. Algorithmic Challenges & Engagement
- Current State: Until now, TikTok’s algorithm has favored content based on engagement (likes, comments, even hate-watching).
- “Whatever you engage with grows.” (05:07)
- “You are now going to see a ton more content from that person, because that is what the algorithm thinks you engage with.” (05:14)
- Problem: Many users complain about unwanted “spam” content, but don’t realize that engagement—even negative—increases that content in their feed. (05:00–05:35)
- Solution: The new granular controls let users override the engagement feedback loop directly, giving them more agency over their feed (a toy simulation of this loop follows below).
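To make the "whatever you engage with grows" dynamic concrete, here is a toy simulation, entirely hypothetical and not TikTok's actual update rule: the model only records that an interaction happened, not whether the viewer enjoyed it, so hate-watching boosts a creator just like genuine interest, and only an explicit preference override breaks the loop.

```python
# Toy simulation of an engagement feedback loop: every interaction,
# even a negative one (hate-watching, angry comments), raises the
# affinity used to rank a creator, so their share of the feed grows.
# Purely illustrative; the update rule and numbers are made up.

affinity = {"spam_creator": 1.0, "tech_creator": 1.0}

def record_interaction(creator, watched_seconds, commented):
    # The model only sees *that* you engaged, not *why*.
    affinity[creator] += 0.1 * watched_seconds + (0.5 if commented else 0.0)

# User hate-watches and angrily comments on spam for a week...
for _ in range(7):
    record_interaction("spam_creator", watched_seconds=30, commented=True)
record_interaction("tech_creator", watched_seconds=10, commented=False)

print(affinity)  # spam_creator's score now dwarfs tech_creator's

# A "Manage Topics"-style override sidesteps the loop entirely:
blocked = {"spam_creator"}
feed = [c for c, _ in sorted(affinity.items(), key=lambda kv: -kv[1])
        if c not in blocked]
print(feed)  # ['tech_creator']
```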
4. The Problem of Deepfakes and Distinguishing Reality
- Viral Example: The host recounts seeing a hyperrealistic, absurd AI-generated video of a man, a deer, and a bear in a porta potty—a video so convincing people believed it was real until subtle glitches revealed its synthetic nature.
- “For the first two seconds you just think, oh, my gosh, that’s crazy. And then you’re like, how can a bear jump through that window?... all these weird things and like the toilet paper roll, like, disappears. And then all of a sudden the deer disappears and the guy disappears.” (07:04–07:40)
- Wider Reach: Such clips mislead viewers not just on TikTok but across platforms like LinkedIn and Facebook, often deceiving users who are less digitally literate (e.g., elderly users).
- “People have figured out ways to export them off of [Sora] and send them everywhere... Most of them did not realize it was AI generated.” (06:42–07:58)
5. Labeling, Watermarking, and Digital Provenance
- Current Practices: TikTok requires labels on realistic AI-generated content and uses Content Credentials, the C2PA standard from the Coalition for Content Provenance and Authenticity, to embed provenance metadata in media files.
- Limitations: Editing or re-uploading can strip metadata and remove the label.
- “You download it, it’s in the metadata, you go open it in Adobe Premiere... and change all of the metadata.” (10:08)
- New Technology—Invisible Watermarking:
- TikTok will add imperceptible watermarks at the pixel level to content made with its AI tools or uploaded with C2PA credentials—making it harder to conceal an AI origin, even if metadata is scrubbed.
- “The watermark is in the video itself, you won’t be able to see it, but there’s little pixels in there that TikTok can detect and they will know.” (11:10)
- Scope: Watermarking will apply to content created with TikTok’s AI Editor Pro and to uploads that carry C2PA Content Credentials (a toy illustration of pixel-level watermarking follows below).
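TikTok has not published how its invisible watermark works, so the snippet below only illustrates the general idea with a classic toy technique, least-significant-bit embedding via numpy: the mark lives in the pixel values themselves, so stripping the file's metadata does not remove it. A production provenance watermark would need to survive re-encoding, cropping, and filters, which this toy does not.

```python
# Toy least-significant-bit (LSB) watermark: the mark is stored in the
# pixels themselves, so wiping the file's metadata leaves it intact.
# Illustrative only; this is NOT TikTok's scheme and would not survive
# re-encoding, resizing, or filtering the way a production watermark must.

import numpy as np

def embed(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one watermark bit into the LSB of each of the first N pixels."""
    marked = frame.copy()
    flat = marked.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return marked

def detect(frame: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the LSBs back out of the first n_bits pixels."""
    return frame.reshape(-1)[:n_bits] & 1

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # fake video frame
watermark = rng.integers(0, 2, size=32, dtype=np.uint8)      # 32-bit mark

marked = embed(frame, watermark)
assert np.array_equal(detect(marked, 32), watermark)                 # detectable
assert np.max(np.abs(marked.astype(int) - frame.astype(int))) <= 1   # imperceptible
```

Running this, the detector recovers the 32-bit mark from the marked frame while no pixel changes by more than one intensity level, which is the sense in which the host says "you won't be able to see it, but there's little pixels in there that TikTok can detect."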
6. Education and AI Literacy Initiatives
- New $2 Million AI Literacy Fund: TikTok is funding experts and nonprofits to create content that teaches safe, critical consumption of AI-generated media.
- “They are launching a $2 million AI literacy fund, which is aimed at experts... to create content that teaches people about AI literacy and safety.” (12:25)
- Why it Matters: The host notes the “cat and mouse” dynamic—tech advances to conceal media’s AI origins, but user education remains crucial.
- “You just have to teach people what to look out for and how to know if they can actually trust a video is AI generated or not. So I appreciate their approach on the education side of this as well.” (13:15)
Notable Quotes & Memorable Moments
- On AI Content Controls:
- “I personally think this is a fantastic move if you can actually go and completely ban certain categories.” (04:23)
- On Algorithm Manipulation:
- “Whatever you engage with grows.” (05:07)
- On Deceptive AI Video:
- “For the first two seconds you just think, oh, my gosh, that’s crazy. And then you’re like ... how can a bear jump through that window?” (07:13)
- On Watermarking Efforts:
- “The watermark is in the video itself, you won’t be able to see it, but there’s little pixels in there that TikTok can detect and they will know.” (11:10)
- On the Importance of AI Literacy:
- “At the end of the day it's always a cat and mouse game where people are trying to hide that it's AI generated ... and you just have to teach people what to look out for and how to know if they can actually trust a video is AI generated or not.” (13:10)
Timestamps for Important Segments
- [01:27] — Introduction of TikTok’s new AI content controls
- [04:02] — Explanation of Manage Topics Tool and how users can adjust AIGC levels
- [05:00] — Discussion about TikTok’s algorithm and user engagement traps
- [06:38] — AI-generated viral video example and challenges of distinguishing fake from real
- [09:01] — How to access and use the new AI slider
- [10:08] — Explanation of metadata labeling and its vulnerabilities
- [11:10] — The details and implications of invisible watermarking
- [12:25] — The AI Literacy Fund and TikTok’s educational initiatives
- [13:10] — Concluding remarks on the arms race between AI media generation and consumer discernment
Summary
This episode delivers a detailed, enthusiastic breakdown of TikTok's proactive shift: giving users direct control over AI-generated content, improving labeling with both metadata and invisible watermarks, and investing in AI literacy. The host, channeling the spirit of Rogan’s tech curiosity and skepticism, commends TikTok's approach but notes the ongoing arms race—technological and educational—between AI fakery and human discernment.
