The 404 Media Podcast
Episode: "The Final Boss of AI Slop"
Date: October 8, 2025
Hosts: Joseph Cox, Sam Cole, Jason Koebler
Missing: Emanuel
Overview
In this lively episode of The 404 Media Podcast, hosts Joseph, Sam, and Jason discuss the rapid evolution and chaos of OpenAI's "Sora 2" video app — dubbed the "final boss of AI slop." The team explores Sora's impact on copyright, AI voice and video cloning, and its memetic explosion of IP-infringing content. They also dissect the fast spread of watermark removers for Sora-generated videos, and close with an in-depth discussion of the ICEblock and Red Dot apps being removed from the Apple and Google app stores amid political pressure and shifting definitions of "vulnerable groups." The episode combines original reporting, sharp commentary, and the show's signature wry humor about today's tech-media absurdities.
Episode Sections & Key Insights
1. 404 Media Updates & Merch Refresh (00:40–04:44)
- Merch Restock Announcement:
Jason gives a quick breakdown of 404 Media's new merch lineup, including Doom-themed shirts, crewnecks, hats, and long-sleeves, highlighting supply chain issues and tariff impacts.
- "The underlying T-shirts... have gone up in price pretty significantly over the last just like month and a half. We're talking like five to seven dollars per item just for the base T-shirt at wholesale." — Jason (02:26)
2. OpenAI Sora 2 — “Copyright Infringement Machine” (05:22–25:14)
What is Sora 2? (05:22–08:46)
- OpenAI has launched Sora 2: a TikTok-like app that generates AI deepfake-style videos and audio, using only a selfie and a few spoken numbers.
- The app’s point of entry is collecting your face and brief audio for “shockingly good” video and voice cloning, making deepfakes more accessible than ever:
- “Now to make pretty convincing video with synced audio that sounds like your voice, it's like a 17 second process.” — Jason (08:06)
App Experience & “AI Slop” (08:46–12:52)
- Massive leap from complex, time-consuming deepfake technologies to instant and highly realistic content creation.
- The app also lets users combine themselves with others for collaborative deepfake videos, similar to the “Cameo” concept.
- Concerns raised about how easily videos can be pulled off Sora and spread elsewhere, creating viral disinformation:
- “You can just, like, generate the video, download it, and post it elsewhere. And, like, that's probably advantageous for the people doing this because the other platforms are monetized.” — Jason (14:01)
IP Infringement & Viral Content (14:25–22:59)
- Early use of the app is dominated by instantly generated videos featuring copyrighted characters (Spongebob, Pikachu, Simpsons, Mickey Mouse, etc.) in wild — often NSFW or inappropriate — scenarios.
- “I actually saw a video of Sam Altman grilling a dead Pikachu on a charcoal grill... him going like, and like, cutting into Pikachu and being like, ‘I'm gonna get sued for this.’” — Jason (15:04)
- Hosts highlight the legal absurdity: OpenAI, a huge company, enables mass copyright infringement while companies like Nintendo or Disney have historically targeted individual fans for far less.
- “Then to see just like this, anything goes... is kind of insane to see how far we've come.” — Jason (17:26)
- The opt-out system for copyright holders (per character, not per company) is described as a weak, reactive measure and the opposite of how copyright law works.
The Lockdown: Companies Respond (20:37–24:01)
- Within a week, many IP holders have begun removing their characters, deflating much of the viral fun (“Sora is no fun anymore because everything I try to do is a content violation.”).
- Meta commentary emerges as users generate videos of Sam Altman complaining about fun being regulated out of Sora.
Sora’s Fate and AI Slop’s Future (22:59–25:14)
- Panel questions Sora’s longevity now that most recognizable IP is blocked.
- Consensus: Sora will likely remain a "slop generator" for generic AI content, with users exporting viral slop to other platforms for reach and monetization.
3. Watermark Removers: The Futile Fight for Attribution (25:14–27:56)
- Sora 2 watermarks its videos with a spinning icon, but within 24 hours, a cottage industry of tools appears to remove these marks, allowing videos to blend with “real” content across platforms.
- “Watermarks are just such a flimsy indicator of AI content in general... It’s very predictable that people would try to take it off of Sora videos.” — Sam (26:12)
- Sam notes watermarks are easily removed and also spoofable: weak as either authentication or warning.
- Hot take: Only invisible (hashed/thumbprint) watermarks remain meaningful, but these aren’t used by Sora in a meaningful way.
4. ICEblock and Red Dot App Removals: Tech Companies & Political Pressure (34:01–48:35)
What Are ICEblock and Red Dot? (34:01–36:09)
- Crowd-sourcing apps for reporting ICE official sightings, used by activists to warn local communities.
- ICEblock drew massive government ire after CNN covered it; became a target of the Trump administration.
Apple and Google Crackdowns (37:18–45:16)
- Apple removed ICEblock in direct response to political pressure, citing "objectionable content" and vague references to targeting groups (39:29).
- Jason notes the hypocrisy compared to conservative outrage over much less government pressure on social platforms under Biden:
- “This is like literally the exact same thing, arguably even worse... they're saying, hey, delete this app or else essentially. I mean, I think that if the DOJ comes to you and says like, delete this app, the or else is implied.” — Jason (40:40)
- Google removed Red Dot (another ICE app) soon after, referencing new “violent acts” — specifically, a shooting at an ICE facility — and labeling “ICE agents” as a “vulnerable group.”
- “Google said it removed apps that share the location of what Google described as a vulnerable group after a recent violent act... which is like, wow, that's a lot of words to say... you consider ICE officials a vulnerable group.” — Joseph (43:00)
Redefining “Vulnerable Groups” (45:16–48:35)
- Jason highlights how content moderation language intended for marginalized people is now co-opted to protect law enforcement at the expense of actual vulnerable communities:
- “In no world was like police officers, law enforcement, like a vulnerable group when these rules were written... now Google, like, parroting that language... retrofitting them to come up with a pretext to ban this app.” — Jason (45:16)
- Panel notes real-life consequences, including communities left in the dark and tech platforms placating political power rather than upholding speech protections.
- Quote from Fire App (another ICE reporting tool):
“This raises serious questions about fairness and transparency. This action seems to be based more so on fear of retaliation and retribution from the Trump administration. And in line of kissing the metaphorical ring, something we unfortunately have seen many top executives do in order to placate Trump on their side.” — Fire App statement (48:35)
Notable & Memorable Quotes
- On Sora's Rapid Deepfake Tech:
"To make pretty convincing video with synced audio that sounds like your voice. It's like a 17 second process." — Jason (08:06)
- On IP Infringement Disparity:
"Then to see just like this, anything goes, extremely blatant copyright infringement by a company that's worth billions... is kind of insane." — Jason (17:26)
- On Companies Opting-Out Per Character:
"Nintendo's gonna have to send a thousand letters and open a thousand different cases about Pokémon..." — Joseph (19:59)
- On Protections for Law Enforcement:
"In no world was like police officers, law enforcement, like a vulnerable group when these rules were written." — Jason (45:16)
- On Watermarks and Detection:
"Watermarks are just such a flimsy indicator of AI content in general... There are types of watermarking that... are much harder to remove. You have to edit the video to actually be able to see it." — Sam (26:12)
Timestamps
- Merch update & tariffs: 00:40–04:44
- OpenAI Sora 2 intro/tech deepdive: 05:22–12:52
- Sora’s copyright chaos: 14:25–19:59
- Company opt-outs & aftermath: 20:37–24:01
- Watermark removers & authenticity: 25:14–27:56
- ICEblock/Red Dot app removals: 34:01–48:35
Tone & Takeaways
The conversation is sharp, unsparing, and laced with the 404 crew’s characteristic dry humor, frustration, and worry about technological progress outpacing legal and ethical guardrails. The hosts bring their investigative and reporting chops to bear, contextualizing the rapid developments and highlighting the absurdity of both copyright law’s application to AI and the ongoing political weaponization of tech platforms.
For anyone who missed the episode:
You’ll walk away understanding the technological and political stakes of OpenAI’s Sora, the dizzying speed of AI video tech, and the increasingly fraught politics of moderation as tech giants clash with state power — and sometimes, with their own rules.
