What A Day — “The Politics Of ‘AI Slop’”
Podcast: What A Day (Crooked Media)
Host: Jane Coaston
Guest: Jason Koebler (Co-Founder, 404 Media)
Date: October 6, 2025
Episode Focus: The proliferation of low-quality, misleading, and profit-driven AI-generated content—aka “AI slop”—and its growing impact on news, social media, and US politics.
Overview
This episode examines the increasing presence of “AI slop”: cheaply produced, AI-generated images, videos, and text designed primarily to capture attention and revenue, often with little regard for truth or quality. Host Jane Coaston brings on Jason Koebler, co-founder of 404 Media, for an in-depth discussion of the mechanics and dangers of AI slop, especially in the political sphere; the conversation closes with practical advice on how listeners can spot such content.
Key Discussion Points & Insights
1. Defining “AI Slop”
Time: 02:54–03:50
- Jane Coaston and Jason Koebler clarify the term “AI slop.”
- Koebler defines it as “AI generated content that is designed to make money first and foremost… a lot of this stuff is just so bizarre that it's made to have you sit there and go, like, what the hell is this?” (03:03)
- He recounts viral, AI-manipulated images (e.g., chainsaw woodcarver stories) that cycle and morph for maximum reach, often at the expense of genuine creators.
2. Monetization of AI Content
Time: 03:50–05:12
- Platforms incentivize virality: Facebook, Instagram, TikTok, etc., pay creators fractions of ad revenue for popular content, regardless of legitimacy.
- AI slop’s efficiency: Unlike real content that takes time and effort, “You could make an AI video in 10 seconds…You don't need all of them to go viral. You only need one of them to go viral in order to grow your channel and start collecting this money.” — Jason Koebler (04:57)
3. OpenAI’s “Sora” App—Hyperdrive for AI Slop
Time: 05:12–07:53
- How Sora Works: Users train the app on their facial and vocal data; it rapidly generates realistic videos with alarming fidelity.
- Accessibility & Dangers: “It's pretty scary… It syncs voice and images and video really well. That was a really hard thing to do in AI for a while…” (06:12)
- Guardrails Discussion: Limited controls mostly for user privacy or copyright holders; “If you are a famous person, there’s not a lot of guardrails at the moment.” (07:03)
4. AI Slop in Politics—A New Norm
Time: 07:57–10:06
- Politicians and parties from both sides now regularly use or amplify AI-generated meme content (e.g., AI Trump videos, Gavin Newsom using generated visuals).
- “Now it’s everywhere and I feel like all bets are off… the ball is moving so rapidly in this space that it’s hard to imagine a world where we, like, put things back in the box…” — Jason Koebler (09:08)
- Real-world news coverage is rapidly polluted by AI slop—sometimes within minutes of events breaking.
5. Social Media, Meta, and Oversight
Time: 10:06–11:50
- Meta’s tools make it easier to mass-produce AI-generated political ads; their oversight is lax.
- “They are making tools that allow people to make AI imagery. A lot of my Instagram and Facebook feed is AI generated. And a lot of it is politics…” — Jason Koebler (10:57)
- Viral slop can normalize false or misleading narratives. (“It starts to shape your worldview, I think.”)
6. How to Spot AI Content—and its Limitations
Time: 11:50–13:37
- Traditional signs (blurry hands, weird reflections) don’t always hold; Sora’s quality is making detection harder.
- “My best advice is to like, not trust most things that you see on social media these days, unless it comes from someone who you personally know from a news source that you actually trust.” — Jason Koebler (13:17)
Notable Quotes & Memorable Moments
- On AI Slop’s Business Model:
“They have a lot of bites at the algorithmic apple…You could make an AI video in 10 seconds...You only need one of them to go viral…” — Jason Koebler (04:39–04:57)
- On OpenAI’s Sora:
“It’s pretty scary in my opinion. One thing that’s really scary about it is that it syncs voice and images and video like really well…It’s pretty good at that.” — Jason Koebler (06:12)
- On AI Slop’s Spread in Politics:
“Now it’s everywhere and I feel like all bets are off…The ball is moving so rapidly in this space that it’s hard to imagine a world where we, like, put things back in the box…” — Jason Koebler (09:08)
- On Spotting AI Content:
“I still think that AI video has a bit of a surreal quality to it…but I think it’s getting a lot harder. And I think that we’ve really like passed a Rubicon with Sora…” — Jason Koebler (12:45)
- On Trust and Media Literacy:
“Not trust most things…unless it comes from someone who you personally know from a news source that you actually trust. And I think that that is like kind of where we’re going…you’re going to start to have to trust individual people and institutions versus like the thing that’s viral at that moment.” — Jason Koebler (13:17)
Timestamps for Important Segments
- 00:00–01:58: Cold open with Jane Coaston; context for political AI videos and current events.
- 02:53–03:50: Defining AI Slop and early examples.
- 03:50–05:12: How AI slop is monetized and outpaces genuine content.
- 05:12–07:53: Deep dive on Sora’s technology, features, and potential for harm.
- 07:57–10:06: The normalization of AI slop in politics; examples and consequences.
- 10:06–11:50: Meta/Facebook’s role in facilitating and profiting from AI-generated ads.
- 11:50–13:37: Practical tips for spotting AI imagery and discussion of increasing difficulties.
- 13:43: End of interview.
Tone & Flow
- Jane Coaston maintains a skeptical, incisive, and occasionally irreverent tone (“…makes me want to heave my phone into a fire and move to the woods…”).
- Jason Koebler provides clear, jargon-free explanations and shares concern—but also resignation—about the direction and “sloppiness” of AI content online.
- The episode is brisk, engaging, and informative, balancing specific technical details with broader cultural commentary.
Final Takeaways
- The flood of AI-generated “slop” is transforming online media and political discourse, making it harder to distinguish authentic content from manipulative or merely distracting fakes.
- Major tech platforms are complicit, enabling creation and amplification of AI content for profit while lacking meaningful oversight.
- AI generated content is now so widespread and convincing that skepticism—even of familiar-seeming content—is essential.
- The best defense, per Koebler, is to rely on trusted individuals and reputable news sources instead of viral trends.
“…not trust most things that you see on social media these days, unless it comes from someone who you personally know from a news source that you actually trust.”
— Jason Koebler (13:17)
For listeners: This episode is especially recommended for anyone concerned about digital literacy, tech policy, or the intersection of AI and politics. You’ll come away with a clearer understanding of the mechanics behind modern misinformation and practical steps to stay informed.
