Podcast Summary: On the Media – “In 2025, Big Tech Embraced Fakeness”
Date: December 17, 2025
Host: Brooke Gladstone (WNYC Studios)
Guest: Craig Silverman (Co-founder, Indicator)
Episode Overview
This episode of On the Media delivers an incisive post-mortem on a chaotic year for artificial intelligence and misinformation. Host Brooke Gladstone is joined by media deception expert Craig Silverman to dissect how Big Tech—especially Meta, Google, OpenAI, and others—shifted away from content moderation and fact-checking in 2025, fueling an online ecosystem where “fakeness” blossomed. Together, they examine the interplay of political pressures, business incentives, and technological advancements that allowed AI-powered disinformation and scams to flourish, as well as the societal implications and possible futures for the media environment.
Key Discussion Points and Insights
1. The Great Moderation Rollback (00:30–05:21)
- Trigger Event: The re-election of Donald Trump was cited as the catalyst for tech platforms to drastically reduce content moderation and fact-checking efforts.
- Craig Silverman: "The trigger for a lot of this rollback and for Mark Zuckerberg doing what some people have referred to as a hostage video... was, of course, Donald Trump gets reelected." (01:32)
- Meta’s Policy Shift: Meta announced plans to eliminate professional fact-checkers and replace them with Community Notes, similar to those on X (formerly Twitter).
- Craig Silverman: "We're going to get rid of fact checkers and replace them with community notes similar to X. Starting in the US..." (01:36)
- Industry-Wide Effect: Other tech giants (notably Google) also distanced themselves from fact-checking; Google, for example, stopped showing "data void" warnings and resisted integrating moderation tools.
- "In January, Google told the European Commission it wouldn’t integrate fact checking…” (02:56)
- Reframing Fact-Checking: Content moderation and fact-checking were reframed as censorship, especially by right-leaning groups and politicians.
- Craig Silverman: "The reframing of fact checking and speech... as censorship was a pretty good judo trick." (04:16)
2. AI-Powered Fakes and Monetization (05:21–08:46)
- Tech’s Financial Incentives: Platforms now actively incentivize and reward the production of viral AI-generated slop (misinformation, impersonations, hoaxes), sometimes directly paying creators for engagement.
- Example: A Maine-based hoax creator, previously living off ad revenues, now gets paid by Meta for engagement on his false content. (06:07)
- Mainstreaming Impersonation: OpenAI and Google relaxed prohibitions on photorealistic impersonations of real people, facilitating deepfakes and scams.
- Craig Silverman: "OpenAI makes the change in March... now impersonation is in." (07:24)
- Consequences: These impersonations fuel everything from individual scams (people losing life savings to impersonator ads) to corporate fraud.
- Craig Silverman: "People are cloning the voices of executives... and getting them to transfer money." (08:25)
3. Erosion and Replacement of Fact-Checking (08:46–11:55)
- Platform Approaches: While TikTok and Bluesky modestly improved verification systems, major companies largely substituted professional moderation with crowdsourced or automated alternatives.
- Community Notes as Facade: Community Notes, touted as participatory, have been under-resourced and are minimally effective—serving more as “a fig leaf than an actual real good faith effort.” (11:24)
4. “Slop” and the New Agents of Deception (11:55–16:54)
- Types of Bad Actors:
- State Propagandists: Countries like Russia harness AI to mass-produce tailored fakes and propaganda (e.g., videos about Zelenskyy’s imagined properties).
- Hustlers/Marketers: Entrepreneurs abuse AI-enhanced tools and “study hacks” to generate deceptive advertising and content, often in violation of FTC rules on disclosures.
- Memorable Moment: Silverman describes Andreessen Horowitz funding bot-farm ad companies, potentially enabling fraud even against their own portfolio companies. (16:00)
5. The Golden Age of ‘Diddy Slop’ and Viral Disinformation (16:54–19:32)
- YouTube’s AI Slop Wave: Dozens of channels pumped out AI-generated videos about the P. Diddy trial, using unrelated celebrities and clickbait headlines to amass millions of views. (16:54)
- Host Personal Anecdote: Brooke Gladstone realizes she too fell for a likely fake video about Prince and the Diddy trial, highlighting universal vulnerability to well-crafted slop.
- Brooke Gladstone: “I totally believed and now I wonder whether it’s completely wrong…” (18:06)
- Universal Susceptibility:
- Craig Silverman: “It is not an intelligence thing… Any of us at any time can be persuaded of something.” (18:46)
6. The Skepticism Trap and Media Literacy (19:32–20:51)
- From Healthy Doubt to Post-Truth: Excessive skepticism erodes trust not only in fake news but also in accurate reporting, potentially fueling conspiracy theories.
- Silverman: “Simply to say… don’t believe everything you read, like, that’s not actionable advice. We actually do need... steps you take.” (19:53)
- Loss of Moderation Data: The end of "data void" warnings on Google means misinformers rush to fill those spaces.
7. Algorithmic Incentives and Shamelessness (20:51–23:20)
- Scammers and Tech Incentivization: The hosts summarize an ecosystem in which scammy, monetized content overwhelms curated, truthful information, enabled by reduced moderation.
- Notable Quote: “The business imperative... is to show that you are a leader in the creation of AI tools.” (24:07)
- Notable Rogan Example: Joe Rogan falls for a deepfake of Tim Walz, only to "explain it away" when corrected; shame fades as a check on spreading misinformation.
- Brooke Gladstone: “Truthiness lives just as Colbert discussed... if it feels true, it is true.” (22:04)
8. International and Financial Dynamics (23:20–29:27)
- Globalization of US Polarization: Foreign-run pages increasingly manufacture US culture-war hoaxes for profit, often escaping the weakened enforcement of platform policies.
- Economic Realities and Bubble Talk: Despite tech’s AI fervor, business adoption has slowed; Big Tech can absorb failed bets, but a broader bust may loom for the AI startup surge.
- Brooke Gladstone reads from The Economist: “The employment weighted share of Americans using AI at work has fallen... at 11%.” (24:58)
- Craig Silverman: “If they don't close that gap... a lot of this is going to have just supercharged scams and fraud and impersonation and slop.” (26:13)
9. Individual Agency and Hopeful Action (29:27–30:48)
- Attention as Leverage: The episode closes with a reflection on the power of individual attention in influencing algorithms and economic incentives on platforms.
- Craig Silverman: “We individually are the atomic units that these big tech platforms need… So being conscious... thinking about what you reward with your attention, it’s valuable.” (29:34)
Notable Quotes & Memorable Moments
| Timestamp | Speaker | Quote/Description |
|-----------|---------|-------------------|
| 01:32 | Craig Silverman | "The trigger for a lot of this rollback... was, of course, Donald Trump gets reelected." |
| 04:16 | Craig Silverman | "The reframing of fact checking and speech... as censorship was a pretty good judo trick." |
| 06:07 | Craig Silverman | "He is getting paid by Meta now every single month for how viral his hoaxes are." |
| 07:24 | Craig Silverman | "OpenAI makes the change in March where these photorealistic things representing real people..." |
| 11:24 | Craig Silverman | "As a model, [Community Notes is] turning into more of a fig leaf than an actual real good faith effort." |
| 16:00 | Craig Silverman | "Andreessen Horowitz... invested a million dollars in a company that is building bot farms..." |
| 18:46 | Craig Silverman | "It is not an intelligence thing... Any of us at any time can be persuaded of something." |
| 22:04 | Brooke Gladstone | "Truthiness lives just as Colbert discussed... if it feels true, it is true." |
| 24:07 | Craig Silverman | "The business imperative now for tech platforms is to show that you are a leader in AI tools..." |
| 29:34 | Craig Silverman | "We individually are the atomic units... So being conscious... thinking about what you reward with your attention, it’s valuable." |
Timestamps for Key Segments
- 00:30–05:21: Tech platforms roll back fact-checking and moderation
- 05:21–08:46: AI tools enable scams, impersonation, and monetized fake content
- 08:46–11:55: New moderation systems (Community Notes, etc.) and their weaknesses
- 11:55–16:54: State-backed and hustler-generated AI “slop”
- 16:54–19:32: AI slop in media events; commentary on susceptibility to slop
- 19:32–20:51: The dangers of excessive skepticism and media literacy limits
- 20:51–23:20: Incentives for slop, disappearance of moderation standards
- 23:20–24:58: International scammers and platform enforcement
- 24:58–29:27: AI bubble, tech industry resilience, and business adoption
- 29:27–30:48: Personal attention as an act of digital agency
Conclusion
The episode offers a sobering, detailed chronicle of a year when Big Tech consciously embraced “fakeness”—loosening guardrails in response to political pressure, business imperatives, and competitive fear, all while unleashing a torrent of profitable, AI-powered misinformation. Gladstone and Silverman underscore that the real cost is not just corporate or political but social and epistemological, as skepticism morphs into nihilism and truthiness reigns. Despite the bleak assessment, the show ends on a note of individual empowerment: the simple act of choosing where to put your attention may matter more than ever.
