Intelligent Machines 853: All The Clocks Were Wrong
Date: January 15, 2026
Host: Leo Laporte
Co-hosts: Paris Martineau, Jeff Jarvis
Guest: Craig Silverman (Indicator Media; journalist specializing in disinformation)
Episode Overview
This episode delves into the rapidly evolving landscape of disinformation, especially as it intersects with artificial intelligence. The hosts are joined by Craig Silverman—journalist and founder of Indicator Media, credited with popularizing the term "fake news"—to discuss how AI is transforming the creation, detection, and spread of misinformation. The conversation covers real-world case studies (including AI-generated fakes and scam ads), the limitations of current countermeasures, the challenges platforms face, the history and impact of fact-checking, and practical tools for digital verification. The last part of the show pivots toward broader news and tech topics, including personalized AI coding, platform drama, and even a foray into personal security slip-ups and digital nostalgia.
Key Discussion Points & Insights
1. Craig Silverman’s Background & the Origins of “Fake News”
- Silverman reflects on his early reporting with Regret the Error, a blog cataloguing media mistakes and corrections, and his research identifying early fake news websites motivated solely by ad revenue.
“I came across a cluster... of websites that all looked like real news websites... but everything was 100% made up... So I just, in the research report... described them as ‘fake news websites’ because that's what they were to me.” (Craig Silverman, [25:00])
- He clarifies that he "popularized" rather than coined "fake news"; Trump later adopted and redefined the term.
"I usually say I sort of popularized it, and then Trump kind of took it..." (Craig Silverman, [24:15])
2. AI’s Role in Accelerating Disinformation
- AI has "taken the cost of generation to zero," making it trivial to create convincing fake images, videos, and documents en masse.
“It used to take time to come up with... How are we going to generate images to spread... this lie?... Now... it's a matter of minutes and prompts.” (Craig Silverman, [04:46])
- Recent real-world case: an AI-generated food delivery worker ID and internal docs fooled journalists and Redditors, exposing both the ease of creation and the challenges of detection ([08:47]–[11:07]).
3. Detection Challenges & Digital Literacy
- AI detection tools (e.g., watermarking, content credentials, metadata) remain unreliable and inconsistently applied across platforms. ([10:30])
- Traditional methods like reverse image search, metadata checking, and OSINT (open source intelligence) techniques remain vital but are labor-intensive and require a mix of manual and technical skills; a minimal metadata-inspection sketch follows this list.
“Reverse image search is a great tool... but at the end of the day, there's a lot of traditional... visual observation work.” (Craig Silverman, [17:00])
- Indicator Media offers not just reporting but practical workshops and guides for journalists and the public to counter digital deception. ([15:18])
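For illustration only, here is a minimal sketch of the kind of metadata check mentioned above, assuming the Python Pillow library; the filename suspect.jpg is a placeholder. It dumps an image's EXIF fields so a reviewer can spot stripped, missing, or implausible values (AI-generated images often carry little or no camera metadata, which is itself a weak signal).

```python
# Minimal metadata-inspection sketch (assumes the Pillow library is installed).
# Dumps an image's EXIF fields in human-readable form for manual review.
from PIL import Image, ExifTags

def dump_exif(path: str) -> dict:
    """Return a {tag_name: value} dict of the image's EXIF metadata."""
    img = Image.open(path)
    exif = img.getexif()
    readable = {}
    for tag_id, value in exif.items():
        # Map numeric EXIF tag IDs to readable names where known.
        tag = ExifTags.TAGS.get(tag_id, str(tag_id))
        readable[tag] = value
    return readable

if __name__ == "__main__":
    # "suspect.jpg" is a hypothetical example path, not from the episode.
    for tag, value in dump_exif("suspect.jpg").items():
        print(f"{tag}: {value}")
```

As the episode notes, metadata is easy to strip or forge, so this is only one signal to combine with reverse image search, visual observation, and other OSINT work.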
4. Platform Economics and Incentives
- Platforms like Meta earn billions from scam ads, creating little incentive for robust anti-misinformation measures.
“There is a money factor for the platforms ... scam ads do make them money.” (Craig Silverman, [13:43])
5. Fact-checking: Evolution and Backlash
- The movement grew post-2016, with Meta (Facebook) funding much of the global fact-checking industry, an arrangement that both fueled scale and exposed fact-checkers to accusations of bias and dependency.
“[Meta] became the biggest single funder of fact checking in the world... but also not good for the perception of fact checking.” (Craig Silverman, [30:30])
- Fact-checking is valuable for accountability and, at least pre-Trump, as a political deterrent, but it rarely changes deeply held beliefs, especially in highly polarized environments.
"Fact checking is not really good at changing deeply held beliefs. It’s better as kind of an accountability measure." (Craig Silverman, [35:43])
6. Human Behavior & Media Manipulation
- Disinformation exploits confirmation bias and cognitive shortcuts at individual and societal levels.
“We read something that aligns with our suspicions... we're just more likely to accept it as human beings.” (Craig Silverman, [12:15])
- The "firehose of falsehood" model, used in both spam and state-sponsored campaigns, seeks to overwhelm audiences and shape perceptions through volume and repetition; Trump's playbook echoed this intuitively.
“You can flood the zone with [disinformation]... at some point, you just brute force the system.” (Leo Laporte, [38:13])
7. Restoring Trust and Practical Advice
- There's no single fix; responsibility falls increasingly on individuals to be digitally literate, skeptical, and patient.
“You need to value your attention... patience is one of the best things... don’t feel like you have to take action right away.” (Craig Silverman, [40:00])
- Funding and surfacing reliable journalism is critical, but "locking it behind paywalls" isn't ideal; tools must empower both professionals and motivated citizens. ([43:13])
8. AI, Copyright, and Open/Personal Software
- The latter segment covers Leo’s experiment using Claude (Anthropic’s LLM) to auto-generate custom software—an example of how AI is democratizing software creation (“vibe coding”).
- Discussion about who owns code written by AI: According to Anthropic, you (the prompter, modifier, user) own the code, provided there is human authorship/integration.
“Anthropic's consumer terms state... we assign to you all of our right, title, and interest, if any, in outputs; you own the code.” (Leo Laporte citing documents, [96:00])
Notable Quotes & Memorable Moments
Disinformation & AI
- “AI is now one of the best ways to create disinformation.” — Leo Laporte ([04:23])
- “Social media took the cost of distribution to zero; AI took the cost of generation to zero.” — Craig Silverman (paraphrasing Renee DiResta, [04:46])
Fact-Checking & Public Trust
- “Fact-checkers, like many in the disinformation community, were never good at... battling and fighting back. They just want to do the work and not get caught in the battle.” — Craig Silverman ([34:30])
- “Fact checking is not really good at changing deeply held beliefs. It’s better as an accountability measure.” — Craig Silverman ([35:43])
- “We do have to do a BT and a PT—before Trump and post Trump—when it comes to facts and disinformation...” — Leo Laporte ([37:48])
Fake News & Professional Incentives
- “Trump decided to take ownership of the term [fake news], I would say.” — Craig Silverman ([25:00])
- “As much as it does rip off their users... Meta has been aware of it and I think has failed to take important steps... It’s gotta be hard if you’re making $7 billion a year to say, ‘we really shouldn’t be doing that.’” — Leo Laporte ([14:52])
AI Coding and Software Democratization
- “This is going to change software... I’m not writing programs for you to use. I’m running programs for me to use that are tailored to my needs.” — Leo Laporte ([62:18])
- “Once AI starts to improve itself... there’s no limit.” — Leo Laporte ([82:42])
Timestamps for Key Segments
- Craig Silverman’s intro/history of “fake news”: [02:45]–[26:39]
- How AI changes disinformation/scam analysis: [04:46], [08:47], [13:25]
- Tools for detecting deepfakes/AI content & OSINT: [10:30], [16:32]–[20:15]
- Platforms’ incentives, scam ads discussion: [13:25]–[15:18]
- Indicator Media’s models/workshops: [15:18]–[17:00]
- Fact-checking: history, problems, Meta’s role: [29:59]–[34:30]
- Human susceptibility to disinfo, firehose model: [12:15], [38:13]
- Restoring trust/combating info disorder (solutions): [39:54]–[44:08]
- Ownership of AI-written code: [91:53]–[96:56]
- Leo’s AI-software demo/vibe-coding discussion: [54:15]–[79:55]
- Personal security & phishing incident story: [109:15]–[115:05]
- Debate over microplastics reporting in media: [161:21]–[167:22]
Episode Tone & Highlights
- Clever, self-aware banter: Hosts gently poke fun at each other’s “old guy” tech hangups, reference personal favorites (Nicolas Cage, cat appearances), and share failures (Leo’s phishing story).
- Empirical, practical: The guest and hosts share tested methods, real-world case studies, and practical “what can you do?” strategies.
- Cautiously optimistic: Amid warnings about scale and speed of AI-powered disinfo, there’s hope in information literacy, improved tooling, and community-based efforts.
Select Resources and Recommendations
- Indicator Media: For resources on identifying digital deception, OSINT tools, and fact-checking guides ([15:18])
- Reverse image search, metadata inspection: As basic techniques in verification ([17:00])
- Wikiflix & WantMyMTV: Open archives of public domain films and classic MTV content ([157:34], [159:12])
- Consumer Reports on tested-safe protein powders: [152:15]
- Pick of the Week: “The Traitors” Season 4 – Alan Cumming in campy reality competition ([155:49])
Final Thoughts
The episode lays out the double-edged sword of AI in the information ecosystem—making both the creation of deceit simpler and the detection harder, while also democratizing access to powerful digital tools. The guest and hosts agree: the fight against disinformation is increasingly local and personal, demanding new habits, digital skills, and a commitment to supporting credible journalism. As AI gets more entwined with daily life, society must adapt both technologically and behaviorally to avoid falling victim to the next wave of “fake news.”