Hard Fork – January 30, 2026
Episode Title: Tech Grapples With ICE + Casey Tries Clawdbot, a Risky New A.I. Assistant + HatGPT
Hosts: Kevin Roose (New York Times), Casey Newton (Platformer)
Episode Overview
This episode of Hard Fork dives deep into three major topics shaping tech and society this week:
- The tech industry’s fraught relationship with government surveillance and ICE violence in Minneapolis, plus the role of social networks and AI in shaping public narrative
- Casey’s risky personal experiment with Moltbot (formerly Clawdbot), an AI “genie assistant” that runs locally on your computer, and broader implications for AI agent adoption
- A rapid-fire “HatGPT” game where the hosts riff on the week’s weirdest tech headlines, from Amazon’s "Project Dawn" layoffs snafu to Apple’s rumored AI wearable and more
As always, Kevin and Casey bring a sharp, witty, and occasionally irreverent tone to urgent, often unsettling news, aiming to make sense of the hype, hope, and harm in today’s tech landscape.
1. Tech's Complicated Role in ICE Violence and Public Narrative
[03:01–24:51]
The Immediate Context:
- The episode opens with both hosts expressing outrage and concern over the fatal shooting of Alex Preddy by ICE officers in Minneapolis (the second similar killing in a month), calling it a "tense and hard time in American civic life" [04:17].
- The tech industry is implicated both as the infrastructure for surveillance tools (empowering government agencies like ICE) and as steward of online platforms where violence is turned into viral content.
CEOs Respond – But Weakly:
- Sam Altman (OpenAI): Called ICE's actions "going too far" and said Americans have a duty to push back on government overreach (internal Slack to staff) [05:21].
- Dario Amodei (Anthropic): Called events a "horror" publicly on X.
- Tim Cook (Apple): Sent staff a message stating "this is a time for de-escalation."
- The consensus: the CEOs are doing “about the least they possibly could,” caught between fear of White House retaliation and internal employee pressure [06:34].
“I found it a little weak.” – Casey [06:36]
Risks for Speaking Out
- Early, sincere statements (ex: Chris Olah, Anthropic cofounder) quickly become political targets—"Look, this is how their AI models will talk"; even mild critiques can provoke White House ire [07:46].
Social Media as Propaganda Battleground
- Comparing this moment to the aftermath of the Charlie Kirk assassination, the hosts note how the state leans on spectacle, bringing influencers to ICE raids and manufacturing content that pushes its views [08:59].
- Trump administration uses X ("basically state media") for both policy and propaganda [11:01].
- ICE and similar agencies have dedicated content creation teams shaping the government’s narrative [11:24].
“Winning on social media has become almost the entire point.” — Casey [12:24]
AI, Deepfakes, and the Liar’s Dividend
- AI tools are used to alter images and videos of the shootings — to depict Alex Preddy holding a gun instead of a phone, or to fake a civil rights attorney crying after arrest, muddying the public’s ability to trust anything [13:38–15:30].
"Nothing is true and everything is possible." — Casey, quoting a phrase about Russia, on the effect of state-generated disinfo [14:24]
- Vice President J.D. Vance’s retweet of an AI-altered image signals that “memes will continue” has been normalized as official White House strategy [15:12].
- The erosion of trust: If evidence can be fabricated, any evidence is suspect (the “Liar’s Dividend”).
Should Platforms Intervene?
- Past: Platforms (Twitter, Facebook) used to label or suppress state disinfo; now (with X), only unreliable “Community Notes” exist [15:39].
- Kevin: This is exactly why “AI companies should support regulation”—so platforms aren’t paralyzed by administration retaliation fears [16:37].
“I think this is actually the case for regulation... so you don't have to operate in this weird limbo.” — Kevin [17:30]
Phone-vs-Phone: Protesters, Surveillance, and the New Battlefield
- Both ICE and protesters wield phones as tools, and ICE agents increasingly treat them as threats; the government argues that filming or photographing agents counts as doxing [18:19].
- Governors urge everyone to film and the administration threatens prosecution, but filming law enforcement is not illegal (“we want public accountability”) [20:36].
- “What are they more scared of—the guns... or is it the phones in their hands?” — Casey [21:08]
Does Video Still Prove Anything?
- AI-generated and manipulated media has eroded our belief that "the camera never lies."
- Yet, the Preddy killing was filmed from multiple angles, carefully verified, and did sway public opinion—including pro-gun groups like the NRA [23:13].
“In a very grim moment... the silver lining for me was that actually Americans do still trust video...” — Casey [24:20]
- The hosts end on a note of guarded optimism: Truth can still survive, for now, if enough evidence and scrutiny exist—even as AI tools threaten to undermine it in the future.
2. Casey’s Risky Moltbot Experiment: The AI Genie on Your Desktop
[26:30–42:11]
What is Moltbot?
- Formerly “Clawdbot” (a pun on both “Claude” and “claw”), Moltbot is an open-source personal AI agent developed by Peter Steinberger. It runs locally, connects to services like email and calendar, and can be piped into messaging apps [28:47].
- Can automate tasks, build daily briefings, and—in theory—act as your computer’s “wish-granting genie.”
Why Is It a Big Deal?
- Huge buzz in SF’s tech scene, with people buying extra Mac Minis to run Moltbot off their main machines.
- The pitch: prompt it like a genie (“Genie, I wish for you to make me a website”), a framing that pushes people to think beyond individual apps toward general-purpose digital assistants [39:02].
Major Risks and Honest Experiences
- Serious security risks:
- Hackers could take over your computer via buggy messaging integration (ex: Telegram).
- “Prompt injection attacks” could lead agents to follow malicious hidden instructions on visited websites.
“Don’t do this, don’t do this, don’t do...” — Kevin [30:28]
“Put it on your ex-boyfriend’s machine.” — Casey [37:56]
- Casey disables the most dangerous features and hooks it up only to “non-life-or-death” services, but still admits it’s not secure or reliable yet.
- Memory: More persistent than most agents. Remembers across sessions, can pick up old tasks (ex: “tweak a tool built yesterday”) [33:17].
- Practical outcome: Casey wires it to a daily briefing covering weather, calendar, and crucial emails, plus wrestling, RuPaul's Drag Race episodes, and new movie releases; it “works about 70% of the time” [35:16] (a rough sketch of this kind of briefing script appears after this list).
- But the system often breaks (“the thing that I built did not work”), and its reliability is far from that of mainstream tools [36:24].
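To make the “daily briefing” idea concrete, here is a minimal, hypothetical sketch of the kind of script such an agent might assemble and run on a schedule. The function names are placeholders, not Moltbot APIs; they stand in for whatever weather, calendar, and email integrations a local agent happens to be wired to.

```python
# A minimal, hypothetical sketch of the kind of "daily briefing" Casey describes.
# These functions are placeholders, not Moltbot APIs: they stand in for whatever
# weather, calendar, and email integrations a local agent is connected to.

from datetime import date


def fetch_weather() -> str:
    # Placeholder: a real setup would call whatever weather source the agent uses.
    return "58F, partly cloudy in San Francisco"


def fetch_calendar_events() -> list[str]:
    # Placeholder: would read today's events from a connected calendar.
    return ["10:00 editorial meeting", "14:00 interview prep"]


def fetch_flagged_email() -> list[str]:
    # Placeholder: would pull only the messages the agent has marked as crucial.
    return ["Reply to producer about Friday taping"]


def build_briefing() -> str:
    # Assemble the sections into one plain-text digest.
    lines = [f"Morning briefing for {date.today():%A, %B %d}"]
    lines.append(f"Weather: {fetch_weather()}")
    lines.append("Calendar:")
    lines.extend(f"  - {event}" for event in fetch_calendar_events())
    lines.append("Crucial email:")
    lines.extend(f"  - {message}" for message in fetch_flagged_email())
    return "\n".join(lines)


if __name__ == "__main__":
    # An agent like the one described would run this on a schedule and pipe the
    # result into a messaging app rather than printing it to the terminal.
    print(build_briefing())
```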
The Broader AI Agent Adoption Divide
- Kevin reflects on a growing “yawning inside-outside gap”:
- “People in San Francisco are putting multi-agent cloud swarms in charge of their lives... people elsewhere are still trying to get approval to use Copilot in Teams, if they're using AI at all.” [43:17]
- The tech elite is far ahead—risking more, moving faster, possibly gaining “real alpha.”
- Institutions and less technical users are way behind, sometimes for prudence, sometimes due to skepticism or regulation bottlenecks.
“If these tools wind up as productive as you’re suggesting... they’ll sell themselves. If not, expect resistance.” — Casey [47:22]
Notable Quotes
- “Whatever productivity enhancing, focus-enhancing drugs Dario Amodei is on—I want some.” — Kevin [59:21]
- “I want people to understand what the tools are capable of doing... I just worry that they're going to get left behind.” — Kevin [48:20]
- Andrej Karpathy called coding agents “the biggest change to my basic coding workflow in two decades... happened over the course of a few weeks.” [49:11]
3. HatGPT: This Week’s Wildest Tech Headlines
[52:03–71:42]
In their recurring HatGPT segment, the hosts riff on recent quirky, controversial, or absurd stories. Highlights and commentary:
Amazon’s Project Dawn Layoff Calendar Snafu [52:48]
- Amazon mistakenly sent employees calendar invites for “Project Dawn,” a euphemistic, sci-fi-sounding title for its round of 16,000 layoffs.
“I have played, like, a sci-fi video game where Project Dawn was the effort to wipe out all humanity.” — Casey [53:26]
Caroline Ellison, FTX Executive, Released from Prison [54:27]
- Prediction: “Never have I been more confident... someone was about to start a Substack.” — Casey [54:42]
- Invite to appear on Hard Fork: “In the gay community we love a problematic queen and we believe everyone deserves a second chance.” — Casey [54:59]
TikTok Outage Triggers Censorship Panic [56:12]
- ByteDance’s American hand-off triggered outages and sudden zero-views for protest content; Gavin Newsom and celebrities allege censorship.
- Likely explanation: IT chaos, not conspiracy. “If you’ve ever moved apartments, you know some dishware gets shattered in the move.” — Casey [57:45]
Anthropic CEO Dario Amodei’s Grim AI Warning [57:51]
- His "Adolescence of Technology" essay warns AI “will test us as a species.”
- “We’re all trying to find the guy who built the Torment Nexus... I think his name was Dario something.” — Casey [58:36]
App for Quitting Porn Leaks Users’ Masturbation Data [60:29]
- Hosts gleefully pile on the puns; Casey calls for an investigation so “we can finger the culprit.” [61:38]
Alaskan Student Eats AI-Generated Art in Protest [61:46]
- “I love this so much. Move over Banksy.” — Casey [62:26]
Steak 'N Shake Invests $5M in Bitcoin [63:07]
- Acquisitive CEO, “burger-to-bitcoin transformation,” and more Midwestern nostalgia.
“I hope one day we get a big sorry from Biglari.” — Casey [65:04]
Rumor: Apple Making Wearable AI Pin [65:10]
- A pin-sized device with cameras, a speaker, a mic, and wireless charging; the rumor is seen as “vindication” for the defunct Humane AI Pin.
SpaceX IPO Timed to Jupiter-Venus Alignment and Musk’s Birthday [67:59]
“Your IPO is not a Tumblr post, knock it off.” — Casey [69:10]
LinkedIn to Offer Badges for "Vibe Coding" [69:13]
- AI tools assess your “vibe coding” expertise for LinkedIn profiles; hosts roast this gamification of productivity theatre.
“Put on ... a clown nose... and look in the mirror—there’s your vibe coding badge.” — Casey [71:21]
Notable Quotes & Memorable Moments
- “Nothing is true and everything is possible.” — Casey, referencing how state propaganda erodes trust [14:24]
- “Winning on social media has become almost the entire point.” — Casey [12:24]
- “People are drawn to conflict, and so are social networks.” — Casey [08:59]
- “If enough people see it, if the video’s from enough different angles... maybe the truth survives here.” — Casey [24:20]
- “This is what it’s like to cover technology—writing about stuff that is directionally correct, way too early, and barely works.” — Casey [41:18]
Key Takeaways & Timestamps
Tech and ICE:
- 03:01–06:36: CEOs’ statements, tepid responses
- 07:46–15:30: Political risks, weaponized social media, state propaganda
- 13:38–15:30: AI-altered images, “liar’s dividend,” trust erosion
- 15:39–18:19: Platform policies and the case for regulation
- 18:19–24:51: “Phone vs. phone,” protest documentation, what’s at stake for truth
Moltbot Experiment:
- 26:30–37:44: How Moltbot works, security risks, Casey’s custom AI briefing
- 39:02–42:11: The future of “AI genies,” the early adopter/mainstream gap
HatGPT Highlights:
- 52:03–71:42: Amazon “Project Dawn”, Caroline Ellison, TikTok’s outage drama, Anthropic’s warning, LinkedIn’s vibe coding badges, and more
Tone/Style Notes
- Language/Tone: The episode maintains a journalistic, skeptical-yet-optimistic tone, mixing dry wit, honest alarm about AI and government surveillance, and irreverent cultural one-liners.
- Chemistry: Kevin and Casey’s banter and mutual ribbing provide levity, even while covering dark or complex themes.
For Listeners Who Missed the Episode
This episode will equip you with:
- A nuanced view on how deeply tech infrastructure is implicated in state power, protest, and the shaping of public narrative.
- Firsthand insights into the bleeding edge (and sharp pitfalls) of AI agent adoption—with both sociological and security implications.
- Glimpses into how the tech elite’s early adoption could leave the mainstream behind—or just risk more spectacularly.
- A laugh (or groan) from the week’s oddest tech news, filtered through a pair of very online, very plugged-in journalists.
You’ll walk away able to follow the debates over AI, protest, and trust, as well as judge whether “Moltbot” is worth risking your laptop for—or if you should just stick to listening to Hard Fork.
