Intelligent Machines (Audio) – Episode 863: Fire and Ash - Hot Takes on Tech Trials
Podcast: Intelligent Machines
Host: TWiT (Leo Laporte), with Paris Martineau and Jeff Jarvis
Guest: Marshall Kirkpatrick
Date: March 26, 2026
Overview
This episode of "Intelligent Machines" explores two seismic themes in tech: (1) the latest real-world security risks and mitigations in the era of omnipresent AI, and (2) the landmark California court decision making social media companies liable for addictive product design. Alongside these core issues, the show features AI innovator Marshall Kirkpatrick, deep dives into practical AI tooling, industry news (OpenAI, Anthropic, Apple), and a live test of the guest’s new AI-powered research extension.
Key Discussion Points & Insights
1. Security in the AI Supply Chain: From Tokens to Malware
[12:00–22:43]
- API Key and Credential Security: At the RSA security conference, Leo interviewed companies seeking to fix risky practices around API keys and tokens exposed through agent workflows.
- Keycard Labs ([12:30]) and Bitwarden Agent Access SDK ([15:44]): Approaches for safely storing and securing tokens, making token use ephemeral, and ensuring humans stay “in the loop.”
- Notable Quote:
“Files or via chat conversations with AI agents... you're really ensuring, one, that they're end-to-end encrypted, two, that they're only accessed by humans or only access with human approval. And then the plain text credentials [are] never exposed to the actual agent.”
— Casey Babcock, Bitwarden, [16:49]
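The pattern Babcock describes — plaintext never reaching the agent, with a single-use human approval gate — might be sketched roughly as follows. All class and method names here are hypothetical illustrations, not Bitwarden's actual Agent Access SDK:

```python
# Hypothetical sketch of the credential-broker pattern: the agent holds
# only an opaque reference; the plaintext secret is resolved by the
# broker at call time, after explicit human approval. Illustrative only,
# not Bitwarden's real API.
import secrets

class CredentialBroker:
    def __init__(self):
        self._vault = {}        # handle -> plaintext (encrypted at rest in a real system)
        self._approved = set()  # handles a human has cleared for one-time use

    def store(self, plaintext: str) -> str:
        handle = secrets.token_hex(8)  # opaque reference the agent may see
        self._vault[handle] = plaintext
        return handle

    def approve(self, handle: str) -> None:
        """Called from the human-in-the-loop UI, never by the agent."""
        self._approved.add(handle)

    def use(self, handle: str, call):
        """Resolve the secret and invoke `call`; the agent never sees plaintext."""
        if handle not in self._approved:
            raise PermissionError("human approval required")
        self._approved.discard(handle)  # ephemeral: approval is single-use
        return call(self._vault[handle])

broker = CredentialBroker()
ref = broker.store("sk-live-...")  # agent only ever holds `ref`
broker.approve(ref)                # human clicks "allow" once
result = broker.use(ref, lambda key: f"called API with {len(key)}-char key")
```

The key design choice is that `use` takes a callback, so the plaintext exists only inside the broker's call frame and is never returned to agent-controlled code.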
- PyPI Supply Chain Attack ([18:46]):
- Recent supply-chain hack of the LiteLLM Python package, which is imported by agentic AI tools (like OpenClaw); the malware stole API keys, cloud credentials, and more.
- Urgency for developers to verify their environments; importance of better supply-chain vigilance.
- Critical Detail: Only discovered due to a bug; could have lasted weeks, affecting millions.
- Quote:
“It exfiltrates your SSH keys…AWS, GCP, Azure credentials, Kube configs, Git credentials, all ENV variables...this could be potentially a disaster.”
— Leo, [20:00]
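For developers verifying their environments, as the hosts urge, a first step is checking installed package versions against a known-bad list. A minimal sketch, where the package name and version set are placeholders rather than the actual compromised releases:

```python
# Minimal environment audit: flag installed packages whose version
# matches a known-compromised release. The advisory data below is a
# placeholder -- consult the real security advisory for affected versions.
from importlib import metadata

ADVISORIES = {
    # package name -> set of known-bad versions (illustrative values)
    "some-compromised-package": {"1.2.3", "1.2.4"},
}

def audit(advisories: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return (name, version) pairs for installed packages on the bad list."""
    hits = []
    for name, bad_versions in advisories.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # not installed -> not affected
        if installed in bad_versions:
            hits.append((name, installed))
    return hits

print(audit(ADVISORIES))  # empty list when no flagged package is installed
```

This only catches exact known-bad versions; it does not replace lockfile pinning or hash verification, which the episode's broader supply-chain point calls for.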
2. "What's Up With That" – Marshall Kirkpatrick’s AI Research Extension
[28:02–43:12]
- Tool Introduction & Demo:
- Browser extension for researchers/journalists/investigators.
- Reveals what's truly new/novel in an article, recaps personal research context, tracks topics and builds causal chains across multiple reads.
- Integrates scientific literature search, conversation memory, and custom user "playbooks."
- Privacy-conscious: Only analyzes on click; key-value memory stored locally or in secure cloud.
- Quotable Moment:
“All of us who read, write, and think for a living need a toolbox to help level up...I wanted one for myself and I wanted to offer that to others as well.”
— Marshall Kirkpatrick, [40:00]
- Live Test: Used on a news article about the Meta/YouTube trial (the bellwether addictiveness lawsuit).
- Technical Stack:
- Uses multiple AI models (Anthropic’s Haiku for summarizing, Sonnet for analysis, with fallback to GPT-5/Perplexity/Mistral, etc.) [6:20–7:37]
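The multi-model setup described above — a cheap model first, falling back down a chain on failure — can be sketched like this. `call_model` is a hypothetical stand-in for real provider SDK calls, and the chain ordering is only a rough reading of the episode:

```python
# Sketch of a multi-model fallback chain: try the cheapest/fastest model
# first, and fall back down the list when a call fails. `call_model` is
# a hypothetical stand-in for real provider clients.

def with_fallback(prompt, models, call_model):
    """Return (model_name, response) from the first model that succeeds."""
    errors = {}
    for name in models:
        try:
            return name, call_model(name, prompt)
        except Exception as exc:
            errors[name] = exc  # remember why each model failed
    raise RuntimeError(f"all models failed: {list(errors)}")

# Rough ordering mentioned in the episode (names illustrative).
CHAIN = ["claude-haiku", "claude-sonnet", "gpt-5", "mistral"]
```

A summarization request would then be `with_fallback(article_text, CHAIN, call_model)` with whatever real client the extension wraps.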
3. Landmark California Case: Social Media Addictiveness on Trial
[47:17–73:50]
- Case Overview:
- Plaintiff (who began using YouTube at age 6 and Instagram at 9) alleged the products caused years of harm (self-harm, depression).
- Jury found Meta/YouTube liable for negligent design; $4M and $1.8M damages respectively—not huge amounts, but a legal sea change.
- Section 230 Workaround: Lawsuit targeted product design, not user-generated content; bypassed ‘publisher’ immunity. Compared to tobacco/asbestos legal strategies.
- "Bellwether" Case: Sets precedent for batch settlements in the ongoing multidistrict litigation (MDL) with hundreds/thousands of similar suits.
- Debate on Blame & Broader Impacts:
- Was this just "jury's emotional response"? (Jeff)
- Could these verdicts usher in a new era of tort liability for AI or social platforms? (Paris)
- Parental responsibility vs. corporate algorithmic intent.
- Internal docs showed executives intentionally pursued designs targeting maximum user engagement among youth.
- Notable Quotes:
- “The incentive to build addictive, short-term optimized stuff is baked into the whole system.”
— Marshall, [58:01]
- “The thing that ended up sinking…Meta and YouTube is… internal documents that showed Meta was trying to hook young users and get them as young as possible.”
— Paris, [61:21]
4. OpenAI, Anthropic, and the “AI Model Wars”
[74:00–86:16]
- OpenAI Shuts Down Sora: The video model failed to pan out despite the Disney partnership; OpenAI is refocusing on enterprise API products, where Anthropic is booming.
- Enterprise Battles:
- Anthropic overtakes OpenAI in new enterprise adoption ([76:02]), attributed to better privacy and reliability.
- OpenAI's financial struggles: it is offering “guaranteed” 17.5% returns to PE firms, a move interpreted as desperation.
- “Claude has all the mindshare right now.” — Leo [77:35]
- AI Model Fungibility:
- “How fungible are these models? The US government swapped from Anthropic to OpenAI very quickly.”
— Ongoing question [84:21]
5. Using AI to Analyze the Show… and Ourselves
[87:04–104:41]
- Marshall’s AI “Meta-analysis”:
- Claude Code aggregates and analyzes 18 months’ worth of "Intelligent Machines" episodes.
- Finds:
- Leo as a “self-described accelerationist” with frequent hyperbolic claims, but with consistent self-awareness and skepticism
- Paris as “the empirical check”
- Jeff as “the philosophical anchor” (“most stable position across 18 months”)
- Used as an example/demo for the power of specialized AI research loops.
- Memorable Moments:
- “What did Leo say at 02-07-2025? ‘I am an accelerationist’ — the first explicit declaration.”
— Marshall/Claude, [95:03]
- “It’s show business, man!”
— Marshall, on taking both sides and the show’s rhetorical style, [103:58]
6. Apple, Google, and the Coming AI Model on Your Phone
[108:14–114:42]
- Technical Advances:
- Google’s TurboQuant: model compression with “zero” perceived loss in accuracy [108:14].
- Apple soon to announce on-device Gemini-distilled AI for Siri at WWDC. Distilling huge cloud models down to efficient edge models on iPhones [109:49].
- Question of practicality: “Will it be better than Siri’s last 10 years of disappointment?” (Paris, [111:30])
- Ongoing skepticism over Apple’s “liquid glass” UI choices and nostalgic design critiques.
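TurboQuant's actual algorithm wasn't detailed on the show, but the general idea behind this kind of compression — quantization — is to store low-precision integers plus a scale factor instead of 32-bit floats. A minimal, purely illustrative sketch:

```python
# Illustrative symmetric int8 weight quantization -- NOT TurboQuant's
# actual method, just the basic idea: replace float32 weights with
# int8 values in [-127, 127] plus one shared per-tensor scale.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Map floats to int8 range with a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats from the int8 representation."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.99, -0.07]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Round-trip error is bounded by scale/2 per weight.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

Distillation, mentioned for Apple's rumored Siri models, is a separate technique (training a small model to imitate a large one); quantization like the above is typically applied on top of it.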
7. AI’s Impact on Web & Search
[114:58–117:00]
- Decline in Web Traffic from Search:
- AI search has not compensated for lost Google referrals; smaller publishers hit hardest.
- Google now rewriting headlines for “personalized” results—sparks controversy.
8. News Roundup & Picks
[131:39–149:59]
- Notable Tech Obituaries:
- Tracy Kidder (“The Soul of a New Machine”) and Paul Brainerd (creator of PageMaker/desktop publishing) [132:12–136:36].
- Nerdy Tools:
- Regex Blaster (regex game) as Leo’s pick ([158:02]).
- Marshall’s favorite prompt: “Explain this in 3 hops” for AI clarity ([144:32]).
- Esoteric Ebb (Disco Elysium-style single-player RPG game) as Paris’s pick ([151:21]).
- Side Notes:
- Helium shortages and chip fabs (Elon Musk’s announced “Terrafab” is met with skepticism).
- Funny moments about “Secretly British” domain, grits recipes, pickled Amazon deliveries, and more.
Notable Quotes & Time Stamps
- “The incentive to build addictive, short-term optimized stuff is like baked into the whole system.” (Marshall, [58:01])
- “The dopamine response is the same as wearing glasses… or enjoying a good book.” (Jeff, [61:12])
- “A court case could say you can’t use an ad blocker…” (Leo, [117:00])
- “All of us who read, write, and think for a living need a toolbox to help level up what we’re doing.” (Marshall, [40:00])
- “On 60% of the shows, Laporte emerged as a self-described AI accelerationist… making hyperbolic claims in 45 of 75 episodes, though consistently leavened by genuine skepticism…” — Claude AI’s analysis ([92:53])
Segment Timestamps
- 00:00 – 09:08: Banter and Introductions
- 09:08 – 22:44: RSA Security Conference Recap, API Key Security Interviews
- 22:45 – 28:02: PyPI Malware Outbreak & Supply Chain Fears
- 28:02 – 43:12: Marshall Kirkpatrick Interview & "What's Up With That" Demo
- 47:17 – 73:50: Social Media/Addiction Lawsuit Deep Dive
- 74:00 – 86:16: OpenAI, Anthropic, and AI Model War Discussion
- 87:04 – 104:41: AI Transcript Analysis Meta-Demo
- 108:14 – 114:42: AI Model Compression, Apple/Google/WWDC Rumors
- 114:58 – 117:00: Web Traffic, Google AI Headlines
- 131:39 – 136:36: Obituaries: Kidder & Brainerd
- 144:32 – 159:01: Picks of the Week: AI Prompts, Games, Tools
Note: Ad sections, outros, and extended banter unrelated to the discussion points are omitted.
Episode Tone & Dynamic
The show maintains a nerdy, playful vibe, mixing deep technical rigor (especially around AI security, supply chains, and legal models) with genuine curiosity, pragmatic skepticism, and a healthy dose of self-deprecating humor. The hosts aren’t afraid to probe each other’s positions, illuminating the messy, unpredictable implications of AI’s societal impact, grounded in lived experience and expert dialogue.
Summary
This episode bridges the twin axes of AI’s power and peril: (1) practical, ever-shifting security risks in the age of rapidly composing agentic systems; (2) society’s legal reckoning with “addictive” AI-powered design, as the courts breach Section 230’s wall for the first time. Marshall Kirkpatrick’s inventive research tools show the positive side of narrow, human-augmenting AI, while the lively trio of hosts ground the conversation in real-world experience, industry happenings, and timely skepticism. Above all, the show demonstrates the importance of critical analysis, both of this new tech and ourselves, as we collectively navigate what might well be, for better and worse, the most consequential revolution of the 21st century.