Intelligent Machines 860: You Gotta Get Computer
Date: March 5, 2026
Host: Jason Heiner (filling in for Leo Laporte)
Co-hosts: Paris Martineau, Jeff Jarvis
Guest: Dan Patterson (Blackbird AI)
Episode Overview
This episode dives into a seismic week for the AI industry: the explosive Pentagon-Anthropic dispute over military AI use, the resulting surge in Claude app downloads, OpenAI’s controversial government deal and its cultural and business fallout, and a closer look at Perplexity’s new “Computer” agent platform. Along the way, the panel discusses disinformation detection, moral lines for AI companies, and the fast-changing competitive landscape of generative AI.
Key Discussion Points & Insights
1. Blackbird AI & the Fight Against Narrative-Based Disinformation
[02:16]
Dan Patterson explains Blackbird’s mission:
- Blackbird AI protects organizations, executives, and governments from narrative-based disinformation attacks using advanced AI.
- Goes beyond traditional “social listening” by tracing narratives across social platforms, chat apps, and the dark web, identifying both bad actors and amplification tools.
- Clients include major government agencies (e.g., NATO) and high-risk organizations.
“Perception is the attack surface.”
— Jason Heiner quoting Waseem Khaled, CEO, Blackbird AI [11:48]
[13:07]
Patterson describes their Constellation platform:
- Visualizes narrative attacks as “constellations”—clusters of conversation and risk signals.
- Can present organizations with clear risk levels (green/yellow/red), sometimes predicting physical danger with real-time intelligence.
[16:15]
On crisis management:
- Blackbird often advises clients not to engage with bot-led or bad-faith attacks, to avoid amplifying them further.
- Their guidance is focused on providing actionable intelligence so clients can make rapid, strategic decisions.
“Don’t feed the trolls, don’t get involved.”
— Dan Patterson [17:36]
2. AI & Disinformation: Tools, Ecosystem, and Bias
[27:06]
Paris asks about guarding against AI tool bias:
- Blackbird employs a dynamic system, not simply a whitelist, and draws signal intelligence from a vast range of sources, incorporating feedback from partners like NATO.
- Engineers focus on detecting patterns, behaviors, and manipulation trends so the system itself is not manipulated.
3. Anthropic vs. Pentagon: Setting Red Lines on AI Use
[39:44], [43:18] and onward
A deep-dive into the week’s blockbuster story:
- On Friday, the Trump administration—citing national security risk—ordered all federal agencies to halt use of Anthropic technology, particularly Claude.
- Dispute stemmed from Anthropic refusing to allow their AI for fully autonomous weapons or mass surveillance, even as DoD explored using Claude for critical operations (e.g., missile defense).
- Official Pentagon responses often came via social media; the official supply-chain risk designation had not been formally executed at the time of taping.
Both Government and AI Company Positions:
- Anthropic’s “red lines”: no use for mass surveillance or fully autonomous lethal weapons (missile defense is acceptable with a human in the loop).
- Pentagon: Rejected contract “carve-outs”—wants standard legal language for all use cases.
- Ben Thompson (Stratechery) and Sam Altman (OpenAI): Argue that private execs shouldn't set national AI doctrine.
- Panel rebuts: Companies have a right/responsibility to set ethical use terms, especially with unreliable or experimental technology.
“Anthropic comes along and says, yeah, we have a moral line… and then they’re being accused by people like Ben of trying to be dictators. No, they’re trying to be accountable and responsible in exigent times…”
— Jeff Jarvis [60:06]
Notable regulatory perspective:
- Lt. Gen. Jack Shanahan (former director of the DoD Joint AI Center) calls Anthropic’s restrictions “reasonable.”
- “No LLM anywhere in its current form should be considered for use in a fully lethal autonomous weapon system. It’s ludicrous to even suggest it.” — [62:05]
4. Public and Market Backlash: Claude Passes ChatGPT
- Following Anthropic’s stand, a grassroots surge pushes Claude past ChatGPT to become the #1 AI app worldwide.
- A massive exodus is visible in online forums; even non-technical users are “boycotting” ChatGPT in favor of Claude.
- Employee discontent brewing within OpenAI and Google over their less restrictive military stances.
- Tension between user sentiment, employee values, and corporate strategy.
Notable Quotes:
“Did Sam Altman do permanent damage? Did he shoot his company in the foot?”
— Jeff Jarvis [47:35]
"It is hard to compare the two ... But one consequence ...[for OpenAI] is ... when you’re overexposed, you are increasingly likely to end up getting ... your brand reputation be tarnished.”
— Paris Martineau [47:44]
“Anthropic ... has a literal list of a thousand employees that they could possibly spoof from these.”
— Paris Martineau [55:25]
[75:00] – The Deep View Poll
“Should Anthropic have acquiesced to the Pentagon’s request to remove safety restrictions?”
- 79% said no.
- Audience is largely AI professionals; panel notes how rare it is for a poll to be this lopsided.
5. Global AI, Geopolitics & Security
- Amazon data centers targeted in UAE and Bahrain, possibly as symbols of American digital power in global conflict zones.
- The Gulf region is presented as a “third center” of global AI (with US/China), but its data security frameworks weren’t designed for kinetic military risks.
“American tech now becomes a pretty clear target.”
— Jeff Jarvis [81:07]
- Companies are now as much symbols (and targets) as embassies and must balance US, European, and global expectations.
6. The Rise of Agentic AI: Perplexity Computer
- Perplexity releases “Computer”—an AI agent platform that, unlike its rivals, lets users build, deploy, and share AI-powered web apps using multiple models (Claude, GPT-4.6, Gemini, Grok).
- Removes technical barriers (no API key setup required; rapid web deployment), broadening access beyond the terminal-friendly power users of OpenClaw or Claude Code.
“Computer, mom, you gotta get on computer.”
— Paris Martineau [105:01]
- Also includes “one-shot” building of advanced apps—such as finance analysis tools rivaling a Bloomberg terminal.
7. Tools & Tips – “Picks of the Week”
- Paris: breakingthegame.net—top Scrabble strategy and theory (“helps you beat your friends, even Leo!”).
- Jeff: walkman.land—visual archive of every Walkman model; discussion of the nostalgia and significance of portable music tech.
- Also: podcasts have dethroned AM/FM radio in spoken-word listening (Edison Research).
- Jason: WhisperFlow—AI-powered speech-to-text for Mac and mobile, enabling rapid, accurate dictation for note-taking or writing (should be a system feature!).
“You can get the stats on how fast you're going ... I can do it where I could like, say it like this and I could just whisper. And it actually works.”
— Jason Heiner [123:57]
Notable Quotes & Memorable Moments
“We are protecting fairly important actors and organizations from this kind of innovative new form of attack.”
— Dan Patterson [08:26]
“Sam Altman proves himself to be a two-faced traitorous ass-kisser to the government.”
— Jeff Jarvis [45:25]
“Anthropic’s red lines are reasonable... no LLM anywhere in its current form should be considered for use in a fully lethal autonomous weapon system. It’s ludicrous to even suggest it.”
— Lt. Gen. Jack Shanahan (quoted) [62:05]
“I'm both really upset and incredibly impressed. I'm upset at how good it is...”
— Jason’s friend testing Claude for the first time [66:37]
“Companies have always defaulted to being sort of morally neutral ... we make the tools, what people do with them, we can’t control.”
— Jason Heiner [90:24]
Timestamps for Major Segments
- Blackbird AI & Dan Patterson interview: [02:16] – [33:21]
- Anthropic/Pentagon showdown explained: [39:44] – [66:18]
- Public response & Claude’s meteoric rise: [66:18] – [71:14]
- War, geopolitics & Amazon targeted: [80:17] – [87:32]
- Perplexity “Computer” agent discussion: [98:01] – [111:11]
- Picks of the week (Scrabble, Walkman, WhisperFlow): [113:45] – [124:53]
Tone & Takeaways
The episode mixes urgent, incredulous, and exasperated tones (“What. A. Week!”) with humor and camaraderie, especially in offhand jabs at Leo Laporte’s absence and jokes about chicken-and-egg influences (science fiction → real tech). The hosts voice deep concern about AI’s trajectory, celebrate Anthropic’s stance as a rare historical outlier, and document how fast public opinion and market leadership can shift in the AI age (“I don’t think we’ve ever seen anything like this before…”).
Key Takeaway:
2026 may be remembered as the year when AI’s ethical boundaries, corporate power, and public sentiment collided—and when even small shifts in company policy could have global strategic and cultural consequences.
Follow-up/Related Links:
- Blackbird AI: https://www.blackbird.ai/
- Deep View Newsletter: https://thedeepview.com/
- Perplexity Computer: https://www.perplexity.ai
- Walkman visual archive: https://walkman.land
- WhisperFlow (Mac): https://whisperflow.app/