Podcast Summary: On with Kara Swisher
Episode: What Everyone Gets Wrong About the Future of AI
Guest: Nick Foster, Futures Designer, author of Could, Should, Might, Don’t
Date: November 10, 2025
Recorded: Live at Smartsheet’s Engage Conference, Seattle
Overview
In this lively, insightful episode, journalist Kara Swisher interviews acclaimed futures designer Nick Foster, formerly of Google X and author of Could, Should, Might, Don’t. The discussion demystifies how businesses (and everyday people) think about artificial intelligence and challenges both hyperbolic optimism and dystopian fearmongering around AI. Foster introduces his practical framework for anticipating the future, urging a more mundane, integrated, and balanced approach to transformative technology.
Main Discussion Points & Insights
1. Why We're Bad at Anticipating the Future
[03:10–04:19]
- Foster argues that people and organizations struggle to think realistically about the future, tending toward extreme, unbalanced perspectives.
- The increasing speed of societal and technological changes exacerbates this inability to form nuanced, practical visions.
- “Our ability to [think about the future] is low and underpowered. ... I’m interested in trying to close that gap a little bit.” (Nick Foster, 03:10)
2. Inside the World of Futures Design
[04:19–06:54]
- Foster describes his self-made job title, “futures designer,” and his focus on prototyping emerging and long-term technologies.
- He reflects: early in his career, he focused on splashy, sci-fi ideas (“could futurism”) but has grown skeptical of performative, empty innovation cycles seen at tech giants.
3. The Hype Cycles of Tech (and AI’s Turn)
[07:08–09:40]
- The conversation pokes fun at the performative future-focus of Silicon Valley—Google’s bicycles, sleeping pods, and the fleeting obsession with the Metaverse.
- Kara quips about the repetitive, almost cyclical boom-bust hype: “Remember the Metaverse? No one wanted to live there, no matter what Mark Zuckerberg did.” (Kara Swisher, 09:30)
4. The “Could, Should, Might, Don’t” Framework
[12:01–13:09]
- Foster lays out four common mindsets people default to when discussing the future:
- Could: Excited, endless possibilities, often hyped or sci-fi influenced.
- Should: Data-driven certainty, predictive, action-focused, often corporate.
- Might: Scenario-planning, probability, pragmatic, but can lead to analysis paralysis.
- Don’t: Dystopian, focused on risks and consequences, but can be stifling or paralyzing.
How They Interact:
- Most individuals and organizations get locked in one mindset, leading to “biased versions of the future” (10:09).
- A balanced, integrated approach—engaging all four mindsets—is rare but crucial.
5. Applying the Framework to AI Debates
a. Could Futurism
[13:09–18:12]
- Typical for founders and tech evangelists; “full of flashy and exciting ideas” (13:09).
- Can inspire and motivate but often veers into empty marketing or sci-fi mimicry:
- “The challenge that I’ve got is it does quite quickly tip into fanciful, classical futurist tropes.” (Nick Foster, 14:39)
- “People don’t realize it also affects the way they name things... The meeting rooms at Google X were all named after sci-fi robots.” (15:50)
- Foster warns this kind of hype can be exclusionary and insular.
b. Should Futurism
[18:12–22:27]
- Common in the C-suite; uses numbers, data, and models to justify certainty.
- Foster coins “numeric fiction” for overconfident projections: “When that solid line turns to a dotted line, it ceases to be data and it becomes a story.” (Nick Foster, 19:49)
- Kara: “Frequently wrong, but never in doubt... or else just lying.”
- Overreliance on data can “ossify” products and stifle innovation—as in the stories of Nokia, Blockbuster, or Kodak.
c. Might Futurism
[23:56–26:09]
- Scenario planning, possibility mapping (“think tanks, lobbyists, government agencies”).
- Tends toward complexity and “analysis paralysis”:
- “It can just get very complex, very, very quickly and not lead you to a kind of decision—lots and lots of options.” (Nick Foster, 25:46)
- Kara: “It could also lock you into constant analysis. Right? Analysis paralysis.” (25:46)
d. Don’t Futurism
[26:12–29:35]
- Driven by risks and avoidance, “doomers, critics, activists.”
- If unchecked, can lead to despair or inaction—“ambient adolescent apocalypticism.”
- Foster: “I think the challenge with don’t futurism is if you spend too long in that space... it can become crippling.” (27:56)
- Kara criticizes tech for neglecting negative scenarios: “They don’t do enough of that.” (28:49)
6. Expert Question: Facing the Truly Unimaginable Future
[29:48–33:29]
- Ethan Mollick asks: “How do we start thinking about the potential for an unimaginable future when we have trouble even articulating what that is?”
- Foster notes the challenge: “Well-formed ideas about the future are difficult, because we have insufficient now to stand on.” (30:23)
- He urges ongoing, communal reflection—even without clear answers: “Just doing the work is important. Sitting people down and finding space in the daily life to have that conversation... What are we building? Where is it going? What don’t we know?” (31:20)
Memorable Quotes & Moments
- On AI Hype: “It seems to me at this minute, it’s largely fueled by tech billionaires who are trying desperately to control it... Like, you actually don’t [want this].” (Kara Swisher, 11:19)
- On Prediction Culture: “Once that solid line turns to a dotted line, it ceases to be data and it becomes a story. I call it numeric fiction.” (Nick Foster, 19:49)
- On Science Fiction and Tech: “The desire to will those MacGuffins, those devices, those experiences into the world, often takes ‘could’ futurism into the realm of fantasy and sort of boyhood dreamlike places.” (Nick Foster, 15:44)
- On the Balance of Futures Thinking: “When you invent the ship, you also invent the shipwreck.” (Paul Virilio, quoted by Nick Foster, 26:12)
- On Tech’s Disconnection from Everyday Experience: “We have a habit of talking about the future again, as some other place occupied by other people, probably more heroic people than us.” (Nick Foster, 37:08)
- On Tech and Friction: “Friction is what makes everything interesting. ... Sex is friction. Thank you.” (Kara Swisher, 41:08)
Timestamps for Key Segments
- Framework Intro: Could, Should, Might, Don’t [12:01–13:09]
- AI & ‘Could’ Futurism [13:09–18:12]
- Corporate ‘Should’ Thinking [18:12–22:27]
- Analysis Paralysis of ‘Might’ [23:56–26:09]
- The Dangers of ‘Don’t’ (Dystopianism) [26:12–29:35]
- Expert Question – Facing the Unimaginable [29:48–33:29]
- Focus on Real People – The ‘Future Mundane’ [36:03–37:46]
- Business FOMO & The Case for Practical Experimentation [38:03–38:39]
- Human Messiness and AI’s Limits [38:57–39:39]
- Friction, Distortion, and Joy in Messiness [41:08–42:30]
- Unintended and Counterintuitive AI Consequences [42:30–44:06]
- The Dangers of Centralization & AI Bubble [45:27–46:56]
- What Healthy AI Integration Looks Like [48:09–50:39]
- AI Will Become Mundane (Electricity/ABS analogy) [51:39–52:26]
- Most Interesting AI Use Foster Has Seen [52:38–54:05] (AI-generated spoons!)
Practical Takeaways for Businesses
- Don’t get swept away by AI hype—reflect honestly on your actual needs and those of your users, the ordinary “background characters” (“NPCs”) at the heart of Foster’s “Future Mundane.”
- Resist single-narrative mindsets—embrace a mix of excitement, skepticism, scenario planning, and practical avoidance of harms.
- Encourage honest, company-wide dialogue about long-term goals, unintended consequences, and the mundane realities of tech adoption.
- Understand that true innovations become invisible background—aim for AI integration that feels as natural and unremarkable as turning on a light.
Final Thoughts
Foster and Swisher agree: most futures aren’t fantastic or apocalyptic—they’re ordinary and messy. Smart innovation means aiming for the everyday, acknowledging unintended consequences, and valuing the “background” roles most people play. As AI recedes from the headlines and into the background of our work and lives, its true value (and the risks) will quietly emerge.
“When we stop talking about it—when it just becomes embedded in the software, when we’re truly honest about what people really do with their software, we see it [AI] disappearing into the background. That’s when it’s working.”
—Nick Foster, 51:39
