Podcast Summary
The Political Scene | The New Yorker
Episode: How Should We Approach A.I. in 2026?
Date: December 24, 2025
Host: Tyler Foggatt
Guests: Charles Duhigg, Cal Newport, Anna Wiener
Episode Overview
This episode, recorded live at the New Yorker Festival, features a nuanced conversation about the current and future state of artificial intelligence (AI) and its societal, cultural, and political impacts. The panel, composed of journalists and thinkers Charles Duhigg, Cal Newport, and Anna Wiener, explores competing narratives about AI, its uses, how society is adapting (or failing to adapt), and the challenges ahead, especially in relation to creativity, labor, and political propaganda.
Key Discussion Points & Insights
1. Competing Narratives About AI (01:14–06:22)
- Binary Thinking:
- Charles Duhigg notes how society tends to treat new technologies in binary terms—good or bad—when, in reality, their effects are complex and nuanced. He draws a parallel to historical reactions to the telephone.
Quote: “There's this instinct to see this as good or bad. And it's much more natural for us to use this thing in a way that it's neither good or bad, it's both.” – Charles Duhigg (03:07)
- From Prediction to Pragmatism:
- Cal Newport observes that the discourse is shifting from speculative predictions (“Act 1”) to a focus on actual, pragmatic uses and limits of AI (“Act 2”). Early rhetoric likened AI breakthroughs to the atomic bomb or suggested imminent superintelligence and job extinction, but now discussion is more grounded.
Quote: “We’re now in this more pragmatic grappling phase of the new technology curve.” – Cal Newport (04:32)
- Social Structures Over Tech:
- Anna Wiener is more interested in the societal structures that shape AI adoption and the distribution of its benefits and harms, rather than the technology itself.
2. How the Panel Uses AI (06:22–08:39)
- Personal Use Cases:
- Charles Duhigg uses generative AI regularly, from basic queries to experimenting with “vibe coding” mini-apps, such as a local thunderstorm app (see the illustrative sketch after this section’s quotes).
- Cal Newport typically uses AI as an “enhanced Google” or email filter, not yet integral to daily workflow.
Quote: “I use it probably every other day… sometimes five or six times a day.” – Charles Duhigg (06:30)
Quote: “I’m mainly using it as an enhanced Google occasionally… If not for that, it probably wouldn't have a big footprint yet in my life.” – Cal Newport (06:48)
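The episode mentions the thunderstorm app only in passing. Purely as an illustration of what such a vibe-coded mini-app might look like, here is a minimal Python sketch; it assumes the free Open-Meteo forecast API, and the coordinates and storm codes are placeholders chosen for this summary, not details from the episode.

```python
# Illustrative sketch only; the episode does not show Duhigg's actual app.
# Assumes the free Open-Meteo forecast API (no API key required).
import requests

# Placeholder coordinates; swap in your own location.
LAT, LON = 40.7128, -74.0060  # New York City

# WMO weather-interpretation codes 95, 96, and 99 indicate thunderstorms.
THUNDERSTORM_CODES = {95, 96, 99}

def thunderstorm_now(lat: float, lon: float) -> bool:
    """Return True if the current weather code reports a thunderstorm."""
    resp = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": lat, "longitude": lon, "current_weather": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    code = resp.json()["current_weather"]["weathercode"]
    return code in THUNDERSTORM_CODES

if __name__ == "__main__":
    if thunderstorm_now(LAT, LON):
        print("Thunderstorm conditions reported nearby.")
    else:
        print("No thunderstorm right now.")
```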
3. Barriers to Transformational AI (08:39–12:49)
- Infrastructure and Killer Apps:
- Cal Newport and Charles Duhigg highlight that historical technological transformations (like Google’s search engine or Instagram’s mobile-native app) depended less on the underlying technology than on the new applications and infrastructure built around it.
- There’s agreement that we haven’t yet seen the “killer app” that will make generative AI indispensable society-wide.
- Questioning Inevitability:
- Anna Wiener challenges the assumption that AI’s transformative power is inevitable, suggesting that major societal shifts depend on unpredictable new uses and how society adapts.
Quote: “Do you guys see this as an inevitable massive breakthrough that will transform the next hundred years...?” – Anna Wiener (09:58)
4. The Path to Artificial General Intelligence (AGI) (12:49–14:22)
- Distributed Intelligence Likely:
- Cal Newport doubts that a single system will achieve AGI. Instead, he predicts a proliferation of task-specific systems, each doing one thing slightly better than people, collectively constituting the “intelligence” we seek.
Quote: “It’s not going to be one oracle that does everything for us… You multiply that by a thousand different tasks and then you look around like, oh, I guess we kind of got to artificial general intelligence.” – Cal Newport (12:49)
- Comparing AI to the Internet:
- Charles Duhigg compares AI’s societal impact to the Internet’s, suggesting it could eventually transform almost everything—even if the pace and particulars are unpredictable.
5. Geographic and Cultural Narratives (15:09–16:04)
- East Coast vs West Coast:
- Cal Newport and Charles Duhigg note a cultural divide: the West Coast (tech industry) tends toward utopianism about AI, while the East Coast (academia) tends to be more skeptical or “curmudgeonly.”
6. The Investment “Bubble” and Infrastructure (17:55–19:16)
- Scale of Investment:
- The current level of spending on AI infrastructure far surpasses historical major tech investments (roughly $500 billion per year on AI versus about $400 billion across the Apollo program’s decade). Panelists question whether this is sustainable or a bubble, drawing parallels to the dot-com era’s overbuilt fiber-optic networks. A rough annualization of these figures follows below.
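For scale, a back-of-the-envelope annualization of the figures quoted above (this summary’s arithmetic, not a number stated on the show):

\[
\text{Apollo, annualized: } \frac{\$400\text{B}}{10~\text{yr}} = \$40\text{B/yr},
\qquad
\frac{\$500\text{B/yr (AI)}}{\$40\text{B/yr (Apollo)}} = 12.5\times
\]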
7. The Moving Targets of Hype: Superintelligence, Automation, and ROI (21:42–25:46)
- Shifting Hype:
- The panelists track how tech leaders have shifted from “superintelligence” fears to “job automation” promises, and now to more pragmatic discussion, driven by evidence that many business AI deployments yield zero return on investment.
Quote: “The story of superintelligence, I think, is nonsense.” – Cal Newport (23:37)
- Companies continually reframe their narratives to sustain interest, investment, and regulatory leverage.
8. AI, Connection, and Human Relationship (25:46–33:51)
- Emotional Roles of AI:
- Anna Wiener discusses how people develop emotional relationships with AI—sometimes for comfort, therapy, or pseudo-social interaction. She notes that, for many, these relationships feel real, even if users rationally understand the technology’s limits.
Quote: “I think in evaluating that, it's really hard… the feelings are very real. And… that, to me is a story that runs alongside why people are… drawn to these tools in the first place.” – Anna Wiener (28:06)
- Charles Duhigg shares a personal story: his son used ChatGPT for college admissions advice, not for factual information but for reassurance.
- Long-term Social Effects:
- The panel discusses how talking to bots may change social norms and conversational expectations, and whether habits formed through sustained engagement with LLMs could “creep” into real relationships.
9. The Chat Interface: Just a Demo? (33:51–36:16)
- Chat as AOL Analogy:
- Cal Newport posits that the current chat interface for AI may just be an early, temporary phase—like AOL was to the Internet. In ten years, AI may be ambient, incorporated into many specialized apps, not just as a chatbot.
10. The Future of AI in Creativity and the Arts (38:19–42:32)
- Stigma vs. Quality:
- Currently, AI-created art and writing are seen as inferior; panelists agree that stigma might dissolve quickly as quality improves, but Anna Wiener warns this is also a labor issue—AI bypasses the countless creative workers involved in cultural production.
Quote: “I think it'll take two seconds. I think that stigma will last not very long... the standards are not that high.” – Anna Wiener (40:26)
- Limits of AI in Creativity:
- Cal Newport cautions that we don’t know whether AI will ever match high human artistry, given computational and training limitations; writing and long-form coherence may plateau well short of parity.
11. AI and Political Propaganda (42:32–44:13)
- Pessimism about Misinformation:
- All panelists express deep pessimism about the immediate future: AI-generated propaganda, deepfakes, and targeted misinformation are set to become destabilizing threats, especially since media literacy and regulatory frameworks are lagging behind.
Quote: “This is one place where I’m very pessimistic. I'm very worried that... in the next couple of elections you're going to see a lot of [AI-generated misinformation].” – Charles Duhigg (42:49)
Quote: “Just given media literacy and the political environment, the regulatory environment, I think it's incredibly sinister.” – Anna Wiener (44:15)
Memorable Quotes (with Timestamps and Attribution)
- “There's this instinct to see this as good or bad. And it's much more natural for us to use this thing in a way that it's neither good or bad, it's both.” — Charles Duhigg (03:07)
- “We’re now in this more pragmatic grappling phase of the new technology curve.” — Cal Newport (04:32)
- “Do you guys see this as an inevitable massive breakthrough that will transform the next hundred years?” — Anna Wiener (09:58)
- “It’s not going to be one oracle that does everything for us… You multiply that by a thousand different tasks and then you look around like, oh, I guess we kind of got to artificial general intelligence.” — Cal Newport (12:49)
- “The story of superintelligence, I think, is nonsense.” — Cal Newport (23:37)
- “The feelings are very real. And so I think in evaluating that, it’s really hard… that, to me is a story that runs alongside why people are… drawn to these tools in the first place.” — Anna Wiener (28:06)
- “I think that stigma will last not very long. ...the standards are not that high.” — Anna Wiener (40:26)
- “This is one place where I’m very pessimistic. I'm very worried that... in the next couple of elections you're going to see a lot of [AI-generated misinformation].” — Charles Duhigg (42:49)
- “Just given media literacy and the political environment, the regulatory environment, I think it's incredibly sinister.” — Anna Wiener (44:15)
Important Timestamps
- 01:14: Introduction of AI’s “two narratives”—boon or doom
- 03:07: Binary thinking about technology (Charles Duhigg)
- 04:32: Shift to Act 2: pragmatic use (Cal Newport)
- 06:22–08:39: Panelists describe their current AI use
- 12:49: Cal Newport’s vision of distributed AGI
- 15:09: East vs West Coast tech optimism divide
- 17:55: The extreme scale of AI investment
- 23:37: Dismissing “superintelligence” fears (Cal Newport)
- 28:06: Anna Wiener on authentic emotional connections to AI
- 33:51: Cal Newport’s “AOL era” analogy for chatbots
- 40:26: Anna Wiener on the rapid dissolution of creative stigma
- 42:49: Charles Duhigg on AI misinformation and elections
- 44:15: Anna Wiener’s concluding pessimism about propaganda
Tone and Dynamics
The conversation is informed but measured, with bursts of levity and candor. The panelists blend skepticism with curiosity, interlacing anecdotes, expert critique, and sociological analysis while pushing each other to clarify and challenge prevailing assumptions about AI and its trajectory.
Conclusion
This episode urges listeners to look past hype cycles, recognize the persistent uncertainties and evolving risks (especially around labor and democracy), and critically interrogate not just the technology of AI, but the social structures, stories, and incentives shaping how it unfolds. As Anna Wiener puts it, “the feelings are very real”—and so are the challenges.