Podcast Summary: Autocracy in America – "The Computer Scientist"
Host: Garry Kasparov
Guest: Gary Marcus
Date: September 5, 2025
Podcast by: The Atlantic
Overview
This episode explores the current and potential impact of artificial intelligence (AI) on American society, democracy, and politics. Host Garry Kasparov, former world chess champion and democracy activist, is joined by cognitive scientist and AI critic Gary Marcus. Their discussion weaves together personal chess history, AI's technological evolution, the dangers posed both by the technology and by those who control it, and the question of whether democracy will resist or succumb to technological autocracy.
Key Discussion Points & Insights
1. Human vs. Machine: From Chessboards to Chatbots
- Kasparov’s Chess vs. Deep Blue: Kasparov recounts his battles against IBM’s Deep Blue, framing the human-machine dynamic.
- Quote: “In 1985, ... I beat all 32 [chess computers] ... But just 12 years later, in 1997, I was ... fighting for my chess life against just one machine, a $10 million IBM supercomputer ... Newsweek’s cover called it the Brain’s Last Stand.” ([01:08])
- AI: Hype and Reality: AI, now far beyond chess, inspires both utopian dreams and dystopian fears.
- Quote: “AI is still just a tool ... it is not a promise of dystopia or utopia. ... it is how we use it for good or bad.” ([01:55])
2. The Illusion of Machine Intelligence
- Brute Force vs. Understanding:
- Early chess computers relied on brute force computation; today’s large language models (LLMs) still depend on data aggregation, not understanding.
- “Large language models can perform complex tasks ... but are these machines intelligent? ... It is still brute force.” ([05:31])
- LLMs can recite rules but don’t understand them—shown in their inability to follow basic chess rules consistently.
- Quote: “If you ask a large language model, even a recent one, to play chess, it will often make illegal moves. That’s something a six year old child won’t do.” ([07:18])
- Gary Marcus: “...when it actually comes to playing the game, it doesn’t have an internal model of what’s going on.” ([08:14])
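To make the “internal model” point concrete, below is a minimal, illustrative sketch (not from the episode) of why a conventional chess program cannot make an illegal move: it keeps an explicit representation of the position and only selects moves the rules allow, whereas an LLM emitting moves as text has no such structure to check against. The sketch assumes the third-party python-chess library.

```python
# Minimal sketch (assumption: the third-party python-chess library is installed,
# e.g. via `pip install chess`). A conventional engine keeps an explicit model of
# the position and can only choose from the moves that model says are legal.
import random
import chess

board = chess.Board()  # explicit internal model of the starting position

# A move produced as free text (the way an LLM emits moves) must be validated
# against that model; here the model rejects an illegal pawn move.
candidate = chess.Move.from_uci("e2e5")   # a pawn on e2 can only reach e3 or e4
print(candidate in board.legal_moves)     # False: the explicit model catches it

# An engine that selects directly from the model's legal moves can never make
# an illegal move, however weak its play is.
move = random.choice(list(board.legal_moves))
board.push(move)
print(board.fen())  # position after one guaranteed-legal move
```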
3. Skepticism, Realism, and the Alignment Problem
- AI Realist, Not Skeptic:
- Both guests reject hysteria, focusing on honest assessment.
- Quote: "I love that you called me an AI realist rather than a skeptic." – Gary Marcus ([09:52])
- AI as Dual-use Technology: Useful or dangerous depending on who wields it.
- Kasparov: “Humans still have monopoly for evil ... every technology can be used for good or bad, depending on who uses it.” ([10:59])
- Accidents and misuse—rather than intentional machine malice—are the present concern.
- Marcus: “It will just do really bad things by accident because it’s so poorly connected to the world.” ([11:22])
4. On ‘Alignment’ and the Limits of Scale
- Alignment Problem: Machines still can’t reliably do what we want; feeding more data isn’t fixing this.
- Quote: “Not even close. ... We have nothing like a real solution to alignment.” ([14:37])
- Quantity (more data) does not transform into quality (true understanding or ‘superintelligence’): “We will get to superintelligence eventually, but not by just feeding the beast with more data.” ([15:45])
- Field Critique: The current AI field is rife with hype, lacking the intellectual honesty of its early days.
- “...now you just have people hyping stuff, praying...” ([15:45])
5. AI for Good: What’s Possible and What Isn’t
- Best AI Applications: Highly specialized systems like AlphaFold (for protein folding)—not general chatbots—offer real, concrete advances.
- Quote: “The best AI for helping people ... is not the chatbot. The best piece of AI right now ... is AlphaFold.” ([18:03])
- Responsible Progress: The US is currently failing to implement regulatory or legal frameworks for safe AI development.
- Marcus: “If we want AI to be net benefit to society, we have to figure out how to use it safely and justly. ... when we’re doing nothing, then... negative consequences.” ([18:46])
6. AI, Politics, and Propaganda
- AI and Disinformation:
- AI’s most alarming social effect: supercharging propaganda, fake news, and “information wars”—directly threatening democracy.
- Kasparov: “That’s where AI plays a massive role ... the sheer power behind them could at one point decide the results of any elections.” ([19:36])
- Path to Fact-Checking AI: Marcus is (cautiously) optimistic that future AI could fact-check at scale, but currently political will is absent.
- Quote: “We can build AI that could do fact checking automatically, faster than people ... But ... the far right has so politicized the notion of truth that it is hard to get people to even talk about it.” ([20:17])
- Historical lesson: The US once overcame a similar crisis with the rise of fact-checking in response to yellow journalism.
- Current Climate: Political leaders on all sides have little interest in defending truth, preferring expedient misinformation.
- “No political ... force... interested of defending the truth ... [because] it may ... interfere with their political agenda.” ([21:52])
7. Orwell’s Warnings and Techno-fascism
- Sci-Fi Warnings Realized: The dystopian fears of Orwell and other science-fiction writers—in which technology enables an elite to control minds and outcomes—are now manifest.
- Quote: “We are exactly where Orwell warned us about, but with technology that makes it worse. Large language models ... can persuade people ... without people even realizing...” ([24:00])
- Techno-oligarchy in America:
- Marcus argues the US already exhibits “techno-fascism,” with intent to surveil citizens, consolidate power, and even replace federal workers with AI.
- Quote: “That’s exactly what’s happening in the United States right now is techno-fascism.” ([25:21])
8. Resistance and the Power of the People
- Apathy as the Default:
- Most people prioritize convenience and have relinquished privacy and power to tech companies.
- Gary Marcus: “iPhones are the opiates of the people.” ([26:31])
- Potential Tools for Resistance:
- The same levers as always—strikes, boycotts, mass action—could rein in tech, but few are motivated.
- Marcus: “We could all say, look, we’re not going to use generative AI unless you solve some of these problems ... We could boycott it.” ([28:58])
- Structural Problem with Mass Action:
- Kasparov’s skepticism: It’s unlikely students and others will willingly give up convenience (e.g., ChatGPT for homework), even if the consequences harm them long-term.
- Marcus: “It’s very unlikely. ... The students ... have given birth to this monster because they drive the subscriptions up.” ([30:08])
9. Outlook: Knife’s Edge or Call to Action?
- Not Hopeful by Default: Marcus describes himself as agnostic: “It’s important to realize that we still have choice. It’s not all over yet. We still have some power ... but it is not the default. ... unless we really do stand up for our rights.” ([31:29])
- Quote: “We are America, and we still could. And we should. Our fate rests 100% on political will.” ([32:22])
- The episode ends with a call for civic activism over resignation.
Notable Quotes and Memorable Moments (with Timestamps):
- Brute force vs. intuition: “If you ask a large language model, even a recent one, to play chess, it will often make illegal moves. That’s something a six year old child won’t do.” – Gary Marcus ([07:18])
- Meaningless Data Mountain: “We will get to superintelligence eventually, but not by just feeding the beast with more data.” – Gary Marcus ([15:45])
- Caution over Hype: “Now you just have people hyping stuff, praying. There’s actually a great phrase I heard, pray and prompt ... the whole field is built on that these days.” – Gary Marcus ([15:45])
- On Fact-Checking AI: “I genuinely believe that in principle we can build AI that could do fact checking automatically, faster than people. ... Not current AI, but future AI could actually do that at scale. ... Part of it is political will and right now we lack it.” – Gary Marcus ([20:17])
- Modern Techno-Fascism: “That’s exactly what’s happening in the United States right now is techno-fascism ... The intent appears to be to replace most federal workers with AI ... surveil people, ... accessible to a small oligarchy.” – Gary Marcus ([25:21])
- On Civil Resistance: “We could all say, look, we’re not going to use generative AI unless you solve some of these problems ... right now, the people making generative AI are sticking the public with all of the costs ...” – Gary Marcus ([28:58])
Timestamps for Key Segments
- Chess and AI Background ([01:08]–[04:05])
- Brute Force and AI Limitations ([05:16]–[09:35])
- Skepticism vs. Realism ([09:35]–[11:22])
- Alignment and Superintelligence Debate ([14:21]–[16:23])
- AI for Good – AlphaFold Example ([18:03]–[19:36])
- AI, Propaganda, and Politics ([19:36]–[24:31])
- Tech Oligarchy & Techno-fascism ([24:31]–[28:58])
- The Challenge of Resistance ([28:58]–[32:29])
Tone and Language
The conversation is candid, incisive, and direct, with both Kasparov and Marcus blending humor, skepticism, and urgency. They cut through technological hype, focusing on risks, ethical dilemmas, and the need for civic action. Kasparov is pragmatic and sometimes wry; Marcus is passionate but realistic, with flashes of optimism grounded in historical memory.
Conclusion
“The Computer Scientist” presents a bracing assessment of AI’s real influence on democracy, warning against both panic and complacency. The episode closes with the imperative that America’s fate—autocratic or democratic—depends not on the machines, but on the political will of its citizens.
Most Memorable Call to Action:
"We are America, and we still could. And we should. Our fate rests 100% on political will."
— Gary Marcus ([32:22])
