The Political Scene | The New Yorker
Episode: How ChatGPT Will Strain a Political System in Peril
Host: Tyler Foggatt
Guest: Joshua Rothman (Writer/Editor, The New Yorker)
Date: March 1, 2023
Overview
This episode delves into how ChatGPT and large language models (LLMs) are poised to transform political discourse and the way information circulates in society. Host Tyler Foggatt talks with Joshua Rothman about the technology’s capabilities, its impact on political campaigning and online culture, the potential for misinformation at scale, and the challenge of distinguishing human-authored texts from AI-generated ones. The conversation raises pressing questions about trust, authenticity, and the future of civic engagement in an increasingly AI-mediated world.
Key Discussion Points & Insights
1. How ChatGPT Works and Its Novel Capabilities
[04:18]
- Nature of LLMs: Rothman explains that ChatGPT is a large language model trained on billions of words. It generates text by predicting what comes next in a sequence, not by reasoning or having true understanding.
- "What it can do is produce text that feels intelligent, that feels creative, feels insightful, but what it can't do is... it's not actually thinking, it's just putting one word after another." (Joshua Rothman, 06:08)
- Threshold of Power: The technology has crossed a threshold in both capability and believability, creating conversations that feel real and human.
- "Most people who use it feel that it's revolutionary... you're interacting with something that feels very much like a mind." (Joshua Rothman, 06:53)
2. Transforming Political Discourse and Engagement
[07:41]
- Impact on Social Media: Rothman suggests that much of political life now unfolds online through text, and ChatGPT upends the assumption that text online is authored by actual people.
- "We've always assumed that... when we see text next to someone's name, it's been written by a person. Now... the wholesale generation of text that seems totally real...is possible." (Joshua Rothman, 08:09)
- Automation of Debate: The future may see politicians and others automating social media engagement and debate, possibly leading to AIs arguing with other AIs.
[10:12]
- Widespread Adoption: The technology will be used by everyone from average citizens to professionals and political organizations, fundamentally changing how campaigns communicate.
3. Erosion of Traditional Signals and Trust
[11:53]
- Devaluation of Volume: Where the sheer volume of text (e.g., letters to politicians, public comments) once signaled genuine engagement or opinion, AI generation will undermine that measure, draining such signals of their significance.
- "Any context where just the sheer volume of written communication used to signify something... it will become less and less and less significant..." (Joshua Rothman, 12:00)
- Blurred Authenticity: As the distinction between human and AI writing fades, trust in text as evidence of someone’s thoughts or beliefs erodes.
- Comparison to Content Mills: Rothman likens the coming text landscape to current "content mills" online, where semi-trustworthy information is everywhere.
4. Societal Adaptation and Skepticism
[15:21]
- Navigating AI-generated Text: Foggatt and Rothman discuss whether society is already well-conditioned to doubt online information, or if the scale and sophistication of AI-generated text will overwhelm existing defenses.
- Disconnection from Reality: The internet provides more information but can feed polarization, as people select whatever facts reinforce their chosen side.
- "Our politics can feel at times disconnected from reality...we simply choose whatever information will help us feel jazzed about our side." (Joshua Rothman, 17:54)
5. Is ChatGPT Politically Biased?
[19:16]
- Rules and Guardrails: ChatGPT’s outputs are governed by safety rules (e.g., avoiding slurs, controversial topics) that some interpret as political bias ("wokeness").
- "Are there rules being put in place by the designers... that have political effects on its output? The answer's got to be yes." (Joshua Rothman, 20:01)
- Data Imprints: The model’s outputs are also shaped by the data it processes, making its "politics" a function of patterns in human language rather than intent.
6. Design of Partisan and Custom AI Tools
[21:56]
- Right/Left-leaning Bots: There are already attempts to create LLMs with particular ideological leanings, either by customizing training data or adjusting underlying model “weights.”
- "There have been various projects to create a more right wing GPT and a more left wing GPT and a socialist GPT and so on." (Joshua Rothman, 21:41)
- Interactivity vs. Passive Algorithms: Unlike passive algorithmic feeds (e.g., social media), LLMs are interactive, allowing users to shape content on demand.
7. Potential for Personalized (and Persuasive) Mass Communication
[26:04]
- Disinformation and Persuasion: Both good and bad actors will use LLMs, making misinformation even more ubiquitous.
- "I think we're going to get it on every side... there'll be a lot more auto generated and of semi factual nature." (Joshua Rothman, 26:10)
- Hyper-personalized Messaging: Campaigns could use LLMs to craft emails and messages tailored specifically to individual recipients—something no team of staffers could match.
8. Changing Job Roles and Social Norms
[27:29]
- Job Transformation: Roles like campaign staffers, whose job is mass communication and correspondence, may shift to AI prompt-writing.
- New Skepticism: As AI authorship of communication becomes more prevalent (e.g., the Michigan State incident), people may demand authenticity via disclaimers, or seek out in-person and video interactions as proof of reality.
- "Will we feel cheated? Will the general value we place upon that text decrease? ... If I had to guess, I'd say we won't feel cheated, but the value we place in the text will have decreased." (Joshua Rothman, 30:39)
Memorable Quotes & Moments
- On ChatGPT’s Role in Society:
"What does it mean when software is in the mix and when ideas that we attribute to individuals...are now coming from just like this font of words? It freaks me out basically is the bottom line."
(Joshua Rothman, 16:34)
- On Skepticism and Value:
"The period of time in which we related to things we saw online as not fake is going to end because it's so easy to fake these things now."
(Joshua Rothman, 33:30)
- On Personalized Mass Emails:
"The mass communication that's personalized for the recipient is something that's almost an oxymoron today... But I don't see any reason why that won't change now that generating text is so much easier and the system is so much better."
(Joshua Rothman, 27:40)
- On Future Norms:
"At The New Yorker we're gonna have to start using disclaimers like, in the same way that a movie is like 'there were no animals harmed in the making of this film.' It's like, 'there was no ChatGPT used in the writing of this article.'"
(Tyler Foggatt, 30:12)
Timestamps for Key Segments
- [01:16] – Host and guest interact live with ChatGPT to draft podcast prompts, illustrating its process and flaws
- [04:18] – How large language models work and their limits
- [07:41] – ChatGPT’s potential to reshape political communication and the nature of online engagement
- [11:53] – Devaluation of traditional signals like letter-writing and the rise of “unlimited plausible text”
- [15:21] – Societal adaptation to semi-trustworthy information and the growing need for skepticism
- [19:16] – Discussion of political bias, neutrality, and the push to create ideologically leaning AIs
- [26:04] – Future of AI in political campaigns: personalized persuasion, disinformation, and the impact on jobs
Conclusion
This episode presents a nuanced, sometimes uneasy analysis of how rapidly advancing AI language models may disrupt political systems and culture. The conversation points to inevitable changes in how people relate to text, information, and authority—in politics and beyond—and raises the stakes for critical thinking and skepticism in the AI era. The hosts leave listeners with a sense that we’re at the cusp of fundamental and unpredictable shifts in civic life and personal trust.