The Lawfare Podcast: Will Generative AI Reshape Elections?
Release Date: November 15, 2025 (Archive episode from November 29, 2023)
Host: Quinta Jurecic
Guests: Matt Perault & Scott Babwah Brennan (UNC Center on Technology Policy), Eugenia Lostri (Lawfare Fellow)
Episode Overview
This episode of The Lawfare Podcast (“Arbiters of Truth” series) examines whether and how generative AI might reshape election campaigns and political information ecosystems. With the proliferation of AI-generated content and a major election year approaching, the panel delves into the actual risks, public fears, empirical findings, and policy quandaries at the intersection of technology and democracy.
Key Discussion Points and Insights
1. Report on Generative AI in Elections (06:04)
- Matt Perault introduces a new report aiming to bring empirical rigor to discussions about generative AI’s impact on political ads.
- The report focuses on alleged harms: scale, authenticity, personalization, and bias.
- The panel stresses that while many fears are overstated in public discourse, concerns like increased bias and effects on smaller/local races are potentially understated.
- Traditional interventions like watermarks and disclaimers are likely insufficient; the report advocates targeting electoral harms over technologies and investing in better data and research.
“Many of the harms have actually been overstated… But that doesn’t mean there aren’t any harms. The harms that we thought the literature suggests we should pay more attention to… are the potential use of generative AI in down ballot… as well as the harms… related to bias.”
— Matt Perault [07:22]
2. How Speculative Are the Risks? (08:25)
- Eugenia Lostri cites a recent Argentinian election where obvious AI content had little discernible impact.
- Scott Brennan concurs that the discourse is highly speculative; concrete U.S. examples (e.g., AI fighter jets in DeSantis campaign ads, fake Trump-Fauci images) have had little observable effect on voter persuasion.
- Literature suggests political ads rarely change voter choices (13:25).
“Misinformation and political ads for, you know, most of the time have limited impact on persuasion, on who we ultimately vote for.”
— Scott Babwah Brennan [10:52]
3. Do Political Ads Work—AI or Not? (12:39)
- The literature review shows little evidence that political advertisements change voting behavior.
- Ads are more likely to influence turnout, donations, or sign-ups rather than actual candidate choice.
"Our best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero."
— Scott Babwah Brennan, paraphrasing research [13:44]
- Concern about presidential-level races may be overblown; local races with less scrutiny are more vulnerable.
“Those are the kinds of races where… the use of something like generative AI to increase the volume of problematic content might be more likely to have an effect.”
— Matt Perault [15:58]
4. Local Impact and Asymmetry of Power (16:41)
- Quinta Jurecic observes that AI attacks may disproportionately harm marginalized or local figures who lack the stature and resources to counter them.
- The panel strongly agrees, emphasizing the greater risks for less-prominent candidates and ordinary individuals.
5. Watermarks, Disclaimers, and "Learning Elections" (18:05)
- Matt Perault urges that elections be treated as opportunities for policy experimentation: let’s empirically test the effectiveness of watermarks, disclaimers, and similar interventions.
“Our fear... is that we will go through this election cycle not learning... and not unlocking and exploring some of these nuances that we hope will inform smarter public policy in the future.”
— Matt Perault [19:19]
6. The Four Main Harms (19:58)
Scale:
- AI can greatly increase the volume of content, but more content does not mean greater persuasive impact (20:34).
Authenticity:
- Most impactful misinformation uses “cheap fakes,” not sophisticated deepfakes (21:09).
- Studies cite examples like the Nancy Pelosi “slurred speech” video as more influential than photorealistic AI.
Personalization:
- AI can highly personalize ads, but its real-world effect on votes is unclear.
Bias:
- AI models may amplify existing societal and systemic biases, and this risk is currently understated (23:55).
7. Distribution vs. Generation Bottleneck (26:27)
- The real challenge for misinformation is distribution, not just creation.
- Having more falsehoods doesn't matter if there’s no effective way to get them in front of voters (27:33).
- Exception: Large, multi-platform, ideologically rooted campaigns (e.g., climate denial, anti-vax, election denial) can have significant impact if they coordinate content and distribution (29:39).
8. Platform Policies and the “Partial Solution” Debate (33:01)
- Platforms (Meta, Google) are experimenting with labels and watermarks but are doing so without standardized evidence of efficacy.
- There’s a concern that focusing solely on platform interventions draws attention away from broader structural reforms, e.g., federal laws on deceptive electoral practices, which do not currently exist in the US (31:10).
- The panel invokes a Bertolt Brecht play as an analogy for how partial solutions can delay or obscure deeper, systemic change.
9. Government Roles: Executive Order and States (38:32, 42:54)
- Recent Executive Orders require the Department of Commerce to develop content authentication guidance.
- Panelists argue that more formal, government-funded research is needed to determine which interventions actually work.
- Over-standardization may limit innovation in platform approaches.
"I do have a little bit of concern of moving in a homogenous direction ... it cuts out some innovation at the edges."
— Matt Perault [41:31]
- States are legislating their own solutions, ranging from narrow (mandating AI in road inspections in West Virginia) to broad (comprehensive — but not passed — bills in CA and MA) (43:18).
10. Policy Proposals & Flooding the Zone with Factual Content (45:12)
- Scott Babwah Brennan: Especially for local races, governments and stakeholders should counter “flood the zone with shit” tactics by proactively producing lots of high-quality factual content.
- Matt Perault: Recommends more funding and expertise for law enforcement and civil rights enforcement—using existing laws to pursue actual cases of voter suppression, bias, and discrimination.
- DOJ’s recent prosecution of Twitter-based voter suppression underlines the importance of having resources and technical expertise to follow up on real harms (48:19).
11. Looking Forward: Optimism, Pessimism, and Trust (49:22)
Matt Perault:
- On balance, generative AI's potential for empowering marginalized voices and reducing barriers may outweigh its harms.
- Worries that lack of institutional learning means we'll be asking the same questions after each election.
Scott Babwah Brennan:
- Optimistic about the attention policy issues are getting, but more pessimistic about the effect on public trust.
- The “liar’s dividend” (where bad actors claim real evidence is fake) may undermine faith in institutions.
- AI may further erode trust regardless of its “actual” measurable impact.
“People believe that it [AI deception] is a big problem. And that is concerning to me. We know that trust across institutions is falling and continues to fall every year. I don’t see how this cannot worsen the problem.”
— Scott Babwah Brennan [52:22]
Notable Quotes and Memorable Moments
“Flood the zone… with good factual content to help kind of potentially drown out or dilute, you know, efforts by bad actors to introduce deceptive content.”
— Scott Babwah Brennan [45:17]
“It is not a violation of federal law to use deceptive practices in voting with the intent of suppressing the vote, which is astonishing…”
— Matt Perault [31:14]
“Let’s use the 2024 election to understand that intervention [watermarks/disclaimers] better, that we use it as an experiment…”
— Matt Perault [19:19]
“Most impactful disinformation is cheap fakes, not deep fakes.”
— Scott Babwah Brennan [21:09]
“I guess a bit both–a bit optimistic... a bit pessimistic though about what the future will bring.”
— Scott Babwah Brennan [53:44]
Timestamps for Important Segments
- [06:04] — Overview of report on generative AI and elections
- [08:25] — International examples, speculative nature of AI harms
- [12:39] — Do political ads really change minds? Literature review findings
- [16:41] — Disproportionate impact on local races and less powerful individuals
- [19:58] — Four main alleged AI harms: scale, authenticity, personalization, bias
- [26:27] — Creation vs. distribution: what matters for misinformation?
- [31:10] — US legal gaps (federal law on vote suppression)
- [38:32] — Executive order on AI: watermarking, disclaimers, concerns
- [43:18] — State-level AI regulation
- [45:12] — Policy recommendations: flooding the zone, funding enforcement
- [49:22] — Predictions for 2024 and beyond; optimism and pessimism about AI and democracy
Conclusion
The episode offers a nuanced, empirically grounded conversation about the real and perceived impacts of generative AI on elections. While media and policy attention gravitate toward apocalyptic scenarios and flashy deepfakes, the panel argues that the most pressing concerns may be subtler: effects on local races, amplified bias, and further erosion of trust in public institutions. The panelists close with a strong call for measured, research-driven policies and for treating upcoming election cycles not merely as risks to be endured, but as opportunities for rigorous learning.
