Podcast Summary: What Trump Can Teach Us About Con Law
Episode: Deepfakes and Lying Liars
Host: Roman Mars
Guest: Professor Elizabeth Joh
Date: September 24, 2024
Brief Overview
This episode explores the constitutional and political ramifications of deepfakes—realistic, AI-generated media artifacts designed to mislead, particularly in the context of US elections. Professor Joh and Roman Mars discuss the blurry lines between parody, protected political speech, and harmful deception, reviewing relevant state laws, Supreme Court precedents, and recent real-world events involving deepfakes and viral misinformation.
Key Discussion Points & Insights
1. Historical Context of Manipulated Images
- The episode opens by recounting how image doctoring is not new, citing an 1860s printmaker who spliced Abraham Lincoln’s head onto John C. Calhoun’s body for a popular engraving (04:44).
- “The Lincoln engraving tells us that the practice of doctoring photos, including pictures of the President, is not new. But there were no bad consequences from Lincoln’s mashup portrait.” (02:41, Elizabeth Joh)
2. What Are Deepfakes?
- Deepfakes use generative AI to create lifelike images, audio, and videos that never really happened, often indistinguishable from the real thing (04:44-06:56).
- AI tools have democratized the ability to create sophisticated fakes, posing risks far greater than old-fashioned photoshopping.
3. The Harms of Deepfakes
- Deepfakes have already been weaponized in election contexts internationally and domestically:
- Example: A deepfake audio of President Biden threatening war on Texas; fake images of Trump being arrested (06:58-09:55).
- Manipulated and AI-generated images are steadily improving, with telltale “glitches” like distorted hands becoming less common.
4. Legality and Regulation of Deepfakes
- No Comprehensive Federal Law: Neither the federal government nor most US states currently regulate election deepfakes directly (09:55-10:25).
- State Laws Vary:
- California’s law (since updated) bans distributing, with “actual malice,” a deepfake about a candidate that a reasonable person could believe, within 120 days (originally 60) before and 60 days after an election (10:30-11:26).
- Some states require disclosure or labeling; others attempt outright bans.
Notable Quote
“A law that would ban all deepfakes would probably violate the First Amendment. But that doesn’t mean that the government can’t regulate political speech… the government would have to show that a deepfake ban was necessary to prevent a very specific harm.”
— Elizabeth Joh (11:32)
- Laws must balance harm prevention with First Amendment free speech protections. Parody and satire, especially political parody, are strongly protected.
5. Satire, Parody, and the Supreme Court
- Hustler Magazine v. Falwell (1988):
- The Supreme Court protected a grotesque parody ad about Jerry Falwell, establishing that political satire is protected unless it includes “false statements of fact made with actual malice.” (13:54-17:01)
- California’s law, and others, often include explicit exceptions for satire/parody partly due to this precedent.
Notable Quote
“Nobody thought that it was really an interview with Falwell. It was just a joke making fun of Falwell’s sanctimonious reputation.”
— Elizabeth Joh (16:24)
6. Challenges and Practical Limits of Regulation
- Laws attempting to differentiate between malicious deepfakes and protected parody face difficulties:
- What’s the difference between harmful deception and sharp satire? (19:00)
- How can laws be enforced if the damage is viral and instant?
- Penalties are often trivial (e.g., Michigan’s 90 days in jail/$500 fine), potentially inconsequential to foreign actors or bad-faith domestic operatives.
7. Culture of Lying and Real-World Impact
- The Springfield, Ohio incident: Baseless rumors about Haitian immigrants “eating pets” were amplified by politicians (including JD Vance and Donald Trump), resulting in real-world panic, threats, and lockdowns—even in the absence of deepfakes. (21:10-24:32)
- JD Vance openly defended fabricating stories for political attention:
“If I have to create stories so that the American media pays attention to the suffering of the American people, then that’s what I’m going to do.” —JD Vance, cited by Joh (22:46)
8. Recent Legal Developments & Effectiveness
- California updated its law (as of Sep 17, 2024), expanding its window to 120 days before and 60 days after Election Day—hoping to prevent last-minute or even post-election deepfake disruptions (24:32-27:02).
- Cited example: A Kamala Harris deepfake widely shared by Elon Musk; after the new law, Musk re-posted it with a “parody” label in protest (25:45).
Notable Quote
“If you’re immunized by just saying, well, it’s a parody, so it doesn’t matter… Just slapping the label on seems to maybe not get to the point.”
— Roman Mars and Elizabeth Joh (26:23-26:47)
- Doubt remains about enforceability, definitions of “parody,” and the ability to stop or deter viral dissemination.
9. Reflection on Constitutional Adequacy
- Mars and Joh discuss the broader challenge: The Constitution was written assuming good faith (“the goodwill and sincerity of the people in charge,” 29:12).
- The system has difficulty countering bad actors who thrive on deception.
Notable Quote
“The Constitution doesn’t, like, have a remedy against useless assholes… Its sort of basis… is kind of like the goodwill and sincerity of the people in charge. And when you don’t have that, it really just breaks down so quickly.”
— Roman Mars (29:12)
- Deepfakes blur the line between satire and true deception, which was not envisioned by previous generations or by Supreme Court precedents crafted in a less technologically advanced era (31:51).
Notable Quotes & Memorable Moments
- On regulation and First Amendment tension:
“Even if they are deceptive. Lies. In fact, you and I have talked before about how lies can be protected by the First Amendment.”
—Elizabeth Joh (11:32)
- Parody and the law:
“No one would have read the Hustler ad of Jerry Falwell and thought, well, did he really say that? Nobody thought that. Right. But with deepfakes, there is that moment, or maybe you never realize that it’s completely made up.”
—Elizabeth Joh (31:21)
- On the limits of the Constitution:
“It is really like its sort of basis and how it functions is kind of like the goodwill and sincerity of the people in charge. And when you don’t have that, it really just breaks down so quickly.”
—Roman Mars (29:12)
- On new legal challenges:
“The creator of that video after the law was signed by Gavin Newsom just this week, has already filed a lawsuit in federal court saying that the law violates his First Amendment rights.”
—Elizabeth Joh (30:01)
Timestamps for Important Segments
- Historical photo manipulation & Lincoln example: 00:36–04:44
- What is a deepfake? 04:44–06:56
- Election deepfake examples (Biden audio, Trump images): 06:56–09:51
- Are deepfakes illegal? Review of state laws & First Amendment: 09:51–13:51
- Hustler Magazine v. Falwell – satire, parody, malice: 13:54–17:03
- Challenges in crafting/enforcing deepfake laws: 17:03–19:00
- Practical difficulties, impact of viral misinformation: 19:00–21:10
- Springfield, Ohio, pet-eating rumor: 21:10–24:32
- Recent California law update & Musk/Harris deepfake: 24:32–27:02
- Adequacy of the Constitution to address bad-faith actors: 27:48–30:33
- Legal challenge to California’s new law: 30:33–31:21
- Parody, intention to deceive, and new legal gray areas: 31:21–32:29
Conclusion: Final Thoughts
The episode illuminates the complexity, urgency, and evolving legal landscape surrounding deepfakes and political disinformation. The intersection of technology, free speech, and constitutional norms presents unresolved challenges for lawmakers, courts, and society. The hosts close by expressing uncertainty about the future, commenting on how deepfakes and deliberate lies are testing the limits and assumptions of American constitutional governance.
