Intelligent Machines Podcast – Episode IM 819: "Put The Fries in the Bag - Chaos at the Copyright Office"
Release Date: May 15, 2025
Hosts and Guests:
- Leo Laporte: Host of Intelligent Machines and technology enthusiast.
- Jeff Jarvis: Professor Emeritus of Journalistic Innovation at the Craig Newmark Graduate School of Journalism, currently affiliated with Montclair State University.
- Paris Martineau: Tech journalist and investigative reporter.
- Emily Bender: Co-author of the influential paper "Stochastic Parrots" and co-author of "AI Con."
- Alex Hanna: Director of Research at the Distributed AI Research Institute and co-author of "AI Con."
Introduction and Setting the Stage (00:00 – 03:00)
The episode begins with Leo Laporte expressing his frustration about discussing the controversial topics surrounding AI, particularly referencing the upcoming conversation with Emily Bender and Alex Hanna, authors of the book "AI Con: How to Fight Tech's Big Hype and Create the Future We Want." The initial banter among the hosts touches on light-hearted topics like pet care, setting a casual tone before delving into serious discussions.
Leo Laporte:
"Our guests this week, Emily Bender and Alex Hanna, have written a book called the AI Con. They say it's all a con. It's all terrible. I'm gonna go crazy."
(Timestamp: [00:00])
Chaos at the Copyright Office (03:00 – 19:36)
The conversation shifts to a major news story: the turmoil within the U.S. Copyright Office. Two key figures, Carla Hayden (Librarian of Congress) and Shira Perlmutter (Register of Copyrights), were reportedly fired following the publication of a controversial report on AI's use of copyrighted materials. The report, issued under Perlmutter, questioned whether training AI on copyrighted works qualifies as fair use, prompting significant backlash from major tech leaders like Sam Altman and Elon Musk.
Leo Laporte:
"The pre-publication version of this said that there are some issues about AI training itself on copyrighted works. And it was their opinion that that was not fair use, which of course is the position that Sam Altman and Elon Musk and a lot of other AI people... The Librarian of Congress said, yeah, well, maybe not. At which point she was fired."
(Timestamp: [04:18] – [05:05])
Jeff Jarvis:
"Both of them, again, both of them were the Librarian of Congress and the head of the Copyright Office."
(Timestamp: [05:05])
The hosts discuss the implications of these firings, highlighting concerns over editorial independence and the potential chilling effect on AI research and development. Cathy Gellis, a prominent advocate for fair use, is mentioned for her critical stance on the Copyright Office's report.
Paris Martineau:
"And then I once again disagree that I should have the same rights that people do. I don't think that a company... should have the same rights."
(Timestamp: [05:51] – [06:08])
They delve into the broader debate over fair use, the First Amendment, and the rights of AI systems to "read" and process copyrighted material.
Leo Laporte:
"If you're not free to do that, then are you really free to read at all? Which the First Amendment says you are."
(Timestamp: [07:04])
Jeff Jarvis:
"I think it's going to be the courts that decide this, not this office."
(Timestamp: [16:30])
The discussion underscores the tension between legislative actions, judicial interpretations, and the rapid advancement of AI technologies, raising questions about the future regulatory landscape.
Surveillance and Privacy Concerns with AI (19:36 – 55:59)
The conversation transitions to the integration of AI in everyday devices, focusing on Meta’s new smart glasses rumored to have built-in facial recognition capabilities. The hosts express concerns over privacy, potential misuse for stalking, and the broader societal implications of ubiquitous surveillance technologies.
Paris Martineau:
"Like, if you're able to suddenly then just walk down the street, look at any cute girl or guy or person you want to suddenly know the address, phone number and email of."
(Timestamp: [21:46] – [22:20])
They discuss the technical aspects and potential abuses of such technologies, emphasizing the need for stringent ethical standards and oversight.
Alex Hanna:
"I am happy to talk about good and bad uses of automation, but I'm not going to talk about good and bad uses of AI because that sort of presupposes that AI is a thing as opposed to an ideological project."
(Timestamp: [56:30] – [56:36])
The hosts also touch upon a new federal bill that aims to prevent states from interfering with AI development, highlighting the ongoing legislative battles surrounding AI regulation.
Leo Laporte:
"Attorney General Bailey says this rule marks the beginning of a sustained effort to dismantle the big brother speech control machinery of corporate America."
(Timestamp: [42:41] – [43:32])
Interview with Emily Bender and Alex Hanna – "AI Con" (55:59 – 158:43)
Introduction of Guests (47:50 – 48:37)
Leo introduces Emily Bender and Alex Hanna, acknowledging their critical perspectives on AI's current trajectory and the broader implications of AI hype.
AI Hype and Its Consequences (49:32 – 72:51)
Alex Hanna:
"Synthetic text, I think is problematic. Synthetic images. So image generators... have these different features that Emily spoke about and people are losing jobs to it left and right."
(Timestamp: [59:38] – [60:39])
Emily Bender and Alex Hanna articulate their concerns about the over-hype of AI technologies, arguing that much of the current enthusiasm overshadows the real-world consequences, including job displacement, environmental impact, and ethical dilemmas.
Emily Bender:
"There are applications of machine learning that are well scoped, well tested, and involve appropriate training data such that they deserve their place among the tools we use on a regular basis... But in the cacophony of marketing and startup pitches, these sensible use cases are swamped by promises of machines that can effectively do magic."
(Timestamp: [57:24] – [59:38])
They emphasize the need for responsible AI development, transparent reporting, and ethical considerations to mitigate the negative impacts of AI advancements.
Leo Laporte:
"I believe your stance is valid, but there is some real value in these tools."
(Timestamp: [65:29] – [65:32])
Alex Hanna:
"The relationship that you have with a person... ought to be different than the relationship you have with an LLM."
(Timestamp: [62:51] – [63:00])
The guests advocate for distinguishing between human interactions and AI-generated content, stressing that AI lacks genuine understanding and accountability, which can lead to misinformation and erosion of trust in digital ecosystems.
Defining AI and Its Implications (72:30 – 89:45)
The discussion delves into the semantics of "AI," critiquing the anthropomorphic language often used to describe AI systems, which can misleadingly attribute human-like intelligence and consciousness to machines.
Alex Hanna:
"Languages are systems of sign... The meaning is not in the text. We get to the meaning because we bring in our knowledge of linguistic systems... And that's what a language model gets as its input is just the form of the text."
(Timestamp: [75:00] – [76:25])
They argue for more precise terminology to prevent misunderstanding and misuse of AI technologies, highlighting the importance of clear communication in shaping public perception and policy.
Environmental Impact of AI Technologies (89:47 – 105:07)
Emily and Alex highlight the significant environmental costs associated with AI development, particularly the energy consumption of large-scale models and data centers. They draw parallels to historical technological advancements, such as the automobile industry's environmental impact, to underscore the need for sustainable AI practices.
Emily Bender:
"Data center production is actively inhibiting the climate goals that the Paris Agreement set out."
(Timestamp: [70:13] – [71:14])
The conversation emphasizes the dual-edged nature of technological progress, recognizing AI's potential benefits while critically assessing its environmental and societal costs.
Call for Responsible AI Journalism and Regulation (105:07 – 132:35)
The guests call for a return to principled journalism practices, advocating for skepticism towards AI product claims and the promotion of accountable reporting. They stress the importance of understanding who benefits from AI technologies and scrutinizing the underlying motivations of tech companies.
Jeff Jarvis:
"Journalism has been... very credulous about what products do and why we should be wowed. We need to go back to the first principles of journalism."
(Timestamp: [83:09] – [83:30])
They argue that responsible journalism is crucial in navigating the AI landscape, preventing the unchecked proliferation of misleading information, and ensuring that AI development aligns with societal needs and ethical standards.
Interactive and Humorous Segments (158:43 – 161:10)
The latter part of the episode features light-hearted interactions among the hosts, including discussions about Sam Altman's household appliances, Nicolas Cage films, and playful banter about AI-generated content. These segments balance the heavy discussions with entertainment, maintaining listener engagement through humor and relatability.
Leo Laporte:
"But you still got one more ad though."
(Timestamp: [158:35] – [159:02])
Uncle Tony (Guest Segment):
"It's low key a huge W to be vibing here at Westtown High School for Languages Week."
(Timestamp: [149:16] – [149:44])
These moments highlight the hosts' personalities and their ability to interweave informative content with entertaining dialogues.
Concluding Remarks and Future Directions (161:10 – End)
As the episode wraps up, the hosts reflect on the discussions about AI's role, regulatory challenges, and the importance of informed discourse. They tease upcoming content, encouraging listeners to engage with future episodes that continue to explore the evolving landscape of AI and its intersection with society.
Leo Laporte:
"Thank you, Jeff, for and Bonita, for getting Emily Bender and Alex Hanna on the show there. Book the AI Con. Never invite them back."
(Timestamp: [153:34] – [153:35])
Jeff Jarvis:
"Are we capped?"
(Timestamp: [157:43] – [157:44])
The hosts emphasize their commitment to providing diverse viewpoints and fostering informed conversations about AI, ensuring that listeners are equipped to navigate the complex and rapidly changing technological environment.
Key Takeaways:
- Regulatory Turmoil: The firing of key figures in the Copyright Office underscores the contentious debates over AI's use of copyrighted materials and the balance between fair use and intellectual property rights.
- AI Hype vs. Reality: Emily Bender and Alex Hanna critique the exaggerated promises of AI technologies, highlighting the real-world consequences such as job displacement, environmental impact, and ethical concerns.
- Privacy and Surveillance: The integration of AI into consumer devices raises significant privacy issues, emphasizing the need for robust ethical frameworks and regulations to prevent misuse.
- Responsible AI Development and Journalism: There is a pressing need for principled journalism and accountable reporting to navigate the complexities of AI, ensuring that technological advancements serve societal interests without compromising ethical standards.
- Environmental Impact: The AI industry's substantial energy consumption poses challenges to sustainability goals, necessitating the adoption of greener practices and technologies.
- Clear Terminology and Understanding: Distinguishing between human intelligence and AI-generated outputs is crucial to prevent misinformation and foster a realistic understanding of AI capabilities and limitations.
Notable Quotes:
- Leo Laporte: "If you're not free to do that, then are you really free to read at all? Which the First Amendment says you are." (Timestamp: [07:04])
- Alex Hanna: "Synthetic text, I think is problematic. Synthetic images. So image generators... have these different features that Emily spoke about and people are losing jobs to it left and right." (Timestamp: [59:38])
- Emily Bender: "There are applications of machine learning that are well scoped, well tested... But in the cacophony of marketing and startup pitches, these sensible use cases are swamped by promises of machines that can effectively do magic." (Timestamp: [57:24] – [59:38])
- Jeff Jarvis: "Journalism has been... very credulous about what products do and why we should be wowed. We need to go back to the first principles of journalism." (Timestamp: [83:09] – [83:30])
Conclusion:
Episode 819 of Intelligent Machines offers a comprehensive exploration of the current challenges and debates surrounding AI, particularly focusing on ethical considerations, regulatory frameworks, and the societal impacts of AI technologies. Through insightful discussions with experts Emily Bender and Alex Hanna, the episode underscores the necessity for informed, principled approaches to AI development and journalism, advocating for a future where AI serves humanity responsibly and ethically.