Podcast Summary
Podcast: New Books Network
Episode: The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
Date: April 13, 2026
Host: Jeffrey Hurley Humera (with co-host Alex Rivera Cartagena)
Guests: Alex Hanna (Distributed AI Research Institute), Emily Bender (University of Washington)
Book Discussed: The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2025, Harper)
Overview
This episode features an in-depth interview with Alex Hanna and Emily Bender, co-authors of The AI Con, a critical book that dissects the myths, harms, and power structures behind modern artificial intelligence. Through a lens rooted in linguistics, sociology, and decolonial thought, the conversation explores AI's marketing hype, its impact on human relationships and learning, embedded biases, and the societal responsibilities of universities and public institutions. The discussion is particularly attentive to Puerto Rican cultural, linguistic, and civic contexts.
Key Discussion Points and Insights
1. Origins and Motivations Behind the Book
[07:17–10:32]
- Emily Bender: Her background is in linguistics and computational linguistics, with a focus on how language models work and why they don't truly "understand" language. She emphasizes that she recognized AI's societal risks earlier than most, through academic and public conversations.
"Coming at it with some disciplinary knowledge… about how language models work, how we know they don't understand, but why it is that they seem to understand what we're saying." (07:35, Emily Bender)
- Alex Hanna: A sociologist focused on social movements and technology, she initially used computational tools for social science but grew increasingly concerned about their impacts on marginalized communities, especially around data sets and labor practices.
"The more I got enmeshed in this, the more I realized how much different institutions are interested in these types of tools and how they could be used for things like surveillance and harms to marginalized communities." (08:49, Alex Hanna)
- Their professional relationship began online, producing joint research and eventually a podcast that led to the book.
2. AI Hype and Its Real-world Effects
[05:05–06:56], [13:06–15:06]
- The authors and hosts critique the marketing narrative that frames AI as inevitable and universally beneficial, exposing how this hype serves corporate interests and masks labor exploitation, environmental costs, and social harms.
- Emily Bender: Expresses frustration over university administrations uncritically adopting AI as the future and reshaping curricula at the expense of genuine intellectual engagement.
"It's extremely frustrating... to watch my institution... getting super excited about this technology and saying we have to reshape everything we're doing to... prepare our students for the AI workplace or whatever, when it's all just marketing..." (13:06)
3. Universities, Learning, and the Threat to Human Interaction
[06:14–08:30], [13:06–18:44]
- The panel reflects on how algorithmic tools replace the deep, slow, and messy intellectual work of struggling with ideas, grappling with confusion, and developing one's voice, flattening learning into artifact production.
- Jeffrey Hurley Humera: Worries that "as automated systems become the default co-conditions of learning," shared intellectual life, mentorship, and attention are eroded.
- Alex Hanna: Links current AI trends to pre-existing neoliberal shifts in universities (adjunctification, modularization, consumerization of education), warning that AI amplifies these harmful trajectories.
"We're in a prime neoliberal era...the reduction of state funds, casualization of academic labor and modularization of curriculum...coming at the harm...to students because now...they are engaged in this very product-oriented, consumer-oriented relationship." (15:06)
4. AI's Embedded Racism, Classism, and Eugenicist Roots
[18:44–28:19]
- Alex Rivera Cartagena: Cites Ivan Illich, noting that "the real privilege is not using these technologies...defense against the damages inflicted by development...become[s] the most sought after privilege."
- Alex Hanna: Delivers a history lesson on how AI's concept of "intelligence" originated during the Cold War and was always entangled with eugenics and scientific racism via the quantification and ranking of human ability.
"The term AI...is a marketing term...intelligence...has a very long and eugenicist history...metricization of intelligence leads down that path quite quickly." (20:37)
- Training data reflects social biases, and deployments further marginalize those with fewer resources: they are the first to receive AI "solutions," which are typically cheaper, lower quality, and less humane (e.g., algorithmic tutors vs. real educators, automated social services vs. personal assistance).
"[Students in poorer districts] had to type in everything...Meanwhile, in private schools...there is much more hands-on instruction...AI is deployed in this racialized class way that really is injurious." (25:42)
- Emily Bender: Summarizes how any ideal of ranking or comparing "intelligence" is fundamentally rooted in racist, eugenicist ideology, and how using such frames, even to describe computers, reinforces that tradition.
"The very idea that you can rank people according to one property...has eugenicist and racist roots through and through." (27:43)
5. Language, Data Colonialism, and Puerto Rico’s Position
[28:19–32:45], [32:45–36:44], [47:37–49:00]
- Jeffrey Hurley Humera and guests discuss how language technologies (translation, search, LLMs) reflect the biases and access inequalities of their training data, threatening linguistic diversity.
"This AI circumstance is scraping all...those things [from the web], and your book made me think a lot about Puerto Rico..." (28:19)
- Puerto Rican traditions of civic participation, democratic dialogue, and multilingualism (Spanish, English, Spanglish) are assets, but also vulnerabilities as Big Tech seeks to "philanthropically" offer AI solutions that are never truly "free."
"Oftentimes the imposition of technology is presented as if it were philanthropy...But the thing is, it's not really free because...it is, among other things, a way for these companies to gain access to data..." (30:18, Bender)
- Alex Hanna: Data colonialism is a real threat, with Big Tech exploiting "lax regulatory environments" (seen globally and in Puerto Rico) for data extraction and infrastructural grabs (e.g., data centers).
"Thinking about how those [colonial] pathways get reinscribed...there's specific land and water and energy grabs...we're kind of seeing this in PR to some degree with...the crypto community..." (32:45)
6. Resistance, Opportunities, and Practical Guidance
[38:53–42:45], [43:57–47:37]
- Cultural resilience: After Hurricane Maria, the question "What have you been reading?" replaced tech-driven social interaction, highlighting the value of analog, communal knowledge-sharing.
"'What have you been reading?' is beautiful. And one question is how do we get there without a natural disaster?" (38:53, Bender)
- On bans and tech limits for youth: While some countries restrict social media for users under 16, both Bender and Hanna favor cultivating authentic connections and holding tech companies accountable rather than outright bans.
"I think that pushing more on supporting connections, supporting authenticity and then in fact requiring accountability of the tech companies rather than banning young people...is going to be a more effective direction." (42:15, Bender)
- How Students and Communities Can Apply Lessons from The AI Con:
- Identify where AI is being imposed in daily life (school, work, social services)
- Ask critical questions about any "AI" system encountered:
  - What's actually being automated?
  - What are the inputs/outputs?
  - How is it evaluated, and by whom?
  - Is it being anthropomorphized? Why?
  - What are the underlying labor and data practices?
  - Who is accountable when things go wrong? Who benefits?
- Use Puerto Rico’s unique civic and linguistic culture as a basis for resistance and democratic engagement.
"The thing that makes us so great in the Puerto Rican context is the vibrant civil society...already formed groups...really powerful pathways for political education and for...integration of society and tech that works for people." (43:59, Hanna)
- Linguistic Distance as an Advantage:
  - Rather than trying to make harmful technologies work "better" for local dialects, treat the fact that AI doesn't serve Puerto Rican Spanish well as a form of healthy resistance.
“Rather than focusing energy on getting this technology...to work better for...[local Spanish], I think it’s worth more using that distance as an advantage and saying...Let's look further into what the possible harms are and what questions we should ask before using it.” (47:37, Bender)
Notable Quotes & Memorable Moments
- On AI as an empty promise:
"...there is nothing intelligent about artificial intelligence. It is only that, an artificial emptiness." (06:02, Alex Rivera Cartagena)
- On bureaucratic acceptance of AI hype:
"Do these people actually not want to do their jobs? Like, why are you here if you aren't going to be in the business...of helping [students] grow as scholars?" (13:46, Emily Bender)
- On entrenched bias:
"Anytime someone says that their computer is intelligent at some level, on some imagined scale of people...they are referencing that same racist eugenicist concept." (27:43, Emily Bender)
- On Puerto Rico's unique resilience:
"'What have you been reading?' is beautiful...How do we get there without a natural disaster?" (38:53, Emily Bender)
- On practical resistance:
"Let's look further into what the possible harms are and what questions we should ask before using it." (47:37, Emily Bender)
Timestamps for Key Segments
- Backgrounds & Path to the Book: 07:17–10:32
- State of AI Hype & University Reactions: 05:05–06:56, 13:06–15:06
- Learning, Struggle, and Human Connection: 13:06–18:44
- Roots of Racism, Classism, and Eugenics in AI: 18:44–28:19
- Language, Data Colonialism, and Puerto Rico: 28:19–36:44, 47:37–49:00
- Cultural Resilience and How to Resist: 38:53–42:45, 43:57–47:37
Conclusion
This episode provides a nuanced, critical, and context-rich conversation about AI’s societal effects, the myths perpetuated by Big Tech, and the dangers of ceding educational and cultural authority to algorithmic systems. Drawing on Puerto Rican experiences and broader histories of resistance, the guests and hosts stress the importance of skepticism, solidarity, and local agency in shaping a technological future that serves human flourishing—not corporate profit.
