Future Perfect: Good Robot #2 – "Everything is not awesome"
Podcast: Future Perfect (Vox)
Series: Good Robot (Episode 2 of 4)
Date: March 15, 2025
Host: Julia Longoria
Overview
In this episode, Julia Longoria dives into one of the central debates shaping the future of artificial intelligence: why do even well-intentioned AIs go wrong, and why is there so much disagreement among those who seek to solve AI’s dangers? Through the stories of Dr. Margaret Mitchell, Dr. Joy Buolamwini, Dr. Timnit Gebru, and others, the episode traces the roots of bias, misunderstanding, and hype in modern AI systems. The conversation illuminates the distinction—and the tension—between the "AI safety" and "AI ethics" movements, asking why each group focuses on different risks, whom AI harms today, and why calls to "pause" AI development are more contentious than they appear.
Key Discussion Points and Insights
1. The “Everything Is Awesome” Problem: How AI Goes Wrong
[01:18–06:08]
- Dr. Margaret Mitchell's AI Origin Story:
  - Early work at Microsoft (2013) focused on "vision to language" models: turning sequences of images into descriptions.
  - A dramatic example: the model described images of a deadly factory explosion as "awesome."
  - Quote:
    "So it sees this horrible, perhaps mortally wounding explosion and decides it's awesome."
    —Dr. Margaret Mitchell [04:17]
  - This became known as the "everything is awesome" problem: AI parroting the positive language often found in training data, regardless of context.
- Why Did This Happen?
  - The system was trained on images and captions from Flickr, which are biased towards "awesome" sunsets, not tragic scenes (see the sketch after this list).
  - Quote:
    "People tend to take pictures of like sunsets… when we are taking pictures, we like to say it's awesome… But that was a bias in the training data."
    —Dr. Margaret Mitchell [15:36]
  - The realization: AI learns and repeats the biases in its training data.
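To make that mechanism concrete, here is a minimal sketch in Python. It is not the actual Microsoft vision-to-language system, and all captions and names are invented for illustration: a toy captioner that ranks descriptive words purely by how often they appear in training captions will call a disaster "awesome" whenever the data skews positive.

```python
from collections import Counter

# Hypothetical Flickr-style training captions, skewed positive the way
# photo-sharing data tends to be (invented data for illustration).
training_captions = [
    "awesome sunset over the lake",
    "awesome day at the beach",
    "awesome view from the summit",
    "birthday party with an awesome cake",
    "factory fire downtown",  # tragic scenes are rare in such data
]

word_counts = Counter(
    word for caption in training_captions for word in caption.split()
)

def describe(image_tags):
    """Pick a descriptive word by training frequency alone, ignoring the
    image's actual content -- the bias the episode describes."""
    candidates = ["awesome", "tragic", "horrible"]
    return max(candidates, key=lambda w: word_counts[w])

# Even for a disaster image, the skewed data wins out.
print(describe(["explosion", "smoke", "injured workers"]))  # -> awesome
```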
2. The “Everyone Is White” Problem: Bias in AI Recognition
[09:08–13:56]
- Dr. Joy Buolamwini's Discovery:
  - Inspired by seeing MIT's "Kismet" robot, Dr. Joy became a robotics PhD student.
  - While working on facial recognition tech for digital masks, the system failed to detect her dark-skinned face until she donned a white masquerade mask.
  - Quote:
    "Before I even put the white mask all the way over my dark skinned face, the box saying that a face was detected appeared."
    —Dr. Joy Buolamwini [13:16]
- Findings and Impact:
  - She tested major facial-recognition AIs (Google, Microsoft, Amazon), which misidentified Black faces at alarming rates (e.g., labeled Oprah Winfrey "male").
  - The companies' high advertised accuracy concealed test sets skewed "over 70% men" and toward lighter skin tones, what Joy called "pale male datasets" [18:38]; the sketch after this section shows how such skew hides subgroup failures.
  - Gender Shades paper ([19:18]): showed racial and gender bias, leading Microsoft, IBM, and Amazon to temporarily halt facial-recognition sales.
- Real-World Consequences ([20:22]):
  - Robert Williams, a Black man, was wrongfully arrested after a misidentification by police facial recognition software.
  - Quote:
    "I got arrested for something I had nothing to do with… everybody with a driver's license or state ID is essentially in a photo lineup."
    —Robert Williams [20:16]
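The core idea behind Gender Shades, evaluating accuracy per intersectional subgroup rather than in aggregate, is easy to illustrate. Below is a minimal sketch with invented numbers (not the paper's data or any vendor's real results): a single aggregate score can look excellent while one subgroup fails badly.

```python
from collections import defaultdict

# Invented per-sample results: (skin_tone, gender, prediction_correct).
# The test set is dominated by lighter-skinned male faces, mirroring the
# "pale male datasets" the episode describes.
results = (
    [("lighter", "male", True)] * 70
    + [("lighter", "female", True)] * 20
    + [("darker", "female", True)] * 3
    + [("darker", "female", False)] * 7
)

# Aggregate accuracy looks great...
overall = sum(correct for *_, correct in results) / len(results)
print(f"aggregate accuracy: {overall:.0%}")  # 93%

# ...but disaggregating by subgroup exposes the failure the average hides.
by_group = defaultdict(list)
for tone, gender, correct in results:
    by_group[(tone, gender)].append(correct)

for group, outcomes in sorted(by_group.items()):
    print(group, f"{sum(outcomes) / len(outcomes):.0%}")
# ('darker', 'female') comes out at 30% while the aggregate reads 93%.
```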
3. The Birth of the AI Ethics Movement
[21:23–23:44]
- The episode visits a diverse AI ethics conference, contrasting it with rationalist (mostly white/male) gatherings.
- The field bloomed especially after 2020’s Black Lives Matter protests, with increasing focus on ongoing harms caused by biased AI.
- Google Firings: Drs. Mitchell and Gebru were both pushed out of Google after raising these issues internally (their landmark paper critiqued large, opaque models and their environmental costs).
- Quote:
"That firing really brought it in focus… That was the clarion call."
—AI Ethics Researcher [26:08]
4. AI Safety vs. AI Ethics: Two Camps, One Field
[31:18–36:59]
- Sigal Samuel (Vox reporter) introduces the longstanding divide between "AI safety" (focused on existential risks from a superintelligent "God"-like AI) and "AI ethics" (focused on current harms like bias and discrimination).
- Religious Overtones:
  - Both camps use language reminiscent of faith, with safety people worrying about apocalypse and ethics people about lived injustices.
  - Quote:
    "Have you ever noticed that… the more you listen to Silicon Valley people talking about AI, the more you start to hear echoes of religion?"
    —Sigal Samuel [32:44]
  - The "AI safety" label traces back to Eliezer Yudkowsky and the rationalists (who fear superintelligent AI could "kill everyone"), while "AI ethics" is more grounded in preventing real-world bias.
- Notable Divide:
  - Despite shared concerns about harm, the camps often clash: AI safety wants to pause big models to avoid hypothetical disasters; AI ethics warns this exaggerates AI's power and distracts from human injustices.
  - Quote:
    "Yeah, there is beef between the AI Ethics camp and the AI safety camp."
    —Sigal Samuel [35:59]
5. The “Pause” Letters: Tech Industry’s Call for a Timeout
[37:03–40:07]
- In 2023, industry leaders including Elon Musk and Steve Wozniak signed an open letter calling for a "pause" on AI development to consider risks, echoing rationalist fears.
- AI Ethicists Push Back: They argue this hype exaggerates AI's capacity, distracts lawmakers, and frequently serves industry interests (e.g., letting some companies keep building while calling for regulation).
- Emily Bender's "Parrot" and "Octopus" Arguments ([40:00–45:40]):
  - Large models like ChatGPT are probabilistic systems that parrot their training data; they don't "understand" in a meaningful sense (a toy sketch follows this list).
  - Octopus Thought Experiment: even the smartest system that can mimic communication (dots and dashes) has no true grasp of meaning.
  - Quote:
    "These are probabilistic systems that repeat back what they have been exposed to, and then they parrot them back out again."
    —Dr. Margaret Mitchell [45:15]
  - AI as a "parrot," not a god or a person: a comforting but humbling comparison.
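The parrot argument can be made concrete in a few lines of code. Below is a toy "stochastic parrot": a bigram model that continues text by sampling each next word from the frequencies observed in its training data. The corpus and seed are invented, and production models work at vastly greater scale with far more context, but the mechanism of repeating observed patterns is the point of the analogy.

```python
import random
from collections import defaultdict

# Invented toy corpus; real training data is web-scale text.
corpus = ("the robot is awesome . the sunset is awesome . "
          "the robot sees the sunset").split()

# Count word -> next-word transitions observed in the data.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def parrot(seed, length=8):
    """Continue `seed` by sampling each next word in proportion to how
    often it followed the previous word in training. No understanding
    is involved: the system only repeats observed patterns."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))  # e.g. "the robot is awesome . the sunset is awesome"
```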
6. Real Harms vs. Hyped Hype: What Should We Prioritize?
[46:34–53:24]
- AI Ethicists’ Main Critique: Fears of an AI apocalypse are a distraction from harmful impacts happening now—algorithmic bias, policing, hiring, and more.
- Demographic Blind Spots: The AI “safety” camp, mostly white and male, often underappreciates everyday realities others face.
- Profit Motive: Many tech leaders profit from hyping both the promise and peril of AI, even as they call for regulatory pauses.
- Quote:
"It benefits them to… market the technology as super powerful… and distract the policymakers from the harms that they are doing."
—Dr. Emily Bender [50:12]
Memorable Quotes & Moments (with Timestamps)
- "Everything is awesome" in tragedy:
  "So it sees this horrible, perhaps mortally wounding explosion and decides it's awesome."
  —Dr. Margaret Mitchell [04:17]
- The problem of "whiteface" in coding:
  "Before I even put the white mask all the way over my dark skinned face, the box saying that a face was detected appeared. I'm thinking, oh my goodness… I'm literally coding in whiteface."
  —Dr. Joy Buolamwini [13:16]
- Pale male datasets:
  "These are what I started calling pale male datasets because the pale male datasets were destined to fail the rest of society."
  —Dr. Joy Buolamwini [18:53]
- Wrongful AI-driven arrest:
  "I got arrested for something I had nothing to do with… everybody with a driver's license or state ID is essentially in a photo lineup."
  —Robert Williams [20:16]
- On the beef between the ethics and safety camps:
  "Yeah, there is beef between the AI Ethics camp and the AI safety camp."
  —Sigal Samuel [35:59]
- Emily Bender's octopus parable:
  "So the octopus thought experiment goes like this… It receives the dots and dashes from one of the English speakers and it sends dots and dashes back. But of course, it has no idea what the English words are… that those dots and dashes correspond to."
  —Dr. Emily Bender [41:07]
- The parrot analogy:
  "It's easy to sort of anthropomorphize these systems, but it's useful to recognize that these are probabilistic systems that repeat back what they have been exposed to, and then they parrot them back out again."
  —Dr. Margaret Mitchell [45:15]
- Critique of AI apocalypse hype:
  "It makes no sense at all. And on top of that, it's an enormous distraction from the actual harms that are already being done in the name of AI."
  —Dr. Emily Bender [46:39]
Timeline of Major Segments
- [01:18–06:08] – Dr. Margaret Mitchell’s origin story and emergence of “everything is awesome” problem
- [09:08–13:56] – Dr. Joy Buolamwini’s journey: discovering facial recognition bias
- [19:18–20:41] – Gender Shades, wrongful arrests, and tech company responses
- [21:23–23:44] – Rise of the AI ethics field & the "clarion call" after the firing of top researchers
- [31:18–36:59] – Sigal Samuel on AI safety vs. AI ethics: beliefs, demographics, and religious undertones
- [37:03–40:07] – Elon Musk’s “pause” letter, the industry’s call for regulation, and ethicists’ responses
- [41:07–45:40] – Emily Bender’s octopus and parrot analogies
- [46:34–53:24] – Real-world harms, demographic divides, industry control, and the profit/hype loop
Concluding Thoughts
The episode exposes growing polarization among AI researchers, pitting those raising alarms about existential risks from superintelligent AI ("AI safety") against those focused on immediate, societal harms ("AI ethics"). Through personal stories, striking analogies (like parrots and octopuses), and industry intrigue, it pushes listeners to question whose fears and priorities are shaping AI's future, and who truly benefits when the loudest voices talk of doomsday or salvation.
For further reading, check Dr. Joy Buolamwini's book Unmasking AI or visit vox.com/goodrobot for more on AI and the "Good Robot" series.
