Radiolab – "Driverless Dilemma"
Original Air Date: September 26, 2017
Hosts: Jad Abumrad, Robert Krulwich (WNYC Studios)
Special Guests: Nick Bilton (Vanity Fair), Joshua Greene (Harvard, formerly Princeton), Raj Rajkumar (Carnegie Mellon), Michael Taylor (Car and Driver), Bill Ford Jr. (Ford)
Episode Overview
The "Driverless Dilemma" episode explores how the classic philosophical trolley problem is no longer just an abstract question, but a pressing real-world issue with the rise of self-driving cars. Hosts Jad Abumrad and Robert Krulwich revisit their earlier work on moral decision-making in the brain, then connect it to the ethical challenges faced by programmers and society as autonomous vehicles become a reality. The episode navigates the collision between ancient instincts, moral philosophy, neuroscience, and cutting-edge technology, ultimately raising urgent questions: Who decides how machines make life-and-death decisions, and how should we as a society grapple with this new power?
Key Discussion Points & Insights
1. Automation Anxiety & the Coming Disruption
[02:05] Nick Bilton (Vanity Fair) voices deep concern about automation and AI, especially driverless cars, predicting massive impacts on employment and society:
- Not just drivers, but whole industries (insurance, rest stops, hotels) are at risk.
- Envisions car interiors evolving: why five seats and a wheel if nobody drives?
- "Pizza delivery drivers are gonna be replaced by robots that will actually cook your pizza on the way to your house." – Nick Bilton [03:45]
- Raises the specter of society-wide effects, urging listeners to confront the full scale of change.
2. The Trolley Problem: Moral Thought Experiment
[05:35] Jad and Robert re-enact the classic trolley dilemma:
- Scenario A: Pull a lever to divert a train, killing one worker to save five.
- Scenario B: Must push a large man off a bridge to stop the train, saving five but killing him.
- Most people would pull the lever (indirect harm), but refuse to physically push the man (direct harm).
- "Nine out of 10 people will say, yes, yes, yes... to the lever question... But nine out of ten people will say, no, no, never... to the pushing." – Robert Krulwich [07:53]
3. Neuroscience of Morality – Inside the Human Brain
Meet Joshua Greene (then at Princeton, later Harvard): a neuroscientist and philosopher who studies the biological roots of morality.
- Greene used fMRI scans to examine how people make decisions in trolley-like dilemmas.
- Logical brain circuits activate when contemplating the lever scenario (impersonal harm).
- Emotional brain circuits light up when contemplating direct harm (pushing the man).
- "You've got one part of the brain that says, 'huh, five lives versus one life...'" – Joshua Greene [13:02]
- Greene posits "inner warring tribes" in our brains: logical vs. emotional responses battling for dominance.
4. Deeper Moral Quandaries – The Crying Baby Dilemma
[17:01] A more fraught scenario: Should you smother your own crying baby to save fellow villagers from enemy soldiers?
- Listeners and test subjects deeply divided; the conflict between evolutionary, emotional instincts and rational calculation is heightened.
- "Inside, the brain was literally divided. Do the calculation. Don't kill the baby." – Jad Abumrad [19:39]
- New brain regions (above the eyebrows) become active during close moral contests, suggesting uniquely human faculties for complex moral reasoning.
5. Real-World Trolley Problems: Programming Self-Driving Cars
[26:43] With autonomous vehicles, these thought experiments become real engineering and policy challenges.
- Example: The "should the car kill the pedestrian(s) or its passenger(s)" dilemma.
- Surveys show that people favor sacrificing one to save many in theory, but would refuse to buy cars programmed to sacrifice them.
Memorable Segment:
- "Would you want to drive in a car that would potentially sacrifice you...? They say, 'No... I wouldn't buy it.'" – Joshua Greene [28:26]
- Mercedes executive Christoph von Hugo stirs controversy by saying their cars would prioritize the safety of those inside ("If you know you can save one person, at least save that one." – von Hugo [30:29]).
6. Expanding the Problem: Sensors, Data, and Discrimination
[32:15] Raj Rajkumar warns of future dilemmas as car sensors gain more data:
- Cars might distinguish between "a little boy, a little girl, a tall person, a small person, black person, white person" [32:53].
- Raises disturbing possibilities: Can/should cars make choices based on age, health, class, or other attributes?
7. Collective Decision-Making: Who Sets the Rules?
- How do we create standards for these life-and-death decisions?
- Bill Ford Jr.: "Could you imagine if Toyota had one algorithm, General Motors had another? ...We need a national discussion on ethics." [34:58]
- Germany has instituted a code of ethics: autonomous vehicles may not discriminate between humans based on race, gender, or age [35:17].
- Broader calls for international standards—a kind of vehicular Geneva Convention.
8. Emotional Weight vs. Statistical Lives Saved
The episode reflects on the profound discomfort of delegating "premeditated" life-and-death decisions to algorithms—even if lives are saved overall:
- "There's something dark about a premeditated, expected death. And I don't know what you do about that..." – Robert Krulwich [38:06]
- Human nature may still rebel emotionally, even against rational calculation in policy and engineering.
Notable Quotes & Memorable Moments
- "The thing that I've been pretty obsessed with lately... is automation and artificial intelligence and driverless cars because it's going to have a larger effect on society than any technology that I think has ever been created in the history of mankind." — Nick Bilton [02:05]
- "If you ask people, why is it okay to murder a man with a lever and not okay to do it with your hands? People don't really know." — Robert Krulwich [07:55]
- "The inner chimp is your unfortunate way of describing an act of deep goodness." — Jad Abumrad [15:23]
- "We do not think that any programmer should be given this major burden of deciding who survives and who gets killed. I think these are very fundamental, deep issues that society has to decide at large." — Raj Rajkumar [33:54]
- "Germany... has tackled this head on... self-driving cars are forbidden to discriminate between humans in almost any way." — Jad Abumrad [35:17]
- "There's something dark about a premeditated expected death... One is operatic and seems like the forces of destiny, and the other seems mechanical and pre-thought through." — Robert Krulwich [38:05]
Important Timestamps
- [02:05] Automation and societal impact (Nick Bilton)
- [05:35–08:00] Trolley problem scenarios and public reactions
- [09:12–15:00] Joshua Greene on brain imaging and moral reasoning
- [17:01–19:39] The "Crying Baby" dilemma and its emotional resonance
- [26:43] The trolley problem becomes real with self-driving cars
- [28:26] People reject being sacrificed by their own car (Joshua Greene)
- [30:29] Mercedes executive’s controversial remarks at Paris Motor Show
- [32:53–33:54] The risk of algorithmic discrimination in future vehicles
- [34:49–35:17] Industry calls for unified standards; Germany’s code of ethics
- [38:05–38:58] Emotional and philosophical challenge of algorithmic death
Conclusion
"Driverless Dilemma" expertly weaves philosophy, neuroscience, and cutting-edge technology, showing that as self-driving cars become real, so does the ancient trolley problem. The episode raises profound questions: How do we program morality into machines? Can society agree on a code? Will our emotional instincts ever align with cool utilitarian calculation? Radiolab leaves listeners with the unsettling sense that our oldest ethical conundrums are about to crash into our everyday streets—and the road ahead is unmapped.
