Slate Money Podcast – Weapons of Math Destruction Edition
Date: September 10, 2016
Host: Felix Salmon
Guests: Cathy O’Neil, Jordan Weissman
Main Topic: The impact and dangers of algorithms as detailed in Cathy O’Neil’s book Weapons of Math Destruction
Episode Overview
This special edition of Slate Money is dedicated to Cathy O’Neil’s then newly released book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. The hosts break down what makes an algorithm destructive, discuss real-world examples, and debate possible solutions to the dangers posed by opaque, unaccountable models. Throughout, O'Neil makes the case that too many algorithms—especially those that are secret and high-impact—are exacerbating inequality and reducing fairness, rather than advancing progress.
Key Discussion Points & Insights
1. Defining Algorithms and “Weapons of Math Destruction”
[04:20–08:00]
- Algorithm Basics:
- Algorithms need two things: Data (training data to find patterns) and a definition of success (“objective function”).
- Cathy O’Neil: “What the algorithm does is it works, looks for patterns in the history that you’re giving it in the data for when this definition of success actually occurs.” [04:35]
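These two ingredients can be sketched in a few lines of Python. This is a toy illustration with made-up data, not anything from the episode: the "algorithm" searches history for the pattern that best satisfies its definition of success.

```python
# Toy illustration of O'Neil's two ingredients: (1) historical
# training data and (2) a definition of success (objective function).
# All names and numbers here are invented for illustration.

# Historical data: (hours_studied, passed_exam) pairs.
training_data = [(1, False), (2, False), (3, False),
                 (4, True), (5, True), (6, True)]

def success(prediction, outcome):
    """The definition of success: the prediction matched what happened."""
    return prediction == outcome

def fit_cutoff(data):
    """Look for the pattern in history that best satisfies 'success':
    here, the study-hours cutoff that predicts passing most accurately."""
    best_cutoff, best_score = None, -1
    for cutoff in range(0, 8):
        score = sum(success(hours >= cutoff, passed) for hours, passed in data)
        if score > best_score:
            best_cutoff, best_score = cutoff, score
    return best_cutoff

cutoff = fit_cutoff(training_data)   # the learned pattern: 4+ hours -> pass
```

Everything a model like this "knows" comes from those two choices, which is why O'Neil keeps pressing on who picks the data and who defines success.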
- What Makes a WMD? O’Neil outlines a three-part test:
- Widespread and High Impact: Used to make significant decisions affecting many lives.
- Secrecy & Unaccountability: The scoring process is opaque—people don’t know how or even whether they’re being scored, and there’s no appeals process.
- Destructive Feedback Loops: The model not only harms individuals but worsens the overarching problem it aims to fix.
- Cathy O’Neil: “…[algorithms] are engendering these sort of negative feedback loops that actually undermine the original goals, which are often good goals, good intentions. But…they’re actually making the problems worse.” [07:52]
2. Example of a “Bad Algorithm”: Teacher Value-Added Models
[08:18–17:00]
- Flawed Models in Education:
- Teacher “value-added” models measure improvement in student test scores year over year and attribute this difference to the teacher.
- Each year’s score is itself a noisy statistical estimate, and the model takes the difference of two such estimates, so the errors compound.
- Jordan Weissman: “…when you've got two numbers and looking at the difference, each of those numbers has some randomness and variability and uncertainty… it just compounds all of that.” [10:26]
- Cathy O’Neil: “...the idea that we’re going to sort of hold each teacher accountable for this sort of average error term for their class is ridiculous. Especially because you typically only have about 20 kids in the class.” [10:53]
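The compounding-error point can be checked with a quick simulation. This is a sketch with invented numbers, assuming each student's score is the true class level plus Gaussian noise; with only ~20 students per class, the year-over-year difference is noisier than either year alone by a factor of about sqrt(2).

```python
import random
import statistics

random.seed(0)

# Each year's class average is a noisy estimate: ~20 students, each
# score = true level + noise. A "value-added" number is the DIFFERENCE
# of two such noisy averages, so the variances of the two years add.

def class_average(true_level, n_students=20, noise_sd=10.0):
    return statistics.mean(true_level + random.gauss(0, noise_sd)
                           for _ in range(n_students))

TRUE_LEVEL = 70  # a teacher with zero true impact: same level both years

single_year_error = [class_average(TRUE_LEVEL) - TRUE_LEVEL
                     for _ in range(5000)]
value_added = [class_average(TRUE_LEVEL) - class_average(TRUE_LEVEL)
               for _ in range(5000)]

sd_single = statistics.stdev(single_year_error)  # ~ 10/sqrt(20) ~ 2.2
sd_diff = statistics.stdev(value_added)          # ~ sqrt(2) times larger
```

Even for a teacher with zero true effect, the simulated value-added score swings several points in either direction from pure sampling noise, which is the "unfair lottery" the hosts describe.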
- Negative Side Effects:
- Incentivizes cheating and harms innocent teachers (Washington D.C. example under Michelle Rhee).
- Good teachers flee systems with these capricious models—leads to poorer education for disadvantaged schools.
- Felix Salmon: “...all you're doing is you're feeling that you're living in this kind of incredibly unfair lottery system.” [14:43]
- Lack of Self-Correction:
- Unlike baseball statistics, these models don't learn or improve; no feedback loop.
- Jordan Weissman: “[Sports teams] can look and see, okay, there are errors. Let's reincorporate that and improve. [...] You talk about how that doesn't exist in the value-added model.” [15:32]
3. Algorithms with Harmful Side Effects: Recidivism Risk Models
[17:00–26:12]
- Recidivism Algorithms in Criminal Justice:
- Used by judges in over half the U.S. states for bail, parole, and sentencing decisions.
- These models often reinforce racial and economic bias because historical data reflects uneven, biased policing.
- Cathy O’Neil: “...proxies for race and class…there’s just like the data itself going into these algorithms…if we think...it is uneven, unfair, biased and racist, then these algorithms are just going to further those biases.” [23:27]
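The proxy problem O’Neil describes can be made concrete with a small sketch. All numbers here are invented; this illustrates the mechanism, not any model from the book: when the training data records arrests rather than offenses, uneven policing flows straight into the scores.

```python
# Invented numbers illustrating how a model trained on arrest records
# inherits bias from uneven policing: both groups offend at the SAME
# rate, but group B is policed twice as heavily, so the recorded data
# makes B look twice as "risky".

true_offense_rate = 0.10                      # identical for both groups
catch_probability = {"A": 0.20, "B": 0.40}    # B is over-policed

# What history actually records (arrests) -- the proxy the model sees:
recorded_rate = {group: true_offense_rate * p
                 for group, p in catch_probability.items()}

# A model fit to these records scores group B as twice as risky,
# even though the underlying behavior is identical.
risk_ratio = recorded_rate["B"] / recorded_rate["A"]
```

No race variable appears anywhere in the model, yet the output ratio is 2:1, which is why O’Neil stresses that excluding race as an input does not prevent racially biased outputs.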
- Misuse vs. Potential for Good:
- The same models could be used to offer help/interventions instead of harsher sentences.
- O’Neil: “If they were used as ways of finding an intervention that actually reduced recidivism…then it wouldn’t be a weapon of math destruction…” [22:06]
- Ethical Tradeoff: Efficiency vs. Fairness
- Jordan Weissman: “Are we willing to sacrifice some efficiency for fairness?...this is not something that’s foreign to American thinking.” [25:26]
- Opacity and Accountability:
- Data and algorithms are proprietary—researchers and the public can’t audit them.
- Felix Salmon: “...the proprietary nature of these algorithms…People look at them and say, why is this secret? And there's no good answer.” [27:18]
4. Solutions: Regulation, Professional Ethics, and the European Model
[27:46–34:42]
- Professional Ethics:
- Could data scientists adopt a version of the Hippocratic Oath? O’Neil argues self-regulation is not enough; companies face profit motives that conflict with fairness.
- Regulation and Oversight:
- The European approach: strict rules on data reuse/profiling (GDPR-style).
- Jordan Weissman: “...the regulatory state saying you cannot take people's data and then resell it without their permission...That seemed like the most elegant solution.” [29:45]
- O’Neil agrees it’s powerful but politically challenging and notes it wouldn’t fix all issues (e.g., teacher and sentencing algorithms).
- Call for a New Regulator:
- O’Neil proposes a regulatory body to oversee high-impact, high-secrecy algorithms — especially those affecting large populations and where harm is possible.
- Cathy O'Neil: “...when it becomes widespread and high impact, then it can't be secret and potentially destructive…there has to be at least a regulatory body that can look into it to make sure it’s not discriminated.” [31:07]
- Due Process Concerns:
- Could denial of algorithmic transparency itself violate constitutional rights (due process)?
- Jordan Weissman: “If an algorithm is opaque...you're not getting due process of law.” [31:44]
- Current State of U.S. Protection:
- The Consumer Financial Protection Bureau (CFPB) is attempting to analyze disparate impacts, especially in loans.
- Felix Salmon: “[CFPB is] doing quite a good job at trying to find what's known as disparate impact...where you have an algorithm which has a disproportionate impact on black people...” [34:02]
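One established way regulators operationalize disparate impact is the "four-fifths rule" from U.S. employment-discrimination guidelines (the episode does not name this rule; it is offered here as a concrete yardstick). A minimal sketch with hypothetical approval counts:

```python
# Sketch of a disparate-impact screen using the "four-fifths rule":
# if the protected group's selection rate is below 80% of the
# reference group's rate, the outcome is conventionally flagged.
# The loan-approval counts below are hypothetical.

def selection_rate(approved, applied):
    return approved / applied

def four_fifths_flag(protected_rate, reference_rate, threshold=0.8):
    """Return (ratio, flagged): flagged when the protected group's
    rate falls below `threshold` times the reference group's rate."""
    ratio = protected_rate / reference_rate
    return ratio, ratio < threshold

rate_black = selection_rate(approved=30, applied=100)   # 0.30
rate_white = selection_rate(approved=60, applied=100)   # 0.60

ratio, flagged = four_fifths_flag(rate_black, rate_white)   # 0.5 -> flagged
```

A screen like this needs only the algorithm's outputs by group, not its internals, which is what lets an agency like the CFPB test for disparate impact even when the model itself is proprietary.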
5. Broader Societal Impact and the Digital Underclass
[34:12–35:29]
- Algorithms exacerbate inequality:
- They often hurt the poor and people of color, while the affluent rarely encounter their worst effects.
- Cathy O’Neil: “...well-off people do not even have to look at these WMDs.” [35:13]
- The commodification of user data and profiling targets vulnerable groups for predatory services.
Notable Quotes
- Cathy O’Neil: “So often you just hear people sort of vaguely talking about, you know, ills that could befall humanity, and I’m like, what are we worried about? Can we define it? Can we triage, can we carve it out?” [03:31]
- Felix Salmon: “The broad effect is that the good teachers wanting to get out from under this sword all wind up going to the good school systems where they don’t need to worry.” [14:43]
- Jordan Weissman: “Are we willing to sacrifice some efficiency for fairness? ...Maybe the most important policy question.” [25:26]
- Cathy O’Neil: “What I'd object to is the way these models are used to destroy people's lives.” [22:06]
- Felix Salmon: “...the book, the main worry...is that while rich and affluent people might actually be better off from a lot of these algorithms, it's...the poor, the people of color who really get hurt and who are largely voiceless and have no ability to fight back against these models which are punishing them.” [34:12]
- Cathy O’Neil: “It will never happen that all algorithms have suddenly become transparent. ...But what we may be able to say is, like, when it becomes widespread and high impact, then it can’t be secret...“ [31:07]
Numbers Round (Memorable Moments)
[35:31–end]
- $185 million: The penalty Wells Fargo paid in its fake-accounts scandal, an example of how measurement and incentive systems can go awry. [35:43–37:47]
- $182 million: Facebook’s annual Norwegian ad revenue, compared to the controversy over Facebook’s content policies and international influence. [38:46–40:46]
- 16: The percentage by which Airbnb guests with distinctively black-sounding names are more likely to be rejected by hosts, a stark example of algorithm-enabled, or at least amplified, discrimination. [41:01–41:29]
- 99: The book’s peak ranking on Amazon’s overall bestseller chart—testament to the impact and timeliness of these issues. [42:23]
Episode Flow & Tone
The episode is casual, nerdy, and often humorous, but remains focused on unpacking complex ethical and technical dilemmas. Felix Salmon keeps the flow lively, O’Neil brings technical rigor with real-world urgency, and Weissman provides relatable audience reactions and policy perspective. The group stresses empathy for those adversely affected, and a sense of urgency to address these silent but pervasive forces.
Timestamps for Key Segments
- 04:20–08:00: Definitions—What is an algorithm? What makes one a WMD?
- 08:18–17:00: Teacher value-added models—bad algorithm deep dive.
- 17:00–26:12: Recidivism risk models—harmful effects of algorithms that “work.”
- 27:46–34:42: Regulatory solutions; ethical data science; European approaches.
- 35:31–end: Numbers round—recent news stories as seen through the WMD lens.
Conclusion
The episode makes a compelling case for treating algorithmic decision-making as a major social and civil rights issue. In O’Neil’s words and analysis, the hosts show how bad or misapplied data science can reinforce and deepen social inequities, while outlining what it might mean for data science to truly serve the public good. The conversation ties together technical, ethical, and political threads—offering listeners not only a warning, but a framework for policy and resistance.
