Podcast Summary: On the Media – "Biased Algorithms, Biased World"
Introduction
In the September 1, 2021, episode of On the Media titled "Biased Algorithms, Biased World," hosts Brooke Gladstone and Micah Loewinger delve into the pervasive influence of algorithms in modern society. They explore how these mathematical constructs, often perceived as neutral tools, can perpetuate and exacerbate social inequalities. Central to the discussion is Cathy O'Neil, a mathematician, data scientist, and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Meet Cathy O'Neil
Cathy O'Neil brings a wealth of expertise to the conversation. As the founder of the consulting firm ORCAA, which audits algorithms for biases related to race, gender, and economic status, she provides critical insights into the hidden dangers of unchecked algorithmic applications. O'Neil's background as a former Wall Street quant and her pivot to investigating the societal impacts of algorithms after the 2008 financial meltdown underscore her commitment to uncovering the flaws in data-driven decision-making.
Understanding Algorithms
Brooke Gladstone initiates the discussion by asking O'Neil to define what an algorithm is. O'Neil clarifies:
“It's just a set of directions. Long division that you learn in fourth grade is an algorithm. I use the word algorithm. It's short for predictive algorithm, and that's a way of predicting the future based on the past.” (02:18)
She emphasizes that algorithms rely heavily on "training data," which comprises historical information used to forecast outcomes such as loan approvals, hiring decisions, and even criminal sentencing.
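To make the training-data idea concrete, here is a minimal sketch (not from the episode) of a predictive algorithm in the sense O'Neil describes: a model fit to hypothetical historical loan decisions and then asked to forecast a new one. The features, numbers, and library choice are illustrative assumptions, not anything discussed on air.

```python
# Minimal sketch of "predicting the future based on the past":
# fit a model to hypothetical historical loan decisions, then score
# a new applicant. All data here is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Training data: [income_in_thousands, years_at_current_job, past_defaults]
X_train = [
    [35, 1, 1],
    [80, 6, 0],
    [52, 3, 0],
    [28, 0, 2],
    [95, 10, 0],
    [41, 2, 1],
]
# Historical outcomes: 1 = approved, 0 = denied. Whatever biases shaped
# these past decisions are exactly what the model will learn to repeat.
y_train = [0, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# The forecast: probability that a new applicant "looks like" past approvals.
new_applicant = [[60, 4, 0]]
print("approval probability:", model.predict_proba(new_applicant)[0][1])
```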
Weapons of Math Destruction (WMDs)
O'Neil introduces the concept of Weapons of Math Destruction (WMDs), distinguishing them from benign algorithms. She outlines three defining characteristics of WMDs:
- Opacity: Their inner workings are often hidden from public view.
- Scale: They affect a large number of people.
- Damage: They have destructive impacts, particularly on marginalized groups.
“Most algorithms are totally benign... Algorithms that are important, that are destructive and that are secret. That's the weapon of mass destruction.” (02:54)
Examples of WMDs
Predictive Policing:
O'Neil discusses how predictive policing algorithms use arrest data as a proxy for actual criminal activity. This approach is flawed because it disproportionately targets neighborhoods with a heavier police presence, often overburdening communities of color.
“When we use arrests as a proxy for crime, we are really overburdening those people who are already profiled by the police.” (04:19)
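That proxy problem can be sketched as a feedback loop. In the hypothetical simulation below, two neighborhoods have identical underlying crime rates; the only difference is the historical arrest record the "predictive" model starts from, yet the gap in recorded arrests widens every year. The numbers and allocation rule are invented purely to illustrate the mechanism.

```python
# Hypothetical simulation of arrests-as-proxy feedback: identical actual
# crime rates, unequal historical arrest records, patrols sent where
# past arrests were highest.
true_crime_rate = {"A": 0.05, "B": 0.05}   # same underlying behavior
arrests = {"A": 120, "B": 40}              # unequal policing history

for year in range(1, 6):
    # The "prediction": send most patrols wherever arrests were highest.
    hot_spot = max(arrests, key=arrests.get)
    patrols = {hood: (80 if hood == hot_spot else 20) for hood in arrests}
    for hood in arrests:
        # More patrols produce more *recorded* arrests, not more crime,
        # so next year's data is even more skewed toward neighborhood A.
        arrests[hood] += patrols[hood] * true_crime_rate[hood] * 100
    print(f"year {year}: A={arrests['A']:.0f} arrests, B={arrests['B']:.0f} arrests")
```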
Hiring Algorithms:
In the realm of employment, algorithms assess candidates based on historical data that may embed biases related to promotions, raises, and employee retention.
“Implicit bias that we know exists... gets baked into the algorithm that I just wrote.” (07:40)
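A sketch of how that baking-in can happen (hypothetical data, not any real employer's model): if a proxy feature such as after-hours networking correlates with who was promoted under the old process, a model trained on those labels will score two equally skilled candidates differently, even though it never sees a protected attribute.

```python
# Illustrative only: a hiring model trained on hypothetical, historically
# biased promotion decisions learns a proxy feature, not merit.
from sklearn.linear_model import LogisticRegression

# Features: [skills_test_score, after_hours_networking_hours_per_month]
# In this invented history, networking hours track who had free evenings,
# not who did better work.
X_train = [[88, 10], [85, 9], [90, 8],   # promoted under the old process
           [91, 1], [87, 0], [89, 2]]    # passed over under the old process
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two candidates with the same test score; only the proxy feature differs.
free_evenings = [[90, 9]]
caregiver = [[90, 1]]
print("score, free evenings:", model.predict_proba(free_evenings)[0][1])
print("score, caregiver:    ", model.predict_proba(caregiver)[0][1])
```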
Facial Recognition Technology:
O'Neil highlights the technical shortcomings of facial recognition systems, particularly their lower accuracy rates for women and people of color due to biased training datasets.
“They weren't thinking carefully enough before deploying it to the world to say, hey, does this work as well on black faces as white faces?” (09:02)
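One concrete way to ask O'Neil's "does this work as well?" question, sketched with invented numbers: report accuracy per demographic group instead of a single overall figure, so a disparity hidden in the average becomes visible.

```python
# Illustrative audit sketch: per-group accuracy instead of one overall
# number. The test results below are invented.
from collections import defaultdict

# (group, was_the_match_correct) for a hypothetical face-matching system.
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", True),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

print(f"overall accuracy: {sum(correct.values()) / len(results):.0%}")
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%}")
```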
Impact on Society
The discussion underscores how biased algorithms can entrench systemic inequalities. By relying on flawed proxies and opaque processes, these algorithms often disadvantage already marginalized groups, reinforcing existing societal disparities.
Case Studies
Clopenings in Labor Scheduling:
O'Neil introduces the concept of "clopenings," shifts in which the same worker closes a store late at night and then opens it early the next morning. She describes how scheduling algorithms generate these erratic, unpredictable schedules and can be tuned to keep workers below the hours needed to qualify for benefits, at the expense of their well-being and work-life balance.
“It's a small benefit for the employer... To make sure that none of your employees get enough hours a week to qualify for benefits... Their life [is wrecked].” (11:13)
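A minimal sketch of the employer-side optimization she describes, with an invented benefits threshold and staffing demand rather than any real company's rules: the scheduler covers every demanded hour while keeping each worker just under the point at which benefits would apply.

```python
# Illustrative only: a scheduler optimized for the employer's objective.
# It covers the week's demanded hours while capping every worker just
# below a hypothetical benefits-eligibility threshold.
BENEFITS_THRESHOLD = 30            # weekly hours at which benefits kick in
CAP = BENEFITS_THRESHOLD - 1       # the algorithm never lets anyone reach it

demand_hours = 140                 # hours of coverage needed this week
workers = ["w1", "w2", "w3", "w4", "w5", "w6"]
schedule = {}

remaining = demand_hours
for worker in workers:
    hours = min(CAP, remaining)    # greedy fill, never past the cap
    schedule[worker] = hours
    remaining -= hours

print(schedule)
# The same 140 hours could employ 3.5 full-time workers with benefits;
# instead five people get fragmented part-time weeks and no one qualifies.
```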
Starbucks Scheduling Algorithm:
Despite promises to eliminate clopenings, Starbucks' scheduling practices continue to prioritize efficiency over fairness, demonstrating the difficulties in altering algorithm-driven systems even when ethical concerns are raised.
“The incentives to managers to be efficient were so irresistible that they never actually made any changes.” (12:54)
U.S. News & World Report College Rankings:
O'Neil critiques how college ranking algorithms incentivize institutions to "game" the system, focusing on metrics that may not align with educational quality or affordability. This creates a feedback loop in which colleges adjust their practices to meet ranking criteria rather than actual educational needs.
“Colleges know that if they look exclusive, then they look better for the ranking. They just get a bunch of kids they know will never make it to apply.” (14:28)
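The gaming incentive can be reduced to a toy calculation. The weights below are invented and are not the U.S. News methodology; they only show that once "exclusivity" feeds the score, soliciting applications from students who will be rejected raises a college's ranking without changing anything about the education it offers.

```python
# Toy ranking formula with invented weights (NOT the real methodology):
# a lower acceptance rate ("exclusivity") raises the score.
def ranking_score(graduation_rate: float, acceptance_rate: float) -> float:
    return 0.6 * graduation_rate + 0.4 * (1 - acceptance_rate)

admitted = 2_000
applications_before = 8_000
applications_after = 16_000   # same admits, twice the solicited applications

before = ranking_score(0.85, admitted / applications_before)
after = ranking_score(0.85, admitted / applications_after)
print(f"score before: {before:.3f}  score after: {after:.3f}")
# Nothing about teaching, cost, or outcomes changed; only the denominator did.
```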
The Problem with Training Data and Proxies
A recurring theme in the conversation is the reliance on historical data and proxies that may inherently contain biases. Whether predicting criminal behavior based on arrest records or hiring based on past employee success metrics, the choice of data inputs can perpetuate existing inequalities.
“Algorithms that predict behavior... Not on what you do or what you've done... but who you are.” (08:25)
Lack of Standards and Accountability
O'Neil emphasizes the absence of standardized regulations governing algorithmic transparency and fairness. Proprietary algorithms remain black boxes, leaving users and those affected by them without recourse or understanding.
“Nobody... there is no standard. A large company says, oh, we don't want to build these algorithms, we want to rent them... We don't have to explain it at all.” (10:25)
Efficiency vs. Fairness
The episode raises a critical philosophical question: Should society prioritize efficiency over fairness? O'Neil argues that current algorithmic applications are skewed towards profitability, often at the expense of human well-being and equity.
“Do we have any answer to capitalism?... Algorithms... about profitability, not about happiness.” (13:16)
Public Perception and Control
O'Neil advocates for increased public skepticism and agency regarding algorithmic decisions. She urges individuals to question and understand the algorithms that impact their lives, highlighting that societal control over these tools is paramount.
“I want us to learn to be skeptical... The power that we give to the algorithms is the thing we have the most control over.” (13:40)
Key Takeaways
- Algorithms Are Not Neutral: They reflect the biases present in their training data and the intentions of their creators.
- WMDs Pose Significant Risks: Opacity, scale, and destructive impacts make certain algorithms particularly harmful.
- Need for Transparency and Regulation: Establishing standards for algorithmic fairness and accountability is crucial.
- Redefining Success: Society must critically assess and redefine what constitutes success, ensuring it aligns with human values rather than corporate profitability.
- Empowerment Through Skepticism: Encouraging public awareness and questioning of algorithmic decisions can help mitigate their negative impacts.
Conclusion
The "Biased Algorithms, Biased World" episode of On the Media provides a compelling examination of how algorithms, while seemingly impartial, can perpetuate and even amplify social injustices. Cathy O'Neill's insights urge listeners to critically evaluate the role of data-driven decision-making in society and advocate for greater transparency and fairness in algorithmic applications.
