Do You See What I See? Building AI for All of Us

Assembly Required with Stacey Abrams

Published: Thu Dec 26 2024

In the face of unbridled AI development, and with incoming President Trump's close advisors counting themselves among AI's biggest investors, it's more important than ever to raise the alarm about areas of concern. Stacey Abrams speaks to Joy Buolamwini, the AI researcher and artist who brought national attention to the way bias is coded into artificial intelligence, particularly in facial recognition technology, a phenomenon Buolamwini coined "the coded gaze." They discuss what we should know about the pitfalls and potential of AI today, and Buolamwini invites listeners to join the ongoing mission of the Algorithmic Justice League: raising awareness of AI's impact and showing how we can all contribute to a more equitable use of the technology.


Assembly Required with Stacey Abrams: Episode Summary

Episode Title: Do You See What I See? Building AI for All of Us
Release Date: December 26, 2024
Host: Stacey Abrams
Guest: Dr. Joy Buolamwini
Produced by: Crooked Media


Introduction: The AI Landscape and Its Challenges

In this episode of Assembly Required, host Stacey Abrams delves into the burgeoning field of artificial intelligence (AI), emphasizing both its transformative potential and the pressing need for comprehensive regulation. Abrams highlights a critical concern: despite AI's rapid advancement, meaningful governmental oversight remains elusive. She underscores the gravity of the situation by referencing a 2023 letter signed by 350 industry leaders, including OpenAI's CEO, which warns that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" (00:39).

Abrams expresses apprehension about the incoming Trump administration's stance on AI, pointing out potential conflicts of interest among key figures like investor David Sacks, who profits from unregulated AI development. This sets the stage for a deeper exploration of AI's societal implications.

Introducing Dr. Joy Buolamwini: The Voice Against AI Bias

Abrams introduces Dr. Joy Buolamwini, a renowned poet, researcher, and computer scientist, celebrated for her groundbreaking work on AI bias. Recalling Dr. Buolamwini’s pivotal 2016 TED talk, Abrams emphasizes how her research exposed fundamental flaws in facial recognition technology. Buolamwini coined the term “the coded gaze” to describe how AI systems reflect the biases of their creators, leading to racial and gender discrimination (02:52).

Data is Destiny: Understanding AI's Foundations

A central theme of the episode is the concept of “data is destiny,” articulated by Dr. Buolamwini (05:16). She explains that AI systems learn from existing data sets, which often contain historical biases. "If we have data that isn't reflective of the world, or if we have data that's actually entrenching inequalities, those are the patterns AI systems are destined to learn and then to reproduce and then to amplify" (05:53).
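To make that dynamic concrete, here is a minimal, hypothetical sketch (standard-library Python; the group names and numbers are invented for illustration, not drawn from the episode): a model that does nothing more than learn frequencies from skewed historical records will hand that skew right back when scoring equally qualified people.

```python
# A hypothetical illustration of "data is destiny": a model that
# simply learns historical frequencies reproduces whatever skew
# is in its training data. All numbers below are invented.
from collections import Counter

# Synthetic "historical" hiring records in which positive outcomes
# over-represent one group.
history = (
    [("group_a", "hired")] * 80 + [("group_a", "rejected")] * 20
    + [("group_b", "hired")] * 20 + [("group_b", "rejected")] * 80
)

# "Training": estimate P(hired | group) from the biased records.
counts = Counter(history)

def p_hired(group):
    hired = counts[(group, "hired")]
    total = hired + counts[(group, "rejected")]
    return hired / total

# Equally qualified applicants from each group receive unequal
# scores, because the pattern in the data is the pattern learned.
for group in ("group_a", "group_b"):
    print(group, round(p_hired(group), 2))  # group_a 0.8, group_b 0.2
```

Nothing in the code is malicious; the disparity arrives entirely through the training data, which is precisely the point of "data is destiny."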

Dr. Buolamwini recounts her personal experience at MIT, where she discovered that facial recognition software failed to detect her dark-skinned face unless she wore a white mask. This stark revelation prompted her to investigate further, leading to the identification of significant biases in AI technologies developed predominantly by non-diverse teams (12:44).

The Coded Gaze and Power Shadows

Delving deeper, Dr. Buolamwini explains “the coded gaze” as a manifestation of who holds the power to shape technology. This concept extends to the “power shadows” within data sets—where the overrepresentation of certain groups (e.g., lighter-skinned males) skews AI training data, perpetuating a narrow and biased perspective of leadership and success (08:22).

She illustrates how data sets, often scraped from images of public officials, predominantly feature men and lighter-skinned individuals, reinforcing patriarchal norms and marginalizing diverse identities. This imbalance not only affects AI accuracy but also influences societal perceptions of leadership and competence (13:03).

Algorithmic Justice League: Advocating for Equitable AI

Abrams and Dr. Buolamwini discuss the founding of the Algorithmic Justice League (AJL), an organization dedicated to combating AI bias and promoting equitable technology. Dr. Buolamwini shares the origins of AJL, emphasizing the necessity of a multidisciplinary approach that includes researchers, artists, activists, and storytellers. "If you have a face, you have a place in the conversation about AI" (19:16).

She highlights the importance of algorithmic audits, which assess AI systems' biases by testing them against inclusive data sets. One notable project, Gender Shades, revealed troubling disparities in gender and skin-type accuracy among AI models from IBM, Microsoft, and Amazon, with error rates soaring over 40% for darker-skinned women compared to near-perfect accuracy for lighter-skinned males (21:41).
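For readers curious how a disaggregated audit works mechanically, here is a minimal sketch in the spirit of Gender Shades (the subgroup labels, records, and predictions are invented placeholders, not the study's data): rather than reporting a single aggregate accuracy figure, the audit breaks error rates out by intersectional subgroup, which is how disparities hidden inside the average become visible.

```python
# A minimal sketch of a disaggregated (intersectional) audit:
# report error rates per subgroup instead of one overall number.
# Records and labels below are hypothetical placeholders.
from collections import defaultdict

# Each record: the benchmark image's subgroup, the true label,
# and the model's prediction.
results = [
    {"subgroup": "darker_female", "true": "female", "pred": "male"},
    {"subgroup": "darker_female", "true": "female", "pred": "female"},
    {"subgroup": "lighter_male",  "true": "male",   "pred": "male"},
    {"subgroup": "lighter_male",  "true": "male",   "pred": "male"},
    # ... a real audit would span thousands of images per subgroup
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [mistakes, total]
for r in results:
    errors[r["subgroup"]][0] += r["pred"] != r["true"]
    errors[r["subgroup"]][1] += 1

for subgroup, (wrong, total) in sorted(errors.items()):
    print(f"{subgroup}: {wrong / total:.0%} error rate ({total} samples)")
```

An aggregate accuracy over these records would look respectable; only the subgroup breakdown exposes the gap.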

Real-World Implications: From Airports to Employment

The conversation shifts to the tangible impacts of biased AI. Dr. Buolamwini recounts a harrowing personal experience with the TSA's facial recognition technology, in which her hair was scrutinized to an invasive degree (32:41). The incident underscores broader issues of privacy, consent, and the potential for AI to perpetuate surveillance and discrimination.

Dr. Buolamwini elaborates on the pervasive use of AI beyond public-facing applications. She cites examples such as healthcare algorithms that influence kidney-transplant decisions and hiring systems that filter out résumés based on biased criteria. These systems often operate invisibly, making their biases all the more insidious and harder to challenge (29:12).

Navigating Ethical AI: Balancing Benefits and Risks

Abrams and Dr. Buolamwini explore the dual nature of AI: its capacity to solve critical problems versus its potential for misuse. Dr. Buolamwini advocates for "inclusive AI," not in the superficial sense of mere representation but in the sense of granting agency and consent to the individuals whose data is used. She warns against "ethical washing," where companies claim ethical practices without substantive accountability measures. "Accountability means not just saying we did our best, but also taking steps to address and rectify issues when things go wrong" (38:18).

Policy and Governance: Learning from Global Frameworks

The discussion touches on legislative efforts to regulate AI. Dr. Buolamwini praises the European Union's AI Act as a comprehensive framework that establishes clear guardrails and a risk-based approach to AI governance. She contrasts this with the more fragmented and nascent efforts in the United States, pointing to the White House's Blueprint for an AI Bill of Rights as a critical framework that emphasizes safe AI systems, transparency, and meaningful alternatives (41:41).

Call to Action: Empowering Listeners to Advocate for Equitable AI

In the episode’s final segment, Abrams and Dr. Buolamwini provide listeners with actionable steps to combat AI bias and promote accountability:

  1. Share Your Story: Personal experiences with AI bias can build an evidentiary record that counters gaslighting and raises awareness. Dr. Buolamwini urges listeners to report instances of AI discrimination at AJL's reporting portal (43:58).

  2. Educate Your Community: Informing others about AI biases and ethical considerations fosters collective action. Dr. Buolamwini highlights the importance of accessible educational resources like her book, Unmasking AI, and the documentary Coded Bias (43:58).

  3. Take Action: Engage with organizations like the Algorithmic Justice League by sharing your stories, donating, or participating in campaigns such as Freedom Flyers. Abrams encourages listeners to spread the word on social media and join the movement for equitable AI (43:58).

Toolkit for Listeners: Practical Resources and Further Learning

Abrams concludes the episode with a Toolkit, offering practical resources to help listeners get involved:

  • Watch Coded Bias: A documentary that provides an in-depth look at AI biases and their societal impacts.
  • Read Unmasking AI: Dr. Buolamwini’s book explores her mission to protect human agency in a machine-driven world.
  • Visit AJL’s Take Action Page: Discover various ways to support and participate in the fight for algorithmic justice.
  • Follow the #FreedomFlyers Campaign: Learn about your rights concerning facial recognition at airports and share your experiences.

Conclusion: A Call for Collective Responsibility

Assembly Required with Stacey Abrams effectively frames AI as a double-edged sword—capable of immense good but fraught with ethical pitfalls if left unchecked. Through an enlightening conversation with Dr. Joy Buolamwini, the episode underscores the necessity of equitable and accountable AI development. Stacey Abrams empowers listeners to take proactive steps in advocating for technologies that serve all communities, ensuring that AI advancements enhance rather than hinder societal progress.


Notable Quotes:

  • Stacey Abrams ([00:39]): "They’re not [the bad guys] just hoping we stop fighting."
  • Dr. Joy Buolamwini ([05:53]): "Data is destiny. If we have data that isn't reflective of the world, or if we have data that's actually entrenching inequalities, those are the patterns AI systems are destined to learn and then to reproduce and then to amplify."
  • Dr. Joy Buolamwini ([19:16]): "If you have a face, you have a place in the conversation about AI."
  • Dr. Joy Buolamwini ([38:18]): "Accountability means not just saying we did our best, but also taking steps to address and rectify issues when things go wrong."

Resources Mentioned:

  • Algorithmic Justice League (AJL): ajl.org
  • Report Portal: report.ajl.org
  • Dr. Joy Buolamwini’s Book: Unmasking AI: My Mission to Protect What Is Human in a World of Machines (paperback available November 19)
  • Documentary: Coded Bias on Netflix

For more insights and to stay updated on future episodes, listeners are encouraged to visit Assembly Required’s contact page or leave a voicemail at 213-293-9509.
