Everyday AI Podcast – Who Gets Written Out of the AI Future?
Episode Date: December 30, 2025
Host: Jordan Wilson (B)
Guest: Bridget Todd (A), Podcast Host at Mozilla Foundation & iHeartRadio
Episode Overview
This episode centers on the critical question: Who gets written out of the AI future? Host Jordan Wilson welcomes Bridget Todd, renowned for her work on technology and identity, to unpack the unseen biases and structural obstacles in AI. Together, they discuss whose stories are being amplified or forgotten in this new technological era, the role of human agency and responsibility in AI, and how to foster a more inclusive, equitable AI ecosystem.
Key Discussion Points & Insights
1. The Marginalization Problem in AI (03:37–04:15)
- Bridget Todd highlights that historically marginalized groups (including people of color, women, queer and trans folks, older adults, youth, and working-class populations) remain sidelined in technology conversations, and this exclusion is reflected within AI systems.
- Quote: "We're not being reflected as meaningfully as we should be." (A, 03:37)
2. Over-Reliance on AI & Its Implications (04:15–05:51)
- The hosts warn against uncritical reliance on large language models, which can mirror and perpetuate society's worst biases.
- Bridget: AI, being built by humans, inevitably inherits human blind spots and foibles. "The danger is that those same pitfalls are just reflected back at us through this powerful technology..." (A, 04:50)
3. Who is Responsible for Bias in AI? (05:51–07:33)
- Responsibility for addressing AI bias spans from developers and companies to individual users and leaders. "I think there's... a lot of people who can have a hand in being the solution as part of that." (A, 06:32)
- Both stress the importance of amplifying voices that challenge the status quo.
4. The Dangers of 'AI Slop' and Passive Content Creation (07:35–09:10)
- Jordan brings up "AI slop": content generated at scale and barely touched by humans.
- Bridget: The foundational issue isn't just technical; it's about the conditions that allow (or prevent) diverse voices from showing up online.
- Quote: "If everybody cannot show up equally to make their voices heard online, we're already limiting the conversation that LLMs are going to be spitting back at us..." (A, 08:25)
5. AI Moderation Bias and Real-World Obstacles (09:10–10:07)
- AI-based moderation tools on social platforms are often not culturally competent, unfairly impacting marginalized communities.
6. Personal Examples of AI Bias (11:03–12:25)
- Bridget cites Canva's AI tools, which once deemed Black natural hairstyles "inappropriate," excluding Black women from representation.
- Jordan notes that image generators like Midjourney defaulted to white, older males for prompts like "CEO."
- Quote: "If I can't go onto Canva and say, generate an image of a black woman with Bantu knots... I don't exist as it pertains to Canva's AI." (A, 11:03)
7. Human in the Loop: Maintaining Creativity and Humanity (13:10–14:31)
- Concern around agentic AI: humans must remain central in creative and business processes.
- Bridget: Emphasizes that AI should be a tool for humans, not a replacement for human connection, trust, and community.
8. Personalization Pitfalls: Echo Chambers in AI (17:35–20:27)
- As AI models personalize outputs, there's a risk of reinforcing users' existing biases, creating "AI echo chambers."
- Both urge users to challenge themselves and avoid falling in love with their "AI in the mirror."
- Quote: "...it's just mimicking how I sound. ... Like the myth of the person falling in love with the mirror or the reflection, that's what I'm doing. And, like, that's not good writing." (A, 18:44)
9. Combating Algorithmic Echo Chambers (20:27–22:20)
- The importance of diverse perspectives is underscored. Bridget, a self-proclaimed tech optimist, makes a point to listen to critical voices to avoid becoming "stuck in the AI echo chamber."
- Quote: "Really making sure that you've got a healthy, robust diet of folks in the conversation that don't always look like the people that we think of being amplified as leaders..." (A, 21:05)
10. Actionable Takeaways for Business Leaders (23:00–24:53)
- Trust in media is eroding due to low-quality AI-generated content. Thoughtful, trustworthy human-generated content will be more valuable than ever.
- Bridget: "[People] who can make good, thoughtful, trustworthy content are going to be at a premium." (A, 23:55)
11. Final Advice: Redefine Who Leads in AI (25:14–26:19)
- Challenge preconceptions of who's a leader or key voice in tech and AI.
- Bridget shares a story from Mozfest Barcelona, where activists are using AI for "inverse surveillance" to hold power accountable, demonstrating creative, community-led uses of the technology.
- Quote: "Really challenging what we think about when it comes to who is a leader in technology and AI and how they are using that technology to really shake things up and change the conversation." (A, 25:14)
Memorable Quotes
- "[AI] is built and trained and designed by all of us humans. So all of the blind spots and foibles and biases that we already know humans have ... the danger is that those same pitfalls are just reflected back at us through this powerful technology via AI." (A, 04:50)
- "If it wasn't worth your time to create this as a human, why is it worth my time as a human to listen to it, or read it, or to engage with it?" (A, 16:21)
- "We should all be, ... using [AI] in a way where it's not just setting us up to fall in love with our own voice." (A, 18:44)
- "Who gets written out of the AI future? It's not just about who's on the inside, but also, are we allowing diverse enough voices to be written in?" (Paraphrased, B, throughout)
Notable Timestamps
- 03:37 – Who is currently at risk of being excluded from AI's narrative?
- 11:03 – Bridget's Canva example and Jordan's Midjourney study on bias.
- 14:31 – The essential role of humans in creative and professional processes.
- 18:44 – On personalization, echo chambers, and falling in love with AI's reflection of oneself.
- 21:05 – How to seek out a diversity of voices in AI conversations.
- 25:14 – Final advice: Redefining leadership and inclusion in the AI era.
Conclusion
This episode provides a deep, nuanced examination of how AI can inadvertently perpetuate exclusion and what can be done, at both individual and systemic levels, to foster a more inclusive AI future. Bridget Todd emphasizes the need for critical engagement, conscious consumption, and elevation of underrepresented voices. The episode closes with a challenge to listeners: actively redefine who leads, who creates, and whose stories populate the AI-driven digital landscape.
