
A
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B
It's getting to the point where a lot of people are using AI for everything: helping them think and plan, helping them strategize, really in both their personal and professional lives. I think AI has gotten to the point where people have maybe become overly reliant on it. And yeah, you might say that's an okay thing, because today's large language models are very capable and actually very smart. But have you thought about what comes out of these models and where it's actually originating from? I've said it probably dozens of times on this show: large language models are not perfect. In fact, they're often very bad, because they can reflect the bad parts of society: sexism, racism, really just mirroring some of the worst there is on the Internet. So if we as a society are just blindly copying and pasting anything that comes out of a large language model, whose story are we not telling, and who might be getting written out of the AI future? I don't have the answers, but our guest today is a lot more experienced in this area than me, and I'm excited to talk about it. All right, let's get into it. If you're new here, welcome. My name's Jordan. This is Everyday AI. This is your daily livestream, podcast, and free daily newsletter, helping everyday business leaders like you and me not just keep up with everything that's happening in the world of AI, because it is nonstop, but make sense of it: extract the insights that matter and grow our companies and our careers. So if that's what you're trying to do, awesome. It starts here with the unedited, unscripted livestream podcast. But if you want to take it to the next level, make sure you go to our website at youreverydayai.com. There we'll be recapping the highlights from today's show, as well as all the other AI news that you need to know. Enough of me chitchatting. I'm excited to talk to today's guest. So, livestream audience, please help me welcome to the show Bridget Todd, podcast host at the Mozilla Foundation. Bridget, thank you so much for joining the Everyday AI Show.
A
Thank you so much for having me. I am so excited to be here.
B
All right, yeah, me as well. Tell everyone a little bit about your background, because you do a lot on the tech scene and the podcasting scene.
A
Yeah. So I am the host of a couple of different podcasts about the intersection of technology and identity. One is a show that I make with the Mozilla Foundation called IRL. It really is an examination of who has the power in AI, and who is using AI and technology to interrogate power. I also host iHeartRadio's tech and culture podcast, There Are No Girls on the Internet. It's kind of about the same thing, but from a little bit of a different approach: all about the intersection of identity, social media, and technology, and how it shows up in all of our lives.
B
All right. Yeah. If you're a podcast fan listening, well, you probably are, because you're listening to us talk. Make sure to go check out Bridget's shows. They're great. So maybe let's just start at the end here, Bridget. I know this is maybe one of those episodes where there are no right and wrong answers; we might get into more philosophical discussions here. But ultimately, right now, who is getting written out of the AI future?
A
Oh, my biggest concern is that it's really people who are traditionally marginalized in all conversations about technology. One of the foundational ways that I think about this is obviously identity. So racialized folks, people of color, women, but it's also queer folks, trans folks. It's also older folks, youth, working-class people. I really want to make sure that all of us, everybody, is able to be included in these conversations that are so impactful to all of our lives. The same way that these people are often pushed to the sidelines in conversations about technology more broadly, the same thing is happening in AI. We're not being reflected as meaningfully as we should be.
B
And so what does that ultimately lead to in reality? I kind of touched on that in the opening of today's show. I do think now more than ever, people are using AI everywhere. Are there dangers, maybe, to using AI a little bit too much, specifically for some of those reasons? Because not all large language models are going to properly reflect the good in society.
A
Yeah, I love this question. It's something that comes up a lot on the podcasts that I make. It's very easy to talk about things like AI as this sort of all-knowing computer brain that knows everything, all-powerful, all-seeing. But the reality is that AI is built and trained and designed by us humans. And so all of the blind spots and foibles and biases that we already know humans have and can reflect, and I don't think I'm telling anybody listening anything they don't already know, the danger is that those same pitfalls are just reflected back at us through this powerful technology. And so I think it really has to be a conversation of understanding and then interrogating what that actually means for humanity. If the same biases that we all walk around with every day are just being reflected back at us through this technology, it really behooves us to think about how we're going to use it. How is that reality going to shape what role we allow it to play in our everyday work lives?
B
And this one might be a loaded question as well, because we get into things like training data and reinforcement learning from human feedback and some of the more technical sides of AI. But who's ultimately the person or entity or company responsible? Is it the people making decisions about training data for large language models? Is it the companies training these models? Is it the companies and teams using them, who might ultimately have to play a more pivotal role in order to get outputs that are more reflective?
A
Oh, what a good question. I mean, it's kind of a good-bad thing, right? I would say all of us: everyone you just named, every entity, every company that you're thinking of, bears some responsibility here. But that's also kind of a good thing, because it means there are a lot of people who can have a hand in being part of the solution. I don't have a direct line to Sam Altman or anybody at OpenAI, but I know that I can listen to the folks who are changing the conversation around AI and amplify those folks as leaders in my own mind. And so I think it really can start with all of us making an individual choice to think differently about AI and the conversations we have about it, to think differently about who it is that we consider experts and leaders in these conversations. It sounds kind of duh and basic, but I think starting there can be a really good way to start that conversation and start that change.
B
Yeah, I think you bring up a good point. There's no one party or one piece in this that's maybe more responsible than the others. But I'm wondering about today, because improving training data, or changing how frontier labs make models, is probably a much bigger and longer process than we might think. In terms of what we use: maybe two or three years ago, when ChatGPT first came out, I don't think companies were just blindly copying and pasting things. But now we have this whole problem with AI slop. Everyone's trying to create content at scale and barely touching it. What are some of the dangers in that, in companies maybe being too hands-off and just using whatever the model spits out?
A
Yeah. I mean, I think it really comes back to the idea that you and I were talking about off mic, that the Internet and technology have got to become a less hostile space for people who are traditionally marginalized. If everybody can't show up equally to make their voices heard online, we're already limiting the conversation that LLMs are going to be spitting back at us when it comes to those folks. And so it doesn't just start with AI companies. It really does start with: what are the conditions that are allowing people to show up, or not show up, in technology and online? And how is that reflecting a warped and biased view of marginalized people in AI?
B
Is there an answer to that right now? Or are there so many answers? What are some of those conditions, for myself and others in our audience who maybe just don't know some of those obstacles and roadblocks?
A
Yeah, I mean, I think one of them is the use of AI to do moderation on social media platforms. We already know that a lot of that is super biased, and it's biased for a lot of reasons. When I have these conversations, I think people assume that I'm saying, oh, a big bad rich white guy is making evil decisions because he's an evil person. That's not what I'm saying at all. I typically mean something like: someone is deploying AI-based moderation tools, those tools are not always culturally competent, and so they're biased against cultures the tools might not be able to understand. Things like that can mean marginalized people aren't able to show up in a way that's equitable online.
B
Yeah. And I think a lot of that just goes back to, again, this isn't a very technical show, but algorithms, right? Whether you're talking about social media, or the machine learning and deep learning that led to today's large language models, they're opaque. Not everyone knows exactly how all of these things work, which maybe makes it a little more difficult to do something about it. I'm curious, and I've seen personal examples of this that I'm happy to share, but do you, Bridget, have any firsthand accounts, when you've been using any AI tool, whether it's text, photo, or video, where you thought: wait, this doesn't seem right? This output doesn't really seem reflective of the society that I know?
A
Oh, my gosh. I have a good example. This might sound like kind of a weird one, but when Canva was rolling out their AI tools, there was a whole thing where you were unable to ask it to generate Black women with Black natural hairstyles, because for whatever reason, hairstyles like the one I'm wearing right now were deemed inappropriate by their tool. I don't think those kinds of things happen because somebody is trying to do something bad or nefarious. It's just one of those things where somebody who is culturally competent wasn't part of making that decision. And that kind of decision might seem small, but it leads to Black women like myself being left out of the conversation when it comes to AI. If I can't go onto Canva and say, generate an image of a Black woman with a Bantu knots natural hairstyle, because it triggers whatever they think is against their community guidelines, then I don't exist as far as Canva's AI is concerned. And so there are all these very little ways that marginalized people are being left out and unseen, ways that I think reflect the larger hostility and bias that is sometimes present in our technology landscape more broadly.
B
Yeah, I think that's a great call out and a great example. I've maybe talked about this once or twice, but we kind of did a very informal study, I wouldn't even call it that, where we asked earlier versions of Midjourney for a picture of a CEO. And almost every single time it was a white guy, probably in his 50s, without fail. So with these issues, clearly anyone who's a power user of any large language model has seen their fair share of examples, yet at the same time they've probably overlooked more than they've spotted, which makes it hard. So there's always this ongoing conversation about the importance of the human in the loop. And as large language models become more agentic, with multi-agent orchestration and agents working with other agents, what role does the human play in all of this? And how should business leaders, in a reasonable and responsible way, be auditing what these models spit out?
A
Oh my gosh, what a good question. I always say that we have to remember the work we're putting out. For the most part, I can speak for myself as a creative: I am a human that makes work for other humans. I use tools like AI to help me produce that work, but ultimately there need to be humans in the equation, because at the end of the day, that's why I'm doing this. And so the future that you've just laid out, where AI agents are communicating with other AI agents, to me that's really scary, because I think it leaves out the humanity and the humanness that should be at the center of why you do any of this. So I would say it really comes back to remembering that the reason we're doing this work is, at the end of the day, about humans and our humanity.
B
Humanity is such an important discussion. And I always have to remind myself, as someone that talks about AI literally every single day: all the information that comes out of these models originated with a human. There was someone's story, whether it's a boring business story or a story of courage and something triumphant; it always goes back to a human. And ultimately, whatever we create will be consumed by humans, too. What advice do you have, maybe, for people to focus on the human when AI is everywhere? Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more but can't really get traction to find ROI on gen AI. Hey, this is Jordan Wilson, host of this very podcast. Companies like Adobe, Microsoft, and Nvidia have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use gen AI. So whether you're looking for ChatGPT training for thousands or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com/partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles, help get your team ahead, and build a straight path to ROI on gen AI.
A
This is going to sound wild, but do you remember that old clothing brand from the '90s, FUBU: For Us, By Us? That is my orientation in this: this work is for us, by us, and the "us" is humans. I can't tell you how many times I've seen somebody so excited on Reddit or a message board to share something that they wrote or built using AI. Like, I'm in the NotebookLM community, where people are really excited to share their AI-generated podcasts. And oftentimes the top comment will be: if it wasn't worth your time to create this as a human, why is it worth my time as a human to listen to it or read it or engage with it? Something about that comment really changed my understanding of the role that AI can play for me as a creative professional. The reason why I do this work is about trust and connection and community, and those are traits that humans have. So we should really make sure that all of the things we do are grounded in the things that make humans great, and not try to outsource that to AI. Because when you do, your audience is going to know, and they're going to be like: oh, this actually wasn't worth your time to make, so I'm not going to spend my 20 minutes listening to it.
B
Yeah, that's great. I love that, reflecting it back: AI for us, by us. We'll have to see if Daymond John can sign off on that. But you bring up another topic that's actually really interesting. You mentioned NotebookLM and the AI-generated podcasts, but I think AI, even more now than six months ago, is starting to reflect the person using it more and more. All of the major models have now released things like personalization, memory, or past-chat history, where the responses maybe reflect your own preferences, and sometimes even your own biases. So, throwing that on top of the issues with training data: even with that in mind, should individuals be going in, checking their settings, and making sure things in their custom instructions maybe challenge their own preconceived notions?
A
Yes, yes, yes. I actually had an issue with this myself, because I will use ChatGPT to help me write captions and descriptions and metadata for my podcast. And I caught myself, whatever ChatGPT spits back out at me, being like: oh, this is so good. This is phenomenal. I don't even need to edit this. But I've come to realize I think it's good because it's just mimicking how I sound. It's not actually challenging me or giving me anything to actually think about. I'm just like the myth of the person falling in love with the reflection in the mirror; that's what I'm doing. And that's not good writing. That's not good creative work. Good creative work is challenging. As someone who talks about AI and creativity a lot, I was surprised how quickly I fell in love with the sound of my own voice being reflected back at me. And I say that to say: if we're going to be using this tool that is so powerful, it really behooves us all to use it in a way where it's not just setting us up to fall in love with our own voice, where it is challenging us, where we're asking it to give us a little bit more pushback. I've found that to be very useful in my own use of AI for my own creative work.
B
Yeah, I think that's a good call out. And yeah, being careful not to fall in love with the AI in the mirror, so to speak. I think it also ties back a little, Bridget, to how social media algorithms over the years have kind of done the same thing, getting very deeply ingrained, especially on social media, with just being an echo chamber. Everything you see only gets you deeper and deeper into whatever belief you have, whether that's for a good reason or a bad one. Unfortunately, I think a lot of times it's the latter. How can we look at AI and maybe prevent that? How can we prevent the echo chamber and the algorithmic divide that social media has caused, if AI becomes as used as, or even more used than, social media?
A
I think it's really about being intentional about curating the voices that you consume and listen to and amplify when it comes to AI. Like, I am someone who is just always going to be a tech optimist. However, I need to make sure that I'm listening to voices that are critical about AI, because otherwise, I know myself: I'm prone to be like, this technology is great, it's going to change all of our lives, no problems whatsoever. That's not great. But I also don't want to be somebody who is only listening to voices that are super skeptical. I think it's tough, and it involves being okay with hearing opinions and attitudes and takes that you're not always going to agree with, but that's a part of it for me. So I think it's about being really intentional about who you're curating when it comes to who's saying what in AI, really making sure that you've got a healthy, robust diet of folks in the conversation who don't always look like the people we think of as being amplified as leaders when it comes to technology and AI.
B
That's good. Yeah. It's almost like the political equivalent: if you're always watching CNN, maybe you need to turn on Fox News, and vice versa. I think that's a good call out. Having to intentionally listen to voices that go against your current grain is maybe something that's healthy for a lot of us.
A
I gotta tell you, I'm only coming to realize how susceptible I am to the voices I surround myself with. If I listen to voices that are critical, and we should be listening to critical voices when it comes to AI, I will talk myself out of use cases that I already know AI has been really helpful for me personally with. I'll be like: oh, AI can't do that. And it's like: well, actually, you use AI for that all the time. Why are you believing it can't do this when you use it that way all the time? So yeah, I've only come to realize I'm very susceptible to the voices that surround us when it comes to technology conversations.
B
You know, Bridget, you've kind of already helped us traverse the main topic of today's episode, who gets written out of the AI future, and a little bit as well, helped us be more cognizant of maybe not too many of the same people getting written into the AI future. Here we are at the end of 2025, and I know a lot of business leaders are looking forward to 2026 and starting to get their AI agenda in order. What are some other, maybe personal, concerns about AI, specifically when it comes to making sure the people using it are telling a story that's reflective? I know none of us knows what's going to happen in 2026, but as someone who covers the technology as closely as you do, what are some of the other things you're worried about or looking at?
A
Oh, I think one of the things I'm worried about is, well, you mentioned AI slop earlier. I am incredibly worried about the general state of trust as AI shows up in more and more of our media spaces. It's one of the reasons I'm also, weirdly, kind of excited about the rise of AI in some of these spaces, because I think that humans who can make good, thoughtful, trustworthy content are going to be at a premium. So I'm excited for that outcome. But I am worried about the devaluation of trust in our media and online spaces more generally, because people see things that are just badly generated AI and they say: oh, I don't need to pay attention to that. And in a media climate where people have been trained by the ubiquity of bad AI content that they don't need to pay attention to anything, I worry about the ability of good voices to rise. I don't know if that answers your question, but it's something that I spend a lot of time thinking about.
B
All right, so we've covered a ton in today's episode, Bridget. But as we wrap, what is your one most important takeaway when we think, and hopefully act, on who gets written into the AI future and, maybe unfortunately, who gets written out?
A
My one most important takeaway, and I know I said this earlier, is really just to make sure that you're challenging your own idea of who is a leader and whose voice you should be following and amplifying when it comes to conversations in technology more broadly, but specifically AI. There are so many people and so many fascinating stories: activists, advocates, artists who are using technology like AI in groundbreaking ways that really challenge what we think of as power, who holds it, and how it is used. I just left Barcelona, where I was at Mozilla's MozFest, and we had a live conversation with Barcelona-based activists who are using AI to do inverse surveillance of the government and people in power. So often we think about AI-based surveillance as the government surveilling all of us little people, and they're inverting that, flipping it on its head, and saying: no, what if we use technology and AI to watch the watchers? So I would say: really challenge what we think about when it comes to who is a leader in technology and AI, and how they are using that technology to shake things up and change the conversation.
B
That's a great way to end today's conversation. Yeah, inverse surveillance with AI, love to see it. Bridget, thank you so much for your time and for joining the Everyday AI Show. We really appreciate it.
A
Thanks so much for having me. This has been great.
B
All right. And if you missed anything, y'all, don't worry. It's all going to be in today's newsletter. So if you haven't already, please make sure to go to youreverydayai.com. We're going to be recapping today's conversation there, and a whole lot more. Thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers, and we'll see you next time.
Everyday AI Podcast – Who Gets Written Out of the AI Future?
Episode Date: December 30, 2025
Host: Jordan Wilson (B)
Guest: Bridget Todd (A), Podcast Host at the Mozilla Foundation & iHeartRadio
This episode centers on the critical question: Who gets written out of the AI future? Host Jordan Wilson welcomes Bridget Todd, renowned for her work on technology and identity, to unpack the unseen biases and structural obstacles in AI. Together, they discuss whose stories are being amplified or forgotten in this new technological era, the role of human agency and responsibility in AI, and how to foster a more inclusive, equitable AI ecosystem.
The conversation offers a deep, nuanced examination of how AI can inadvertently perpetuate exclusion and what can be done, at both the individual and systemic levels, to foster a more inclusive AI future. Bridget Todd emphasizes the need for critical engagement, conscious consumption, and the elevation of underrepresented voices. The episode closes with a challenge to listeners: actively redefine who leads, who creates, and whose stories populate the AI-driven digital landscape.