Emily Bender
People aren't meant to be useful. People are people. And the thing about human rights and human dignity is that we are valuable because we exist, period. And we don't have to justify our existence.
Host (Sam, 404 Media)
Hello and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds, both online and IRL. 404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co. Subscribers get bonus content every single week: access to additional episodes where we respond to the best comments, and early access to our interview series too, like this one. Gain access to that content at 404media.co today. I'm here with Alex Hanna and Emily Bender, authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, which came out last spring. Unbelievable that it was last spring; it feels like yesterday. Dr. Alex Hanna is a writer and sociologist of technology, labor and politics. She's the Director of Research at the Distributed AI Research Institute, also known as DAIR, and a lecturer in the School of Information at the University of California, Berkeley. Dr. Emily M. Bender is a Professor of Linguistics at the University of Washington, where she is also the faculty director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. Welcome. It's so good to have you both. I'm so excited that we're here today.
Emily Bender
So fun to be here. And so fun, like, I listen to your podcast, so I hear that intro a lot, and it's really fun to hear it as it happens.
Host (Sam, 404 Media)
Likewise, Joseph does a better job of it. He's got the accent going for him, so that's okay.
Alex Hanna
I'm not. I'm not charmed by that accent.
Host (Sam, 404 Media)
But you're not here. Sam, thank you so much for being here. I will also note that you two host the Mystery AI Hype Theater 3000 podcast, which you're going to go record one of right after this. I'll just quote from the description: it "deflates AI hype and draws attention to the real harms of the automation technologies we call 'artificial intelligence'" (quotes, for the people who are not watching on YouTube). I enjoy hearing how this podcast started, so I'm going to ask you how it started. But I was also a huge Mystery Science Theater 3000 fan as a kid. I would say it influenced my sense of humor, maybe for the worse, maybe for the better. Hard to say; I will leave that to other people. But yeah, before we dive into that and the rest of your work, can you just give us a little bit of background? The quick how-you-got-here pitch: how you got into your respective fields, and how did you eventually meet each other?
Emily Bender
Oh, this is a fun story. Maybe I'll start by saying my background
and then Alex, and then we can
get into how we met. But I'm in linguistics, and linguistics, broadly speaking, is the study of how language works and how we work with language. So it is incredibly relevant to the current moment. And I work specifically in computational linguistics, which is about getting computers to deal with language. And Starting in about 2016, I started paying attention to what I now refer to as societal impacts of language technology, sometimes called ethics and natural language processing. And that started in large part because actually a former PhD student in my department who's on the advisory board for our program, Leslie Carmichael, said, hey, you should have an ethics class in that program. And I was like, oh, good idea. And then I couldn't find anybody to teach it. It's like, all right, I guess I'm doing this. And I got organized first, basically by learning from people on Twitter. This is 2016, so Twitter was still Twitter, solidly including Alex, and sort of went from there. I taught that class for the first time in January 2017. Got very concerned with language technology and the ways that language variations of different people speak differently is not being handled on language technology and sort of all the possible downstream impacts of that. And also I noticed in about 2019 that folks in my field were super excited about language models and were making outlandish claims about them understanding. So I started pushing back pretty hard on that, and that led to eventually connecting with Alex. But I'll let Alex tell her background first before we talk about that part.
Alex Hanna
Yeah, I got into this because since the start of my Ph.D., which I started in 2009, I was really interested in the interaction of tech and society. I did an undergrad degree in computer science as well as sociology, and so I basically found a way to be interested in both of them. I was initially interested in how social movements use social media, and so I had done some early work around Egyptian social movements, especially around the Arab Spring but even prior to that, and wrote my master's thesis on a movement that used Facebook, called the April 6 Movement, in 2008. And so then I kept on doing a lot of work around social media and politics. I also became more interested in computational social science as a way of studying these movements, so analysis of social media data, trying to understand how movements engage in discourses online. And then I did my dissertation building a relatively small language model which did basic classification of features of protests as mentioned in newspapers. And the more I was doing that work, the more I was concerned about surveillance and the ways that tools like that could effectively be used to identify social movement actors and social movement organizations in a systematic way. So I got very interested in investigating the ethical dimensions of machine learning and the things that get looped into AI now. That got me into the fairness, accountability and transparency space. I went to the first conference in 2018, but also went to a workshop in 2017 organized by my friend Anna Lauren Hoffmann and a few other folks, and then also became really interested in the status of data with regards to AI, quote, unquote. And I'll kick it to Emily to talk about how the podcast got started and how the book got written.
Emily Bender
Yeah. So in 2020, on the strength of having been interacting on Twitter, Alex and some of her colleagues looped me into a group that was working on some papers, which eventually included "AI and the Everything in the Whole Wide World Benchmark," which is inspired somewhat by the children's book Grover and the Everything in the Whole Wide World Museum. So that was really fun to work on. This is 2020, so we're doing this work remotely over Zoom. I think we ended up writing three papers together, and then that group was basically done with the academic work. The Zoom meetings stopped, but we had a group chat going, and we had a couple other group chats with other colleagues. And many people among those groups were doing this work of debunking AI hype, where we would see hype artifacts and then write tweet threads making fun of them, basically saying, this doesn't make sense because. And then we came across one that was a video of someone giving a talk. So in one of our group chats, I'm like, how do you do the debunking when it's a video? And our colleague Meg Mitchell says, oh, well, you have to give it the Mystery Science Theater 3000 treatment. So that's where the idea comes from. And then a little bit later, we came across a textual artifact, a long blog post that Medium estimated as a 60-minute read, by Google VP Blaise Agüera y Arcas, entitled "Can Machines Learn How to Behave?" And I'm like, ugh, this would take so long to do bit by bit as a tweet thread. We gotta give this the MST3K treatment. Who's in? And I have to say, to that point I had actually not watched the show. I knew the concept, but I hadn't watched it. But Alex, being the big fan that she is, was like, I can do it. So you wanna pick it up from there, Alex?
Alex Hanna
Yeah. I mean, so we decided to. Well, I was like, well, why don't we do a Twitch stream? That's something people do, right? And so I literally downloaded, I forget what the software was, it was Streamlabs or something, and we tried to do it, and immediately ran into sound issues. And then I taught myself OBS on the fly, which I don't really suggest doing. And then.
Host (Sam, 404 Media)
Definitely not.
Alex Hanna
No, it was. It was, I don't know, the most
Host (Sam, 404 Media)
impressive thing I've heard yet.
Alex Hanna
Honestly, I don't know.
Host (Sam, 404 Media)
Your degrees and positions aside, learning OBS
Alex Hanna
in 15 minutes to do a live stream, probably one of my greatest accomplishments.
Host (Sam, 404 Media)
Yeah.
Alex Hanna
And then I learned how to do it, did the broadcast, and we only got through about 10 minutes of it. And then we did a second and then a third one, and then I kept on doing it. We got to episode eight, then hired a producer, and then we ported all those things to podcasts, and now do a stream and a podcast. And then Emily was approached by an agent to consider a book after, was it New York Magazine or The New Yorker? I always confuse the two.
Emily Bender
New York Magazine.
Alex Hanna
Yeah. They did a profile on her, and then Emily looped me in, and then we managed to write a book, all remotely. And the kicker to all this is, Emily was in town in the Bay Area to do an event, the great chatbot debate that was held at the Computer History Museum in March 2025. And after all that, we finally met in person. So basically, closest collaborators: we're recording podcast episode 74 today, three papers, a book, all that, and we finally met in person in March. And now we get to hang out a lot because we're doing fun book events and talk a lot online for things like this.
Emily Bender
Exactly.
Host (Sam, 404 Media)
Love it. Love it so much. Thank you for the quick and dirty rundown. Something that I think is from the book, this is a phrase that you use in the book: ridicule as praxis. I think that's the ethos of the podcast and the show, right? That's kind of the idea here. I find that so refreshing, because so much of everything now is really, really depressing. And we need that kind of refreshing perspective, that everything is shitty and getting shittier, to paraphrase another great thinker of our time. But I would say the ridicule as praxis concept is really important. I'll just quote straight from the book, if it's not too cringe to quote your book in front of you: "Resisting hype can also be empowering, grounding, and even joyful. It's empowering to reaffirm the value of our skills and expertise. It is grounding to lean into the value of human-to-human connection, of being human together. And it can be flat out fun to find the silliest excesses of the hype machine and deflate them with ridicule as praxis." I love that so much, and I think that's something that keeps me doing the work that we do at 404. The subtext of all the stories that we're writing is, you know, people are pushing back against this stuff, and people are saying it's still important to be human. We're not just going to roll over and say, sure, the machines can take over. So I would love to hear more about that. What does that mean to you as people who coined that term, but also are doing that work every day?
Alex Hanna
Yeah. So this is great. And also, just to give a shout-out to y'all: I did a search through the book, and in our extensive endnotes, I think there's 55 pages, I believe y'all are cited 12 times, if I counted correctly. We really, really rely on your reporting, which is fucking phenomenal, 'cause it's so important. And y'all, I just think, are really punching above your weight for the size of your team and the amount of stuff that you cover. So kudos to y'all. And then, so, ridicule as praxis. I have the honor, like, I came up with that term, and I'm very proud of it. I was speaking on a panel with an author I love, Carmen Maria Machado, and I said "ridicule as praxis," and she cackled. And I was like, oh, I can die now. Someone whose poetics I really appreciate liked a phrase of mine. We're done. But I think the ethos of it is, so much of what we encounter, and I'm sure y'all get this as well, is really depressing. In my lowest moments, I'm like, these are the worst people in the world, with the most money that has ever been had. And how does it feel? What can we do right now? And we get this question a lot too: how do you actually slog through this? This stuff is terrible. And it is like, you gotta make fun of it. You have to really engage with it and really engage in creating joy with other people and really sorting through it, 'cause otherwise it's just infuriating. And it is really maddening. I mean, like we do on the podcast, people who are in these fields just say the stupidest shit. They are making the most ridiculous claims. We had Adam Becker on the show, and hopefully the episode with him will be out before this one is.
I don't know the timelines, but basically Adam is a physicist, he's got a physics PhD, and he wrote a great companion book, More Everything Forever. And one of the things that we were talking about is data centers in space. The claims made on the SpaceX site were like, space is a great place to put data centers because it's cold. And it's like, okay, it's cold, but it's also a vacuum, and vacuum is a fantastic insulator. You're not going to get this natural dissipation of heat just because of the coldness of space. And the thing is, yeah, Elon Musk makes these claims because people think he's smart. But no, it's ridiculous, and it ought to be made fun of, and we need to punch up and use humor to deflate these claims. So, I mean, ridicule as praxis as an ethos has worked pretty well, and it keeps us sane as well.
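[Editor's note: Alex's vacuum-insulator point can be checked with a back-of-envelope calculation. In vacuum there is no air to carry heat away, so a spacecraft can only shed waste heat by thermal radiation, per the Stefan-Boltzmann law. The sketch below is illustrative only; the 1 MW load, 300 K radiator temperature, and 0.9 emissivity are assumptions chosen for the example, not figures from any real proposal.]

```python
# Back-of-envelope check of the "space is cold, great for data centers" claim.
# In vacuum there is no convection or conduction to the surroundings, so the
# ONLY way to shed waste heat is thermal radiation (Stefan-Boltzmann law):
#   P = epsilon * sigma * A * T^4
# The figures below (1 MW load, 300 K radiator, emissivity 0.9) are
# illustrative assumptions, not specs from any real proposal.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts: float, temp_kelvin: float,
                     emissivity: float = 0.9) -> float:
    """Radiator area needed to reject `heat_watts` purely by radiation,
    ignoring absorbed sunlight and the ~3 K cosmic background."""
    flux = emissivity * SIGMA * temp_kelvin ** 4  # W per m^2 of radiator
    return heat_watts / flux

# A modest 1 MW server load with radiators running at 300 K (about 27 C):
area = radiator_area_m2(1e6, 300.0)
print(f"{area:,.0f} m^2 of radiator per megawatt")  # roughly 2,400 m^2
```

Even under these generous assumptions, a single megawatt needs on the order of a couple thousand square meters of radiator, which is why "space is cold" does nothing to make cooling easy.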
Emily Bender
For sure. For sure. We do periodically these episodes that we call all hell episodes, where we go through 25 to 30 things, rapid fire. And on the one hand, it's a lot of terrible. On the other hand, it's really cathartic to sort of like go through all of that and laugh at it. What's been really important to me about the podcast is the way it has sort of catalyzed a community. We found that the people who listen, especially our livestream viewers, often tell us they thought they were the only ones. Everyone around them seemed to be taken in by this and they felt very isolated. And so by basically planting a flag, you know, and Doing our ridiculous praxis, we've sort of created some ground for people to come together and meet. And that has been awesome. And I'm reminded of I gave a talk at UW a couple of weeks ago to a large audience, including students and members of public and faculty. And this one student said, how do I refuse and resist without being a stick in the mud? I'm like, no, be a stick in the mud. Right? Because if you're a stick in the mud, you're sort of creating space that can sort of solidify solid ground that other people can come stand on with you. And I think that that's like the serious back end of the ridiculous praxis is that we are saying no, we're standing firmly on our understanding of truth here, so firmly that we can make fun of them and you can come join us here.
Host (Sam, 404 Media)
Love that so much. Yeah. Get a little ecocultural restorative marshland in this thing. Be a stick. Be a stick.
Alex Hanna
Exactly.
Host (Sam, 404 Media)
I love that so much. Yeah. So I guess just to back up a little bit, because maybe there are people... I feel like 404's audience is familiar with a lot of these topics because of what we write about. But for people who are listening and are like, what are they talking about? What are these terrible things? Why do they keep putting air quotes around AI? I guess maybe the first thing that we need to establish is: why do we keep saying "AI" in air quotes? How is the term AI itself part of AI hype, as a marketing ploy?
Emily Bender
Yeah, so it is a marketing term. One of the things I've learned from Alex, as a sociologist, is always be historicizing. So if we go back to the origins of the term, it's part of a grant proposal by John McCarthy and colleagues for the 1956 Dartmouth workshop, so written in 1955, where they were trying to get some money to basically hang out with some friends for the summer and work on some things they wanted to work on. They needed a word that could loop it all together, and so they called it artificial intelligence. We actually have a fun sort of throwback episode of our podcast where we go through that document and apply ridicule as praxis. But from the start it was basically a way to say, give us money. And it's doing the same thing now. In fact, there are sort of two ways that the term itself is doing the hype. One is that it draws on notions of intelligence as something that is supposedly shared between humans and machines, where you can rank people and then rank the machines, and there's a whole horrible history that we can get into there. But also, when you lump together chatbots and image generators and license plate readers and protein folding algorithms and chess playing engines and so on as one thing, then it sounds like it is one thing that is, in quotes, smart, and, in quotes, getting smarter. And the illusion of cognition or intelligence that we get from the synthetic text coming out of the chatbots is then also papered over everything else. And it starts to seem like maybe it would be a good idea to use this, in quotes, artificial intelligence to make consequential decisions.
Alex Hanna
Yeah, and I think that's very helpful. And we're following a few other folks here. There's a great essay from Emily Tucker of the Center on Privacy and Technology at Georgetown, and the essay is called "Artifice and Intelligence." It's effectively about why they're saying they're not going to use the term AI, or even machine learning, basically because they want to be very exact about the technology they're describing. This is a place that has produced some really helpful things, like The Perpetual Line-Up, which focuses on facial recognition. And I think it's pretty helpful to distinguish between these things. I mean, if we talk about something like Flock and say Flock is AI-enabled, well, what is Flock actually doing? Flock is an automated license plate reader, right? And then if they have a partnership with ShotSpotter, I don't know what ShotSpotter's new name is, what is that doing? It's audio classification, trying to distinguish between gunshots and fireworks or whatever. And so it's helpful to understand what those are, because it gives you a little bit more insight into what the technology is and what it can and can't do. And I think the companies have done a lot of work to paper over those differences, and just to say you can feed any kind of modality into ChatGPT and it's going to tell you what this is, and it really obscures what's happening under the hood. For some things we know what's happening under the hood, like large language models; we're assuming that multimodal things are basically doing this kind of pattern matching and then doing some kind of transformation. But it makes sense to distinguish, because it helps us be more precise about where to critique and how to critique, and also to interrogate what kinds of automation could be desirable if constrained in a certain way, and what kinds of automation should not be used in any way, shape or form.
Host (Sam, 404 Media)
Yeah, for sure. And I think a lot of the insidiousness of AI when it's used by these really big companies such as Flock, that are trying to gain footholds wherever they can, especially in the US, is that it makes it all sound very inevitable. Flock is everywhere, AI is everywhere. What is AI? We don't know, but you know it's in Flock, and Flock is in your town. And I think Flock especially is a good example of that not necessarily being true, because people have fought back against it and kicked it out of their towns, kicked it out of their counties, or not allowed it to be partnered with local police in their counties to begin with. They feel empowered by knowing what it does, knowing that they don't want that, that it's an overreach and an overstep that they don't want in their communities, and saying, no, we're not going to fund this, it's not allowed here. The work of AI marketing as an inevitable, all-encompassing thing is probably one of the scarier things to me.
Advertisement Voice
Hiring used to be all about resumes: where someone went to school, what companies they worked at. But now more teams are realizing that doesn't always tell you who can actually do the job. That's the thing I've realized: you need people who are adaptable, and sometimes their school or work experience doesn't actually feel that relevant. It's more about what they're able to do when they're inside your company. That's why skills-based hiring is becoming such a big trend. You end up with people who can actually perform, not just look good on paper. Well, if you're an employer who's adopted skills-based hiring, the best way to ensure that your applicants actually have the right skills is ZipRecruiter. ZipRecruiter recommends smart screening questions to help you hone in on that perfect match for your role. And right now you can try it free at ziprecruiter.com/404media. What makes it work is how efficient it is. ZipRecruiter's matching technology finds qualified candidates quickly, so you're not wasting time digging through irrelevant applications. You can add screening questions directly to your job post to filter for the right skills, and even see which candidates are recently active so you can prioritize people who are ready to move. There's a reason ZipRecruiter is the number one rated hiring site on G2. Let ZipRecruiter help you find amazing candidates with the skills you seek. Four out of five employers who post on ZipRecruiter get a quality candidate within the first day, and now you can try it for free at ziprecruiter.com/404media. That's ziprecruiter.com/404media. Meet your match on ZipRecruiter. When you start something new, whether it's a business, a side project, or even this podcast, you suddenly have to do everything. You're the product person, the marketer, the designer, customer support. Maybe you're even fulfilling orders, all at once. And honestly, that's the part no one really prepares you for.
When we first started, it felt like every day there were 10 new decisions to make and no clear system to manage any of it. That's why having the right platform matters. For millions of businesses, including ours, that platform is Shopify. Shopify is the commerce platform behind millions of businesses worldwide and about 10% of all e-commerce in the US, from huge brands to people just getting started. You can build a professional storefront with ready-to-use templates that match your brand, launch email and social campaigns so people actually discover what you're selling, and manage everything, inventory, payments, shipping, analytics, all in one place. Plus, if you ever get stuck, Shopify has award-winning 24/7 support. It's basically like having a business partner built into the platform, helping you go from idea to something real. Start your business today with the industry's best business partner, Shopify, and start selling. Sign up for your $1 per month trial today at shopify.com/media. Go to shopify.com/media. That's shopify.com/media.
Host (Sam, 404 Media)
You were on the AI Inside podcast with Jason Howell and Jeff Jarvis recently, I think it was a couple months ago. Maybe it wasn't that recent; it was right after the book came out. But you talked about something there, and I was just like, I'm going to do some more historical research and catch up on what you've been up to before we talk today.
Alex Hanna
Appreciate it. Some people don't even read the book.
Host (Sam, 404 Media)
They should, for many reasons, but especially if they're going to talk to you. But yeah, I was pleasantly surprised to hear that one of the things talked about on that show was a story that I wrote in 2024 about Bards and Sages. I think that's an interesting micro example of AI being talked about and then being pushed back against, of it being used in a way that just runs over everything else, and why you might say, oh, we don't want this, it's destroying things that we actually like and value. So in that story, if people aren't familiar, Bards and Sages was, it's now defunct, which we'll get into, a small indie press. The quote from the founder that they pull on AI Inside is: "The problem with AI is the people who use AI. They don't respect the written word." This is the founder of Bards and Sages talking about shutting down their company, their beloved indie press. "They're people who think that their ideas are more important than the actual craft of writing. So they churn out all these ideas and enter their idea prompts and think the output is the story. But they never bothered to learn the craft of writing." Which, out of context, is kind of pretentious in itself. When someone starts talking about the craft of blank, I'm kind of like, okay, what are we actually talking about? But you correctly noted during that conversation that this is out of context, and it's not just some bitter AI hater. This is someone who does speculative fiction and role-playing games, who shuttered their publisher after 22 years, and the final straw was this influx of AI-generated submissions. So I would love to touch on this idea, because it's something that we still hear all the time: that AI will democratize art and writing. That people who can't paint will suddenly be given the gift of art by using Midjourney.
And people who can't write, it's like everyone should be, you know, given the ability to write at the same level as everyone else. And you should let your ideas be democratized by AI. So let's unpack that a little bit.
Emily Bender
Yeah, I have a couple of stories there. First of all, I am now allergic to this word "democratize," because democracy means shared governance, right? Sharing power. And that's not what's happening here at all. But also, if you really wanted to make art broadly accessible as an activity, then you would take action in society so that people had leisure time to develop artistic skill and to follow their own artistic passions, which is also not what's happening here. And the very recent story that I want to tell is that my art form of choice is photography. I am not very good at creating images with my own hands. I did take a cartooning class in high school, which is now decades ago. But I had an idea for a cartoon a couple nights ago, so I sketched it out and posted it on Bluesky and Mastodon today. I can give you the link for your show notes if you want. But really, the art is awful, and it has the benefit of being clearly done by a person, because that wouldn't have come out of a machine. But somebody on Mastodon said, oh, well, you should feed it into one of these systems so you get a better version.
Host (Sam, 404 Media)
Mastodon. Average Mastodon.
Alex Hanna
I know. Average Mastodon experience. Yeah.
Emily Bender
But I think that sort of speaks to this larger thing of: we can't be bad at art or writing. That's not good enough. That's not okay. I think we see this in education too, where students feel like they have to turn in something polished, so they turn to ChatGPT or whatever, and the universities are, like, buying subscriptions, which is ridiculous, so that what they turn in looks more polished. And they are missing out on the chance to actually hone their own voice and learn the craft of writing: how to take your ideas and turn them into something that is persuasive or enjoyable or whatever kind of writing you're doing, and doing it in a way that is not aiming towards this sort of LinkedIn corporate mean, but actually respects the voice and perspective and experience of the writer.
Alex Hanna
Yeah, and I just want to add on to that. Again, I'm going to bring back being on this panel with Carmen, and also Umair Kazi from the Authors Guild and Vauhini Vara, who is another author and technology reporter. And Carmen was talking about the experience of teaching writing, and she teaches at a very prestigious writing workshop, and about the craft of writing. Sorry, I'm going to use "craft." I actually like the word craft. I both hate.
Emily Bender
That's okay.
Host (Sam, 404 Media)
I'm with the word craft.
Alex Hanna
I both hate it, right. I both like the word craft, because I think it describes certain things, but there's also an essay from a Palestinian writer that's called, like, "craft is a lie." So anyways, love-hate relationship. But so, yeah. What she was talking about is basically, like, yeah, it's not democratizing this. If you're learning how to write, you have to go through the pain of writing. You have a sense of taste, and then you cannot match that sense of taste until you practice quite a lot. And I'm looking at Emily's cat that's in the frame. So basically, it takes time. It takes practice. And the thing that is very upsetting to me is when a lot of these folks use disability as an example. It's like, oh well, disabled people now can do this and now can create. I'm like, disabled people have been creating art for millennia. I am autistic, I have ADHD, and I'm like, there are strategies, for sure, and they're going to be different for every different type of person. But "democratizing" is not going to get you there. Cultivating a craft, if you're serious about it, people have been doing that for so long, and it is about developing those things that are unique, and understanding what about you is going to make your voice and your style and your expression really individual to you. The second part of it is, don't you want to connect to somebody through this really unique experience? Isn't that the idea of art? It's not to go viral on whatever. You are then moving from art to the views, and whatever those views mean to you. Content. Yeah, it's a move to content. The word "content" just melts my brain every time I hear it.
Host (Sam, 404 Media)
No, I'm with you. Yeah. And those are all such great points, and they really get to the heart of why the so-called democratization of it is such a weird argument. To me, the point of being bad at things is that you're working through it, and you write about this in The AI Con also: the critical thinking required to get to the point where you want to be, and it might not even be where you thought you were going to go. And I think as writers we understand that, because that's a huge part of the process. For me, part of the process is sitting down, and I don't even really want to read or be influenced by a lot of other similar writing. If I'm going to sit down and seriously write something that I want to be original, I need to have it come out of my brain raw first, and then go in and draw other inspiration. But yeah, it misses the point of art, of craft, to bring it back, to have this kind of perfect, polished. It's not even perfect, it's shitty. This mid output that will get an A on a test, or that will get you a bunch of views on Twitter or YouTube or whatever, that just ends up being slop. I think it's also part of the experience. And, in my most conspiratorial mode, when I go somewhere really dark about AI in general, I feel like the isolation is the point: to be very isolating and to kick the legs out from under that critical thinking aspect of everything. They, meaning the people who are in power, the authoritarians and fascists, do not want us thinking critically about art, about anything. And they want us to be very isolated and alone in that experience. So, like you said, it's not to be the light for someone else. They want you over here talking to ChatGPT endlessly about your own thoughts and delusions.
Alex Hannah
So, yeah, I think one could go very Foucauldian with it and really think about, you know, why do schools resemble prisons, and...
Host (possibly Sam or Joseph from Four4Media)
Oh, yeah, I mean, it's a different podcast.
Alex Hannah
Yeah, yeah. I mean, we could. I'm not going to go down that rabbit hole. But it is very isolating, and I think their answer to isolation is more engagement with their product. And there are two quotes that come to mind. The one that I used to use in the talks about this book, and I think I still use it, is a quote from Mira Murati, who used to be CTO of OpenAI, where she is literally on stage at Dartmouth's engineering school and she talks about creative jobs being lost, and she says, well, but maybe those jobs shouldn't have existed in the first place. And so it's very indicative of the worldview. And then the other one is Zuckerberg, and I think this was last year, where he said something like, well, the average human has three friends and we actually have the capacity for something like 15. And I have no idea where these numbers he's citing are coming from.
Emily Bender
Not capacity, but demand, right?
Alex Hannah
Yeah. And so, like, AI can fill that void or whatever. And the worldview is so bizarre. Whether it is text as content or relationships as transactional, it really is showing this particular view, both as, I think, a particular sort of technologist and just a really terrible human being. And that's shaped either by, I don't know if they were born fried like that, or it happened through continual interaction with the capitalist tech machine. It's probably a little of A, a little of B. But it's just really upsetting to hear them say that in that context.
Emily Bender
M.J. Crockett says some really lovely things about the idea of automating empathy, because it's always sold as, basically, it's too much work to do empathy for other people, so let's automate that to relieve ourselves. And, you know, Crockett's point is: what a horrible way to look at people. Right? And what a horrible way to think that we don't benefit from being the empathizer and building those relationships. And yeah, it can be difficult, but it's valuable. And it's not just altruistic, that we're doing something for somebody else; actually being in that relationship and connection is super important.
Host (possibly Sam or Joseph from Four4Media)
Yeah. Yeah, for sure. And that's such a. I mean, empathy as weakness, as unnecessary: that is a far-right talking point. And I hear it more and more, and every time I hear it, the alarm bells ring louder, this idea that empathy is not necessary. And I think it also plays into the idea that AI hype is part of a dehumanization engine, and the role of dehumanization is very important in the project of authoritarianism and fascism. So it's being pushed in that way. I've mentioned this so many times in the last two weeks, ever since he said it, but when Sam Altman was talking about how training AI was like raising babies or something.
Alex Hannah
Yeah.
Host (possibly Sam or Joseph from Four4Media)
And he was like, oh, well, humans need the totality of humanity, billions of people and, you know, millions of years of it. That all goes into making a human. And it's like, what the fuck do you think goes into making AI? Your LLMs are based on those same millions of years.
Emily Bender
And on top of that, and I think this is where you're going by bringing this up: the idea that we should look at people in terms of what it costs to raise them, in terms of energy, up to age 20. And he says you're useless until you're 20. Which, feel bad for his kid, who will be 20 someday. But also, people aren't meant to be useful. People are people. And the thing about human rights and human dignity is that we are valuable because we exist, period. And we don't have to justify our existence.
Alex Hannah
Yeah. Paris Marx had this piece on his blog called "Sam Altman's Anti-Human Worldview." And the subhead was something like, the OpenAI CEO downgrades humanity in pursuit of a world merged with computers. There are lots of problems with the concept of humanism, and many folks have written about this, and we talk about it in the book: basically, humanism has left out quite a lot of people, you know, people in the majority world who have been seen as not human. And at the same time, the far right has just completely jettisoned any notion of the human, because they've been very fine with saying, well, it's okay if, you know, we have machines. But it's definitely a thought that the only humans who really matter are the people who create value. And that's the richest 500 people in the world, or whatever. It's really dark.
Host (possibly Sam or Joseph from Four4Media)
Yeah.
Emily Bender
And just to add quickly, Altman's been at this for a while. I coined the phrase "stochastic parrots" in the process of writing that paper with my co-authors in late 2020. And sometime after ChatGPT was released, so sometime after November 30, 2022, Altman tweets, "I am a stochastic parrot, and so are you." And he's basically saying, I need to raise up the synthetic text extruding machine to something that I can call the equivalent of humans. And the only way to do that is to basically view everyone around me, and ostensibly also myself, as nothing more than a synthetic text extruding machine. And sometimes people will describe that as, I have Twitter beef with Sam Altman. I have no reason to believe that Sam Altman knows who I am, but I do have quite a bit of beef with how he uses my phrase.
Host (possibly Sam or Joseph from Four4Media)
Fair enough. Yeah. And that idea that humans are equivalent to machines and machines are equivalent to humans, that two-sided coin is being promoted, and people are attempting to normalize it. I think I see it more and more. And that's another one that freaks me out, because I think people think they're being smart or edgy when they're like, oh, people are just machines. It's like, you're doing the work of the people who want you to say this kind of shit. So just think a little harder about what you're saying. Humans aren't machines. And the people who are invested in you being a machine want you to say that you are just a machine. It's something that I think people need more defenses up about, perhaps.
Alex Hannah
Yeah.
Host (possibly Sam or Joseph from Four4Media)
Yeah. So I'm curious, and this is maybe not to put you on the spot too much: is there anything that you've changed your minds about over the years? Is there anything that you look back on and say, oh, my thinking has evolved? Or is there anything that you look back on and say, oh, I was exactly fucking right about that?
Alex Hannah
I mean there's, there's a lot of that.
Host (possibly Sam or Joseph from Four4Media)
Yeah, yeah.
Alex Hannah
I mean, it's not, whatever, it's not exciting to say, well, I was Cassandra, and now look at this whole edifice. That's unexciting because it's not staking out a positive vision. I will say, and this might be some place where Emily and I have some daylight: if there's a highly constrained problem, particularly if you're using language models, or large language models, as a pre-training device for a very well evaluated problem, there might be some places where there's a use. But it does not absolve basically any of the training data theft, any of the energy consumption, all these other problems that are baked into data center and large language model construction. And so here I'm a little bit more convinced by some uses of large language models. There's an organization I sit on the board of called the Human Rights Data Analysis Group. They have a very constrained problem where they're taking unstructured police reports and basically doing slot-filling exercises with them, and they are evaluating the accuracy of that. But I will say I still don't quite sit well with it, just because of everything else that had to happen to get to that place. So if there's anything that is that well evaluated, possibly. But otherwise, a lot of things are confirming what we basically thought would happen. And, Emily probably doesn't like to take the prediction path, but I'm more fine with it. One of them is, I think in chapter three of the book, we ask: are there going to be ads in ChatGPT? Because that's the only thing that's been able to be monetized on the web. And lo and behold, we see advertising strategies from everybody. But I'm curious what Emily has to say on this.
Emily Bender
Yeah, I think in terms of your use case, Alex, certainly. And speaking as a language technologist here: the underlying technology of representing words in terms of a vector space, which words co-occur with other words, that's very powerful and can be very useful. It is sad to me that the way this has become usable technology for many people across society is mediated by these large companies, with all of the liabilities that you're talking about, plus no transparency about the training data, plus the fact that the models could change: you may have evaluated a model as carefully as possible, and then the next time you hit the API, you're using a different one.
Alex Hannah
On those concerns, I will say, in their use case they're basically using open-weights models, so they have full control over all of that.
Emily Bender
That last bit, about the model changing, is better with open weights, then. In terms of where I've changed my position, I'd like to go back to the stochastic parrots paper, actually, which was written in late 2020. It was a survey paper, right, talking about the various problems that we saw with the push to make language models ever larger. And there are two things that I think we really missed in that paper. The first is that we did not connect to what was already known, even then, about the data work that goes into creating these systems. And that has been an enormous issue; it's only gotten bigger. So that's something that's missing from that paper. And the other thing is that in the section where we actually introduce the phrase "stochastic parrots" and talk about synthetic text, we thought we were a little bit on thin ice there, because it seemed unlikely in September, October 2020 that many people would get excited about the idea of synthetic text. And I continue to think it's a terrible idea, and the various downstream harms, including the ones we talked about in that paper, are very much happening. But I was very wrong about how much people would get excited about it.
Host (possibly Sam or Joseph from Four4Media)
That's wild. That's actually really wild to think back on: that at the time, you thought people would not be that hyped about something like ChatGPT, which then came out, what, two years later? Wow. It's been a long six years. Okay, I will give you one last question and then I will cut you loose. I'm sorry, I told you it would be a light jaunt, and I feel like it's been a hard sprint. But it's been so fun.
Alex Hannah
It's totally fine. We're used to it. And when you like the conversation, like this one, it's fine. It goes fast. Yeah.
Host (possibly Sam or Joseph from Four4Media)
So, my last question is a fun one: what makes you hopeful? And we touched on this a little bit earlier. What makes you hopeful about this space, as, again, people who are in it and immersed in it every day?
Emily Bender
Yeah. To me, it's what I'm seeing coming from young people. There's a Medium post, I think on Emily Tucker's Medium, written by a high school student, which basically talks about how important it is to denormalize all the surveillance. I was just listening to a wonderful episode of Paris Marx's podcast about the Luddite Club, a bunch of students who were intentionally disconnecting from tech to spend time together. And I think that every time we see pushback of any kind, that enables more pushback. Just as you were describing earlier, Sam: people feel empowered to push Flock out of their communities the more they know about what it is, the more they see that other communities have done this.
Alex Hannah
Yeah, I mean, in the same register, I think the resistance, individual resistance, is huge. This panel I keep coming back to was packed: 250 people, with people at the door, at this writing and teaching conference. And we're also seeing young people turn to analog, to write by hand, or write disconnected from any of these tools, or calling out slop. I am nominally on Instagram and TikTok, more than nominally, really, and I'm on those platforms enough to see that when it's clear something is slop, people say, I don't want to engage with this. There's that, there's the collective action, there's groups that are forming. The data center resistance that's happened is kind of wild; it's been so broad, across the political spectrum. My mom lives in rural Ohio, and there are people in rural Ohio who are going to meetings and saying, why are we letting this into our community? I'm worried about my water, my electricity, I'm worried about pollution, everything. And in deep red places. So that's very heartening. And I think the thing that's heartening about it all is that there are now connections being made. There's the environmental connection, the labor connection, the consumer protection connection. We're talking about all these vagaries of capitalism, but tech is really the wedge you get in there with, and you're making different connections about who the power brokers are, what they want, how they view humans, how they view the natural environment, how they view the workforce. People are making those connections, and it becomes the basis for broad organizing coalitions. And that's fantastic.
Host (possibly Sam or Joseph from Four4Media)
Yeah, awesome. I completely agree. And that's also what's giving me hope: seeing, like you said, even in unexpected places, people coming together and saying, hey, this is not cool.
Emily Bender
At the risk of being cringe, I think I want to add in a shout-out for 404 Media.
Host (possibly Sam or Joseph from Four4Media)
Oh, stop. That's very kind.
Emily Bender
The success that you all have had. I mean, you took a great leap starting that. And the fact that the kind of reporting you're doing is sustainable really shows that there's a community that cares.
Host (possibly Sam or Joseph from Four4Media)
And I think that's important.
Alex Hannah
Yeah, 100%.
Host (possibly Sam or Joseph from Four4Media)
Thank you for saying that. Yeah. And the work that we do, it's a two-way street. We can't do this kind of work without people like you doing your work. So it goes both ways. Thank you. Okay, before we get into too much of a love fest, I will cut us off there. Thank you both so much for being here. I so appreciate your time, and I will play us out. As a reminder, 404 Media is a journalist-founded company supported by subscribers. If you wish to subscribe to 404 Media and directly support our work, please go to 404media.co. You'll get unlimited access to our articles and an ad-free version of this podcast. You'll also get to listen to the subscribers-only section, where we talk about a bonus story each week. This podcast is made in partnership with Kaleidoscope and Alyssa Midcalf. Another way to help us out is by leaving a five-star rating and review for the podcast. This has been 404 Media. We'll see you again next week.
Date: March 23, 2026
Hosts: 404 Media (Sam, Joseph)
Guests: Dr. Emily Bender (University of Washington), Dr. Alex Hanna (Distributed AI Research Institute)
Main Theme:
The episode features a rich conversation with Dr. Emily Bender and Dr. Alex Hanna, co-authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. The guests and hosts explore how “artificial intelligence” is leveraged as a marketing ploy, the harms and silliness of AI hype, why ridicule is a tool for resistance, and what it means to create space for critical, human-centered discussion about automation technologies.
On "AI" as a buzzword:
“It was basically a way to say, give us money. And it's doing the same thing now.”
— Emily Bender, 16:16
On ridicule as a tool:
“Ridicule as praxis... It's empowering to reaffirm the value of our skills and expertise... and it can be flat-out fun to find the silliest excesses of the hype machine and deflate it.”
— (Host quoting the book), 10:47
On dehumanization:
“People aren't meant to be useful. People are people. And... we are valuable because we exist, period. And we don't have to justify our existence.”
— Emily Bender, 00:00 & 36:27
On the false promise of "democratized" art:
“I'm now allergic to this word 'democratized,' because democracy means shared governance.... that's not what's happening here.”
— Emily Bender, 26:18
On individual resistance becoming collective:
“Every time we see pushback of any kind, like that enables more pushback.”
— Emily Bender, 44:47
On community:
“By basically planting a flag and doing our ridiculous praxis, we've... created some ground for people to come together and meet.”
— Emily Bender, 14:25
This episode is essential for anyone wanting to understand the cultural forces shaping “AI,” resist the inevitability narrative, and find community in skepticism. It reminds us that we don’t have to accept technological systems as they’re sold to us—and that laughter, resistance, and community are vital tools for a better future.