Loading summary
A
Lemonade.
B
I'll be starring in the new Walt Disney picture, Tron: Ares. Tron: Ares, in theaters October 10th. I let Disney scan my body for that movie. Did I mess up?
A
Under what rights are they allowed to use it?
B
I didn't read the contract then.
A
Yeah, you did. Sorry.
B
As a future licensed trademark of the Walt Disney Corporation, I was curious to learn how artificial intelligence will transform my humanity into shareholder value. So I decided to talk with Karen Hao, author of the new book Empire of AI. It explores how companies like OpenAI, Microsoft, and Google are behaving like modern empires, seizing and extracting natural resources, exploiting labor, and justifying it all as a civilizing mission to modernize the world. They basically do everything empires do except bomb brown countries. At least not yet. Now, this sounds bad, but it also feels, I don't know, just inevitable. You know, anytime new technology comes along, most of us just shrug our shoulders. We're just like, all right, I guess I'm letting the Burger King app scan my iris. But according to Karen, it doesn't have to go down like this. With more democratic control, we could focus AI development on things we actually need, like health care, education, clean air, and most importantly, in my opinion: can we please get some robot butlers? Hurry. Right away. No delays. Make your daddy glad you have had such a lad. You wrote this book called Empire of AI, but to get more specific, I want to talk about the three major camps that exist with AI. I feel like you got the AI optimists: it's here, it's coming, this is a revolution. You got the AI skeptics.
A
Yeah.
B
This is the end. Then you have the AI haters. This is overblown. This is stupid. This is dumb. Stop. Which camp are you in?
A
Kind of. Not any of the three.
B
Okay.
A
I would say I'm in the AI accountability camp, which recognizes that the AI industry has consolidated an extraordinary amount of power. And that's a nod to the title of my book, where I call these AI companies new forms of empire because of how much economic and political power they have concentrated. And I'm not a skeptic, in that I do think there are many types of AI technologies that can be profoundly beneficial. But the type that Silicon Valley has decided to invest in, which is these colossal models that they brand as general everything machines, is not the path that we should be focused on.
B
I want to get into this idea of AI as an empire.
A
Yeah.
B
But before we get there, let's talk about what you're mentioning here a little bit, which is that there are a lot of people in the media and on the Internet, and specifically a lot of dudes, talking very confidently about what AI even is and what these machines are doing.
A
Yeah.
B
What can we confidently say? What can we be skeptical about? And how big is the, I don't know, delta? Because that's where I'm at. I think there is just a huge I-don't-know, and that delta is, like, massive. That's literally why the show is called Hasan Minhaj Doesn't Know. Like, there's a lot I don't know. So you tell me where we're at, because you seem to be very measured in your approach with this.
A
Yeah. So I think we kind of have to do a little bit of a history lesson.
B
Yeah.
A
So apologies in advance, but AI has been around for a really, really long time. It was founded as a research field in 1956 at Dartmouth College by a group of professors and researchers. And the original intent was to recreate human intelligence in computers. Also, when the term artificial intelligence was coined, it was specifically coined by an assistant professor at Dartmouth named John McCarthy, who years later said, I invented this term because I needed money. So they were trying to get funding for work that they were actually already doing under a different name. And I think this is a really key point for understanding the craziness of AI discourse today. First of all, there's just so much anthropomorphization of this technology because of that original pegging of the field to the idea of intelligence. And the problem with doing that is there's no scientific consensus around what human intelligence is. So if you're trying to recreate that in computers, you're going to run into a lot of problems: how do you measure whether you've done it? What is the basis of our intelligence, and therefore how do you recreate it? What should it look like? Who should it serve?
B
Yeah, I love that. And we were chatting before this interview started, I was with the team, and we were talking about this idea of when technology comes out, it is always given this human interface.
A
Yes.
B
And the joke that I had was, I don't know if you remember, in the 80s and 90s there were all these movies about, like, if you turn the TV on, the TV's going to suck you in. Or Tron: hey, you're going to enter the mainframe.
A
In.
B
In technology, in humans, there's always this thing of, oh, they have to be like us. When in reality you're like, hey, this is just a bunch of wires and copper and microchips. It is not a human being. It doesn't have a soul. So I take your point that there is no consensus around what it means to be an intelligent human, or what human intelligence even means. I'll tell you this: there's not a lot of intelligent humans I know. So I totally understand why you're like, there is no consensus. Yeah, yeah, that's a wild joke. But correct me if I'm wrong, if I'm hearing this right, you're saying a lot of times this idea of human intelligence is given the framing of, oh, it beat a human being at chess.
A
Right. Exactly.
B
It can model in Excel faster than a human being.
A
Exactly.
B
But does that mean, is that the totality of human intelligence?
A
Exactly.
B
Am I hearing that right?
A
Yeah, exactly. It's like, okay, it can do certain tasks better than humans, but what's the significance of that? And ultimately, what OpenAI as a company tries to do is create this rhetoric where, when you recreate human intelligence, which apparently they are going to do soon, or so they say, it's somehow going to have, you know, profound utopic consequences. We are going to have so much abundance in the world, so much prosperity in the world, that this intelligence is going to help us solve cancer, it's going to help us solve climate change. And that is based in, you know, a fiction of how this technology actually works.
B
The book is obviously called Empire of AI. Why is AI like an empire? How is it like an empire? And like most empires, when will it start bombing brown countries?
A
So in the history of European colonialism, empires of old had several features to them. They laid claim to resources that were not their own, but they redesigned the rules to suggest that they were their own. They exploited a lot of labor. They didn't pay that labor, or they paid that labor very little. And empires were always in competition with each other. So the British Empire was always saying, we are better than the Dutch Empire. And the French Empire was like, we are better than the British Empire. So there was this concept that there were evil empires and there were good empires. And the reason why the good empires had to be empires is because they needed to take down the evil empire. They needed to be strong. So that's why they're extracting all these resources. They're exploiting all this labor to fortify themselves. And they're doing it ultimately under the civilizing mission of, we are doing this for the benefit of all of humanity. We are actually bringing religion to all of these heathens and giving them an opportunity to access heaven instead of hell. Empires of AI not only literally use this rhetoric now, but they check off all of the characteristics of empire building. They lay claim to resources like the intellectual property of artists, creators, writers, and then they redesign the rules to say, well, this is actually fair use to train our AI models on this. And then they exploit a lot of labor, both in that they contract workers in primarily global south countries, going to the brown countries and paying them very, very little money to take toxic, abusive, sexist, racist speech out of their models, and also in the sense that they are ultimately creating labor-automating machines. So the definition that OpenAI officially uses for AGI is highly autonomous systems that outperform humans at most economically valuable work.
B
Got it. Okay. So things that drive shareholder value.
A
Exactly.
B
Sure.
A
And so you can imagine if a worker is going to the bargaining table and sitting across from a CEO, and both of them think, wait a minute, if I resist, or wait a minute, if that guy resists, I can just hire an AI instead, that worker can't bargain for rights anymore. So that's the labor exploitation that's happening under empires of AI. And they have this aggressive competition, where OpenAI frames itself as, we need to be the good guy so bad guys can't create AGI.
B
Yeah.
A
All under a civilizing mission that they're doing it for the benefit of humanity.
B
Sounds so familiar. You know, I'm 39 years old, so I heard this with Web 1.0 and the Googles and all that sort of stuff. This idea of, we're going to do good in the world.
A
Yeah.
B
There's this quote by Sam Altman from 2013 that I want to take a look at: the most successful founders do not set out to create companies, they are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so. Is Sam Altman forming a cult?
A
I've started increasingly thinking that the best way to actually understand the AI world is Dune.
B
Okay.
A
Where you create this mythology. So Paul Atreides, his mom, right, she creates this mythology around him being the coming of the Messiah. And most people who hear this myth for the first time don't realize that it was handcrafted to control the people and make sure that Paul would ultimately have power. And eventually, as he steps into this, he starts to forget that the myth was originally a creation, and he starts to believe in it himself. Right. And I think this is exactly what's happening in the AI world, with the kind of rhetoric they use where they talk about building digital gods and digital demons, literally. Someone created a mythology around the extraordinary power of these technologies and the need to usher it in carefully, very conveniently, by the people that created that mythology. And now we are at a place where essentially everyone that exists in this ecosystem in Silicon Valley has forgotten, or has come to believe, or maybe always believed, that this is their sole purpose and this is what they need to do for the world.
B
One of the things you talk about in the book is that he talks about Napoleon constantly. And Sam sometimes, oftentimes according to his critics, lies, by the way. Sam, if you want to come on the show, love to have you on the show. But what is that? What's going on with that? What is this obsession with, you're on Slack, but you're somehow quoting Napoleon and Marcus Aurelius and Socrates and the ancient kings of old. What is that?
A
I mean, you know, Sam Altman is a product of Silicon Valley, and we've seen this character before, right? Mark Zuckerberg is also obsessed with the emperors of old. And they literally colloquially say to one another in these spaces, how do you build this empire? We are building empire. Sam Altman has said, when he was president of Y Combinator, which was his previous job, and YC is one of the most prestigious startup accelerators in Silicon Valley, that the thing he was proudest of is that he built an empire. So I think they look up to these historical figures, not just for the empires they built, or in Napoleon's case failed to build, but for how they went about doing it. And the thing that Altman has said he really admired about Napoleon is his ability to fully understand what people want. And that is exactly how people describe Altman's superpower: he is really good at knowing what you want and then telling a story based on what you want that makes you really, really want a piece of the future he's selling.
B
You got to spend some time in OpenAI's offices. What's the tea? What's the C-suite gossip? What did you take away from that time?
A
So I was the first journalist to profile OpenAI. I embedded within the company for three days in August of 2019, back when pretty much no one had heard of OpenAI, pre-Covid.
B
It was another time.
A
Yeah, exactly. It was a different time. And my profile came out February 2020, so truly right on the cusp of us entering a different era. But at the time, OpenAI was founded as a nonprofit. It was founded on the principles of being totally transparent, doing the work of advancing AI without any commercial incentive, and open-sourcing it to everyone, being collaborative with everyone. By the time I got to the offices in August 2019, those were starting to change. OpenAI had restructured, so it nested a for-profit within the nonprofit. It got a billion dollars from Microsoft. And I noticed when I was at the company, wait a minute, they say that they're collaborative publicly, but they're telling me internally, we need to be number one, otherwise our mission does not work. And I thought, that's competitive. There's a tension here. And then they said, we are transparent, we're going to open-source everything. And then internally they were like, there are certain things that we cannot talk about, that you cannot see. And I was like, wait a minute, that's really secretive.
B
Oh, got it. So there's a dissonance here. Yeah, yeah.
A
So what they're saying publicly to accumulate a lot of goodwill and to grease the wheels for a lot of accumulation of capital is actually not how they're operating behind closed doors.
B
Aren't all tech companies kind of like this, where they all kind of talk like they're UNICEF, like we're here for the global good, until there is a bag involved, or until they're strapped for cash and they need money or investors or more capital or market share?
A
Exactly. And that was what I realized: they had positioned themselves as anti-Silicon Valley, as a new form of tech organization that was going to do things better than the previous era of Silicon Valley. And then I realized, wait a minute, no, this is actually a continuation. And now that we fast forward all the way to present day, I mean, OpenAI is one of the most capitalistic companies in Silicon Valley. They just raised $40 billion at a $300 billion valuation, which is the largest private tech fundraise in the history of Silicon Valley, and places them as one of the most valuable private startups in the history of Silicon Valley.
B
Obviously OpenAI is huge. ChatGPT is huge. Microsoft investing in them, huge. So they're now a player. They are here, and at least for the near future, they're here to stay. They're a huge company, clearly with a sizable market cap and huge investment, and there is now this talk about artificial intelligence and these AI companies creating commercial products.
A
Yeah.
B
What does that mean?
A
Honestly, what people should know is that means they're trying to get more of your data because they are trying to figure out how to make their products, their technology, so attractive that they can continue building them.
B
Don't they already have all my data? Facebook, Apple, Netflix, Google, you got everything.
A
That's what's wild, and that's why I think we've reached the moment where we can no longer talk about these as companies and have to talk about them as empires: the amount of data that they need has completely eclipsed the amount of data that social media companies took from us.
B
Really?
A
Yeah. So if you just look at Meta, which has also entered the AI race. I mean, Meta literally has 4 billion users' data from their previous era as a social media company, and they were using that to create their really lucrative ad-targeting algorithms. Right. But even then, the New York Times reported last year that Meta was having conversations about, we don't have enough data. We need to potentially buy Simon & Schuster. We need to potentially ignore all the data privacy rules that we set up after Cambridge Analytica. We need to potentially ignore all of the copyright rules and just acquire more and more and more. Because with the current repository of data that we have, the 4 billion users, we cannot outcompete OpenAI. Like, that is an order of magnitude, maybe multiple orders of magnitude, more data that we're talking about.
B
Data is always, you know, esoteric. And when I try to talk about this with people my age, my generation, even a generation younger, there is this acceptance of, like, hey, I clicked Accept on the iTunes user agreement, you got my data. Like, I never had privacy. Everything is compromised anyway, from the Nest camera that's letting me know who's dropping off what at my apartment, to every single one of my photos when I'm trying to upload a reel on Instagram. Access to all photos? Sure, I need to make this carousel dope. So for me, as a human being, I'm a husband, I am a father, I got two kids. I'm like, look, you got my data, and this is to make my world better. When will you give me the robot butlers? Yeah, you got all my data. When are the robot butlers getting here? I'm talking about the robots that are going to make my life better. Please cook my food. Please clean my baby's booty. We argue about the dishes. People come over, we got more dishes. Do the dishes, fold my laundry, cook the food. Like, why? I don't want another app on my rectangle of sadness. I want the robot butlers. Is that going to happen with these AGI and AI machines or not?
A
Totally not. And here's why. Here's why you should stop giving all of your data to these companies: you're ceding a lot of control and agency over your life without actually getting much in return. Now, there used to be a time, I think, when there was kind of a fair trade-off of, okay, I get a little bit more convenience, I get some kind of technology that I've never gotten before, I get to connect with my long-lost elementary school friend. But we have reached a point where these companies have gotten so much economic and political leverage, they're developing such a controlling influence over all spheres of society, including scientific production, including geopolitics, that they are reaching, or I believe have reached, an inflection point where they can start acting in their self-interest with basically no consequence. And originally the bargain of giving data to companies was that they will give you something in return. But these companies have reached empire status, where they don't actually have to give you anything in return anymore.
B
But what about, like, the way Grok can give me a demented photo of Bill Gates and Elon Musk having lunch? Like, isn't that an exchange? Or summarizing a very, very dense 89-page P&L report into something I can quickly make a decision on? Is that a fair exchange, in your mind?
A
Grok is a great example of how these companies operate, because in order to train Grok, Elon Musk set up a supercomputer called Colossus in the Memphis, Tennessee area and completely hijacked local democratic processes to put it up as quickly as possible and start powering it with unlicensed methane gas turbines that are now pummeling that area with huge amounts of air pollution. We're talking about black and brown communities. And so, yes, there are certainly interesting utilities that come out of these tools, and there are certainly people that actually benefit a lot from using these tools. But the supply chain of producing these tools has already illustrated to us the logic of what's happening here, which is that these companies don't actually care about preserving people's right to even clean air in order to ultimately produce something that they are trying to use to accumulate more data and get more money.
B
Got it. Do you mind if we back up a sec? Can you define what AGI is? Because there's a lot of people that are watching the show, that listen to the show, and it sounds like we're all talking about something different. What is AGI, and is it Ex Machina or not?
A
AGI is whatever the companies need it to be. If they want to sell you a convenient product, they are going to talk about AGI as the movie Her and say, this is going to make your life so amazing, it's an operating system for your life. If they want to talk to Congress to ward off regulation, AGI is suddenly this mythical object that will solve climate change and cure cancer. And so AGI morphs, and that's why no one really can say what AGI means, because it shapeshifts based on what the companies need it to be.
B
Yeah. That's the thing I keep seeing in different settings. It can be congressional testimony. It can be a Super Bowl commercial.
A
Yeah.
B
So AGI is going to cure cancer. AGI is going to solve climate change. Or, I mean, the one that struck me, I have older parents, they go, AGI is going to be able to look at your parents' blood work and identify exactly what's wrong with them.
A
Yeah.
B
And so for me, I'm like, that's awesome.
A
Exactly.
B
But at its core, is artificial general intelligence a machine that just takes complex data sets,
A
Yeah.
B
essentially unknown variables, and then just crunches out the answer to that data set? At its core, is it that?
A
That's what AI, at its core, is at the moment. At the moment.
B
Okay.
A
Which, the reason why I say that is because there are many different techniques that could be used to automate certain types of tasks that traditionally we think only humans could do. And it just so happens that right now we are in a realm where the technique is very much data-driven data processing.
B
Right. Are you more pro task-specific AI? Like, are you in alignment on, hey, I'm for a product if it's specifically about bloodwork AI: I just literally take everybody's blood work, you know, grandma's blood work at Kaiser, and I'm going to tell you, hey, she may have a likelihood for X or Y disease. And then are you arguing against this more general data-scraping AGI of, like, just give me everything and I'll tell you about it later?
A
Absolutely, that's exactly right. Like, these companies are trying to build everything machines. The problem with everything machines is that they can't actually do everything. They do some things for some people. Because time and time again, we've seen through the history of AI development that models have embedded biases based on the data that they're trained on, based on who gets to leave data on the Internet and who gets to shape these technologies. And so ultimately, when you position your product as an everything machine, people are going to be really confused and start using it for things that it's actually not that good at, and it could lead to a lot of harm. Like people asking ChatGPT to read their medical records. ChatGPT is not actually designed to be able to do that, because it's not 100% accurate 100% of the time. It's a probabilistic machine. And so the task-specific approach, not only is that better for consumers, in terms of it being super clear how you're supposed to use this AI model to make sure you get the maximum benefits from it.
B
Right.
A
It also is way better for developers to develop tools that work, because then there is a very well-scoped space in which they can test all of the different failure modes of this technology and continue shoring them up. You cannot test all the failure modes of an everything machine.
B
Right. So let's play devil's advocate here. If I'm arguing for the everything machine, what if I go, Karen, I hear you. I'm figuring it out. I'm iterating, as they say in Silicon Valley. I'm moving fast and I'm breaking stuff. But my North Star, my cardinal direction, is something good, and I do want to do this with good intent. What's your response to that? Is it, no, like, good intentions are the path to hell? What do you say to that?
A
I would say that we need to look at how AI is being developed right now and the harms that it's creating right now all around the world.
B
The real world consequences.
A
The real-world consequences, because those are the data points. That is our evidence for understanding what this technology is going to do for us in the future.
B
You know what's interesting is every empire has these things called sacrifice zones. You know, the British Empire obviously had India and Africa. Those were sacrifice zones. Here in America, we too have our sacrifice zones: iPhones made in China, Bengali kids making our Nikes. We're aware of this, and this has existed for a long time. Who are the invisible people of the AI empire that we're not seeing right now?
A
So in my book, I go to Kenya, I go to Chile, to Uruguay, to Colombia. And in Kenya and Colombia, for example, I was talking with workers that are contracted by these AI companies to do some of the worst work in the AI supply chain. So, the Kenyan workers: OpenAI went there at a moment in their history as a company where they realized, wait a minute, we need to start commercializing. And if we start putting models that can spew anything in the hands of users, it's not going to be a huge commercial success if it starts spewing a lot of hate speech. So we need to put a content moderation filter around it.
B
Right. And content moderators are, like, human beings that literally have to look at things as awful as child pornography and snuff, like, really bad stuff.
A
Yes, exactly.
B
So that it doesn't end up in your feed while you're texting in traffic.
A
And Kenya is that sacrifice zone, where it has long served as a backstop for the Internet of the global north. And so OpenAI shows up, they contract these workers, and they ask, hey, label all of this text from the worst parts of the Internet, and AI-generated text, where we prompted an AI model to imagine the worst text on the Internet. Read that day in and day out, label it into a detailed taxonomy where you have to say, is this sexual content? Is it sexual abuse content? Is it sexual abuse content that involves children? And those workers, like all content moderators, ended up psychologically devastated. And not only them, because these individuals are part of communities. There are people that depend on them. And I write about a man named Mophat Okinyi, whom I met, who was one of the Kenyan workers contracted by OpenAI, and whose personality completely changed. He was on the sexual content team, and his wife had no idea what was going on, because he had no way to tell her, oh, I'm reading sex content all day. ChatGPT hadn't come out yet. There was no conception of what this work was for. And one day she texts him and says, I'd like fish for dinner. He goes out, buys three fish: one for him, one for her, one for her daughter, his stepdaughter, whom he loved and adored and called his baby girl. And when he shows up back home, they've left completely. All their stuff is gone. And his wife texts him, you've changed. I don't know the man you are anymore. And she never comes back.
B
This is really heavy stuff. And what I took away from the book, and what you're talking about, really, is this very modern, updated, but classic critique of capitalism, and how the benefits, whether those be social or business profits, are not equally distributed. But then I got to thinking, I go, have the benefits of technology and these tech companies and these empires ever been equally distributed, right? Is this just the story of man? Sadly, I haven't been able to reconcile that. How have you processed all of this?
A
To me, I cite a book in the book called Power and Progress, which was written by two MIT economists, Daron Acemoglu and Simon Johnson. They just won the Nobel Prize in economics last year. And they say exactly this. Technology has been around for longer, but analyzing a thousand years of technology history, there is a consistent pattern that we see in every technology revolution: the elites are the ones that have the money, the influence, the power to actually rally enough resources around creating certain new technologies. But it's also created in their image, and consistently there's a lot of fallout that comes from that, where people who do not live like them, who do not look like them, end up being harmed. Either their jobs are lost, or worse, you know. But the thing that Silicon Valley will always tell you is that that is justification for why this technology revolution has to happen the same way. And to me, it's like, wait a minute. Most of these technology revolutions that have happened in the last thousand years were when we didn't have human rights in existence, we didn't have democracy, people didn't believe in their own agency and their right to self-determination. And this technology revolution is happening in an era where we have all those things. So we should want better, we should want more, and we should actually reinvent the way that technology revolutions happen, so that they don't just repeat all of the terrible things that happened in previous revolutions without any rights.
B
You're going to be doing the media rounds talking about this book, and you've probably already heard this in online discourse, but you're going to hear it as you do the rounds: Hey, Karen, I'm sorry, the genie is out of the bottle. I call it the genie-is-out-of-the-bottle paradox. Hey, guess what? If this company doesn't do it, another company will. If America doesn't do it, China will. Somebody is going to do it, so you better get on board and just give in. What do you say to that? What do you say to, hey, it's already happening, boomer, so get on board? I'm not saying you're a boomer. I get told that, but you know what I mean. I get told this all the time by techno-optimists.
A
Yeah.
B
Of like, it's happening.
A
Yeah.
B
So do you want to be.
A
Of course they're going to tell you that, because that's a feature of empire: they're made to feel inevitable.
B
Yeah.
A
And that is part of their power, their persuasive power: you can't stop it, it's an unstoppable force. But the thing is, every empire in history has fallen, because they're actually really weak at their foundations. And the way I think about how we can actually contain the empire is by thinking about the full supply chain of AI development. These companies, in order to do what they do, need resources from us. They need our data. They need the land, energy, and water to power their data centers. They need labor. They need talent, the AI researchers working within their labs. And they need consumers to buy their technologies, to deploy them into classrooms, into healthcare, into all these different spaces. All of these are what I like to think of as sites of democratic contestation. There are already movements happening, where artists are glazing their work when they put it up on the Internet in online portfolios, such that there's no difference to the naked eye, but when an AI model trains on it, it breaks the AI model apart. That's one form of resistance: if you're not going to ask for my consent, if you're not going to compensate me, you don't get this data for free. There are already worker strikes, like the Hollywood writers saying, we're not going to allow AI to be deployed in certain contexts; there need to be guidelines and conditions around when AI is and isn't deployed in our work. There are activists all around the world fighting back against data centers that are just landing in their communities. And by the way, these data centers often come in without any transparency. Meta built a data center in New Mexico under a shell company called Greater Kudu LLC, and it wasn't until the deal was done that they went, surprise, it's Meta. And so all of these residents are rising up, saying, we need more transparency.
We need you to guarantee that if you bring in a data center, you either give us jobs, or you commit to not using above a certain amount of water or a certain amount of energy, or you don't come at all. And so I think we have to remember that Silicon Valley has done a really good job of creating this culture where they make you feel like everything you own is actually what they own. But we have to remember that we actually own this data. We own these spaces. We have a right to elect officials who protect our life-sustaining water. And if everyone actually remembers that and asserts, hey, we want AI to be developed this way, we want it to be deployed this way, companies have to follow. They're ultimately businesses.
B
People think podcasting is just talking into a mic, but my brain doesn't shut off when I turn the mic off. So I knew I needed something to actually help me wind down. Not just knock me out, but help me wake up feeling like a functioning human being. And that is when I found Beam's Dream Powder. Dream is an all-natural sleep blend made with science-backed ingredients like reishi, magnesium, L-theanine, apigenin, and melatonin, all designed to help you fall asleep, stay asleep, and wake up refreshed. And unlike other sleep aids, there is no next-day grogginess. Just real, deep sleep that helps you actually feel good in the morning. Since adding Dream to my routine, I am sleeping through the night without tossing and turning. I wake up actually feeling refreshed, not groggy, not exhausted, just ready to take on the day. And I'm not the only one: 92% of users surveyed reported better sleep and waking up refreshed. Plus, for a limited time, Beam is giving my listeners their best offer yet: up to 40% off. Go to shopbeam.com/hasan and use code H-A-S-A-N at checkout. That's shop-B-E-A-M dot com slash hasan, and use code hasan for up to 40% off. This episode of Hasan Minhaj Doesn't Know is brought to you by Booking.com. Booking dot yeah. No matter how large, small, or demanding your family is, Booking.com will help you find the perfect place to stay. You'll have your pick of millions of vacation rentals and hotels across the United States of America. I am on record saying my parents are welcome to live with me eventually, but unfortunately they thought that meant joining us on vacations now. So my family is looking for options with an in-law suite for Mom and Dad. Also on our list: a backyard. My kids love the space to play, but mostly we can't trust them with ice cream inside. Finally, it needs to be pet friendly. My daughter's bunny, Maximo, wants to see the world and find himself. Yeah, I know it's a lot, but I'll compromise on the kitchen.
Let's be real, your boy can barely operate a microwave. And if our particular family can find our perfect stay on Booking.com, anyone can find exactly what you're booking for. Booking.com. Booking dot yeah. Book today on the site or in the app. Is there a central place where collective action can gather around a common set of human rights in the face of this AI revolution?
A
I think it's not necessarily a central place, more a distributed many, many places. You know, if you're a parent, you are a parent.
B
I am a parent.
A
If your school is implementing certain technologies that are going to affect your kids, build a parent group or a parent-teacher coalition and say, hey, let's actually talk about this before you start using facial recognition on my kid, before you start turning my kid into a QR code. Let's actually set some guidelines around what kinds of technologies we do and don't want to use. If you're going to your doctor's office, ask: what AI do you use, and can I opt out? And maybe get together with other patients, with the nurses in the office, and ask, can we create guardrails around that too? When you go to work, your job is almost definitely talking right now about how to adopt AI, what its AI policy is. Get together a group of coworkers, talk to your boss: let's have a meeting about this.
B
So this is really rubber-meets-the-road. And what's funny is, sometimes people go, hey, listen, collective action, that's a privilege, I've got to pay the bills. So let's actually talk about the bills. Your job. Will AI take my job? What's your stance on that? What's going to happen?
A
I think AI can absolutely take people's jobs, because of the way Silicon Valley has started pitching the technology to try to earn back all the money they're spending: they're going to executives and saying, we can make your workforce a lot cheaper by giving you these AI tools. But there was a really funny headline recently where a company declared, we have entered the AI era, and then fired a bunch of people to replace them with AI tools. And a few weeks later, they were like, oops, these AI tools are not good enough, can you please come back? So the reason AI is going to automate jobs is not always going to be because the AI tools are actually up to snuff. It's because people are putting the cart before the horse and getting rid of workers, pulled in by this allure that AI is the solution.
B
How should I think about it? Because I've heard two different versions, and it sounds like the story you just told me is simultaneously both: AI will take your job, and there will be corporate downsizing because of it. But then, simultaneously, it may create new jobs, because these systems, these AI models, are slightly or majorly flawed. So what is it? Is it a little bit of both?
A
It is going to be a little bit of both, because at the end of the day, these aren't actually everything machines. The companies have lists of economically valuable tasks that they are trying to design these systems to perform particularly well. In the book, I had access to a trove of documents listing the tasks they were trying to specialize these models in. And they try to target the most lucrative industries, entertainment, media, finance, healthcare, because those are the industries where they can show up to the executives who pay the big bucks. And so that is where these models might get really good. And these companies are investing a lot in automating coding, which is something AI is particularly good at; it's super computational. So there will be certain things that AI models will be technically competent at replacing a human in. There will be many other things they will not be, but that won't necessarily have a bearing on whether that person keeps their job anyway, because ultimately it's not actually AI taking your job, it's humans. It's an executive deciding that your job is now redundant.
B
Interesting. I hear this with the education system as well. There's this idea of, well, AI can code, it can write, it can summarize, and it can analyze complex data sets, so you might as well be illiterate. What do you say to that? For some reason, I firmly disagree, but what do you say to that? Is it complex?
A
I mean, yeah, no, I do firmly disagree. In order for democracy to function, I mean, I'm getting really high level here.
B
But no, no, let's get philosophical here.
A
In order for democracy to function, we need critical thinking skills, we need agency, we need to be able to be independent from the crutches that Silicon Valley is trying to sell us. And so ultimately, the best thing is for technology to be assistive to people, not to totally gouge out their brains.
B
Right. And unfortunately, as someone who writes about and covers stories like this for a living, do you see it frying our brains?
A
I do see it frying a lot of people's brains. But the thing that has been really amazing is that, at the same time, there is now more conversation than ever before about AI: whether it's good, whether it's bad, what we want out of it. I've been covering this since 2018, and this is the first time we are having actual global conversations about the ethics, in the zeitgeist. And so that is, I think, a sign, but it's going to take a lot of hard work to readjust the vehicle that's bulldozing its way in one direction.
B
But we're going to get there. In 2023, this conversation around AI really took center stage in my industry, around the WGA and SAG strikes. There were strikes, and then there were negotiations. How do you think the strike went, and how do you think it's played out since? What have you noticed that's good or bad about what went down?
A
It definitely showed that collective action is an extremely important mechanism to hold on to for demanding certain protections against AI. As for the specific details of what they negotiated, I couldn't say how it has actually held up under the test of time. But to me, it was really amazing that they got the executives to the negotiating table to actually put AI up for discussion. And that is something that every industry, any worker anywhere, can learn from.
B
I'll be starring in the new Walt Disney picture Tron: Ares. Tron: Ares, in theaters October 10th. I let Disney scan my body for that movie. Did I up?
A
Under what rights are they allowed to use it?
B
I didn't read the contract then.
A
Yeah, you did. Sorry.
B
I did ask for free tickets to Disneyland. That's all I did. And then I walked in.
A
I mean, you know, if it works for you, if that's a fair trade.
B
As I scrolled through the thing, I was like, well, is there just an AI thing that can summarize what I'm about to sign?
A
But, you know, I think a lot of people are starting to feel the way you're feeling in this moment: wait a minute, there are some things I did in the past that maybe I should reconsider how I do in the future. And I think that is exactly what's gonna help.
B
So what is the alternative? How should we look at the next 5, 10, 15, 20 years? While I'm, you know, still negotiating my lower back pain, I do have a modicum of sanity, for real. I think about this, like, the next 20, 25 years of my life. What are the alternatives to what we currently have, and what should we do?
A
Collective action, and also investing in different types of AI technologies. This specific paradigm of growth at all costs, scale at all costs, coming out of Silicon Valley with respect to how they develop AI models: we don't need to do that. There's an amazing organization called Climate Change AI, a nonprofit dedicated to putting out white papers and doing research on all the different AI tools that could be used to help fight the climate crisis. Pretty much all of their recommendations actually have nothing to do with generative AI. So, for example, they recommend optimization models to help better integrate renewable energy into the grid, because you need to be able to predict how much renewable energy generation there is going to be when the sun shines and when the wind blows, and then you need to figure out how to actually distribute that effectively among all the people demanding that energy. That's a problem AI is perfect for, and it's just one little piece of the general climate resiliency equation.
B
Right. And that's task specific AI.
A
That's task specific.
B
You are designed to do this.
A
Yes. And we've also seen, you know, the Nobel Prize awarded to the team that created AlphaFold at DeepMind. AlphaFold was also task-specific AI; it helped with predicting protein structures from their sequences, which is really critical for drug discovery and for understanding disease. So that was a really great advance in AI and healthcare that has nothing to do with generative AI. And I think we need to invest in more of these approaches by ultimately asking: what do we need as a society to live in a sustainable, equitable future? We need our rights, we need clean air, we need clean water, we need better healthcare, better education, we need to not have an environmental crisis. And then think about how we integrate any technology, not just AI, in service of that, rather than suddenly asking how we serve technology.
B
I have been thinking about that. Even collective action, that's important. But I've been thinking about how the government gets involved. And the toughest part with government, you know, is this: if you look at Congress and the Senate, it's a retirement home.
A
Yeah.
B
I mean, Chuck Grassley's in his 90s. I think he just maybe, fingers crossed, knows what iMessage is. How can our government officials hold any technology company accountable when you have an analog government trying to compete with an AI revolution?
A
We obviously have a huge vacuum of leadership at the top in the US right now. But the beautiful thing about democracy is you can also have leadership at the bottom. And we cannot actually wait around right now for policymakers and regulators to move.
B
So we gotta move. The scary thing that I think about all the time, when I read history, is that with any company that has been able to acquire exorbitant amounts of wealth and deliver returns for shareholders, there's always what's written legally. Here are the 12 rules, right? And they just find rule number 13, the one that Euro-steps
A
Yeah.
B
past what's legal. And once you add a new rule that's technically not illegal, you then conflate the legal with the ethical: hey, I'm not breaking the law, so it's totally fine.
A
Yeah.
B
You clearly see that in finance; it happens all the time. They are masters of understanding what is legally allowed, and of, let's just add an addendum or two that works around it. How do we get ahead of that? How do you play defense against that?
A
There are some really interesting case studies in history of collective action helping to do exactly that. Like when you talk about the fashion industry: there were some serious environmental and labor harms coming out of the fashion industry.
B
Right.
A
And none of it was illegal, but there was such a huge movement among consumers of, wait a minute, we don't want to buy clothes that are created in buildings that are collapsing on people and leaving them dead.
B
Right.
A
We want to buy sustainable, ethically sourced clothes, where workers are actually paid for the value they create. And it created entirely new markets for sustainable fashion, for ethically sourced fashion. So the solution at the time wasn't, no one wear any clothes. The solution was to shore up the supply chain and create enough pressure that new markets were born from the consumer demand. And I think there are many, many other examples of industries that have undergone that kind of transformation because people wanted better. They wanted better than the law.
B
Are you. Are you familiar with Cassandra from Greek mythology?
A
Yeah.
B
Okay, so for those of you who aren't aware, in Greek mythology, Cassandra was a prophet whose prophecies always came true but were never believed by the people. She would tell you what was going to happen, and it would fall on deaf ears. Do you feel like you are a Cassandra when it comes to AI?
A
No, actually, I've been amazed by how many people I talk to around the world who are like, oh, yeah, this is exactly what I'm feeling. And that has been amazing.
B
This has been an amazing conversation, Karen. I love chatting with you. Do you have any final thoughts you want to leave our audience with? This has been so rad. Thank you for doing your work.
A
Just that I'm your number one fan and everyone should continue watching Hasan Minhaj. Oh, thank you.
B
Thank you, Karen. Appreciate you being on the show.
A
Thank you so much for having me.
B
All right, Mom, you got a competitor right here. Sorry, Sima. This was so lovely.
A
Thank you. Thank you.
B
If you haven't subscribed to Lemonada Premium yet, now's the perfect time. Because guess what? You can listen completely ad free. Plus, you'll unlock exclusive bonus content, like Halle Berry on how to be a good partner during menopause, or Mehdi Hasan on the dumbing down of media. Clips you won't hear anywhere else. Just tap that subscribe button on Apple Podcasts or head to lemonadapremium.com to subscribe on any other app. That's lemonadapremium.com. Don't miss out.
Podcast: Hasan Minhaj Doesn't Know
Host: Hasan Minhaj (186k Films)
Guest: Karen Hao (author, Empire of AI)
Date: September 10, 2025
Hasan Minhaj sits down with journalist and author Karen Hao to dissect the realities and myths surrounding Artificial Intelligence—specifically, the impact of AI on jobs, power, and society. Using Karen’s new book Empire of AI as a touchstone, the conversation explores how giant AI companies mirror historical empires, the true state (and hype) of AGI, the labor and ethical consequences of AI development, and what regular people can do to reclaim agency.
Karen compares Silicon Valley’s mythology—especially at OpenAI—to the world-building in Dune:
“At some point, someone created a mythology around the extraordinary power of these technologies … now everyone in this ecosystem has come to believe that this is their sole purpose.” (10:43)
Sam Altman & Empire Building:
"Sam Altman has said ... the thing that I was proudest of is that I built an empire. … His superpower is he is really good at knowing what you want and then saying a story based on what you want." (12:30)
Hasan and Karen’s conversation decodes both the hype and the real risks around AI, revealing that the “empire” of artificial intelligence is not all-powerful or inevitable. Instead, resistance and reform are possible through local organizing, consumer activism, and demand for accountability. Far from being a dark prophecy, Karen’s perspective is hopeful—if society chooses to assert its own agency and values.
Final Thought (Karen):
“Everyone should continue watching Hasan Minhaj ... and remember that you actually own this data. We own these spaces. We have a right to elect officials that protect our life sustaining water. And if everyone actually remembers that and asserts hey, we want AI to be developed this way, we want it to be deployed this way, companies have to follow, they're ultimately businesses.” (56:19)