A
We are allowing the tech industry to consolidate this extraordinary degree of resources unlike anything ever before. I mean, we thought that they were already powerful during the social media era. In the AI era, the amount of resources and the amount of influence and domination that they now have is of a fundamentally different degree.
B
On this week's More to the Story, tech journalist Karen Hao sounds the alarm about the rising risks to our planet from the growth of artificial intelligence. Stay with us.
C
Today's episode is sponsored by Strawberry.me. Let's be honest. Are you happy with your job? Like, really happy? The unfortunate fact is that a huge number of people can't say yes to that. Far too many of us are stuck in a job we've outgrown or one we never wanted in the first place. But still, we stick it out and we give reasons, like, what if the next move is even worse? Or, I've already put years into this place. And maybe the most common one: isn't everyone kind of miserable at work? But there's a difference between reasons for staying and excuses for not leaving. It's time to get unstuck. It's time for Strawberry.me. They match you with a certified career coach who helps you go from where you are to where you actually want to be. Your coach helps you get clear on your goals, create a plan, build your confidence, and keeps you accountable along the way. So don't leave your career to chance. Take action and own your future with a professional coach in your corner. Go to Strawberry.me/future to claim a special offer. That's Strawberry.me/future.

Support comes from Dakota State University. What happens when you unite dynamic minds with one revolutionary vision? A rising force on the cutting edge of everything, from a little university on the prairie to a cyber powerhouse of the plains. We're proving the nation's best cyber can come from where you least expect it. Now we're molding the future of cyber, AI, and quantum computing one graduate at a time. See what Dakota State University is building next at dsucyber27.com.
B
Hey, this is Al. And I'm sure it is no surprise to you that President Trump doesn't like us very much. He called the press the enemy of the people. Credentialed journalists have been banned from press briefings just for asking tough questions. Trump personally sued news networks demanding billions. And now, at his urging, Congress has voted to gut all federal funding for public broadcasting. And I think I know why. I think we all do. It's because real journalism brings sunlight, scrutiny, accountability. When power feels threatened, it lashes out. And that tells you just how vital independent reporting is right now. Here at Reveal, we don't answer to billionaires or politicians or special interests. We only answer to you, our listeners. But we can't do this alone. Stand with us. Support fearless independent journalism that refuses to back down. Donate today. Just visit revealnews.org/fearless. Again, that's revealnews.org/fearless. Thanks.

This is More to the Story. I'm Al Letson. It's difficult these days to escape the reach of artificial intelligence. Maybe you've played around with it to answer a random question or relied on it at work to accomplish some routine task. Open social media and you'll find AI-generated memes everywhere. Even the President has shared fake videos created by AI. Much of the current AI boom is thanks to the advancements of OpenAI's ChatGPT. That's the conversational chatbot that can answer questions, generate images, summarize entire books. It'll do just about anything you throw at it. It's pretty remarkable and, honestly, a bit terrifying. Back in 2020, Karen Hao was the first journalist to profile OpenAI, when many in Silicon Valley laughed the company off. Today, no one's laughing. OpenAI is valued at an estimated half a trillion dollars. The entire industry is accelerating rapidly, and the Trump administration is actively loosening industry regulations. Earlier this year, Karen released a new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. In it, she argues that while some like to warn of AI's sci-fi-like threats, the real risks of artificial intelligence are already playing out today. Karen, how are you?
A
Good, how are you?
B
I'm good. Thank you for joining us today. I'm hoping that you can walk me back from the ledge, because I am so worried about my namesake, AI. So many times someone will write AI and I think they're saying Al, and no, dummy, it's AI. But no, I tend to use AI often. I'm dyslexic, so I tend to use OpenAI or ChatGPT to proofread, especially if I'm sending out emails or whatever. I just wanna make sure that all the dyslexia is taken out. And because I use it for those purposes, I see what it can do. And to be honest, it scares the hell out of me. So for me, when I work with ChatGPT, I am wondering if we are close to, for lack of better terms, if we are close to the Singularity, where everything changes and the world is no longer recognizable from what it used to be. And so that's, I think, the question that I'm thinking about a lot: are we nearing the Singularity, where everything is going to shift because of OpenAI?
A
It's such a good question, because there's so many different ways of answering it. One of the challenges of the AI discipline and the AI industry is that there's sort of a lack of definitional clarity about what the milestones are for AI progress. And part of that is because the idea of artificial intelligence, or recreating human intelligence in computers, runs into the fact that we don't have any scientific consensus around what human intelligence is, and then we don't really have any scientific consensus around what it would mean if we actually did accomplish this goal of fundamentally simulating it in digital technologies. And so, on one hand, if we were to define the Singularity as that kind of moment, then there's a raging debate within the AI research world about whether AI can actually even ever get there. And 75% of scientists who study this would actually say that current AI models and the existing techniques for advancing AI are probably not going to get us there, if AI can ever do that at all. But if you were to define the Singularity as just the technology having a kind of fundamental transformation on the way that we work and live and everything, I think that's already happened. Everyone is already grappling with the impacts of AI in various different ways on the multifaceted landscape of their lives.
B
You said something earlier that really struck me: there's no scientific consensus around human intelligence. Which I would argue, given the world we are in today, yeah, I don't know if human intelligence is such a thing anymore. But if we can't really judge human intelligence, if scientists, people who have more intelligence than me, cannot, you know, say specifically what human intelligence is, then I imagine it's near impossible to say what computer intelligence or artificial intelligence is.
A
There's a really dark history around attempts to quantify human intelligence. There's basically never been any endeavor to quantify or rank human intelligence without some kind of insidious motivation behind it. So, in general, yeah, this entire idea of recreating human intelligence is actually quite fraught. And also, one of the challenges that we're facing now is that the AI industry has become so resource rich that most of the AI researchers in the world now are bankrolled by the companies that are ultimately trying to just sell us their technologies. And there has become this distortion in the fundamental science that is coming out of these researchers in terms of understanding the capabilities and limitations of AI today. In the same way that you would imagine climate science would be deeply distorted if most climate scientists were bankrolled by the fossil fuel industry, you would just not get an accurate picture of the actual climate crisis. And so we are not actually getting an accurate picture of the capabilities of these systems and all of the different ways that they break down, because a lot of these companies now censor that kind of research or don't even allow that research to be resourced. So there's never any investigation along those lines.
B
You've been an AI insider for a while now. You've got a degree in mechanical engineering from MIT. When did you start reporting on AI? And are you surprised at how quickly the field has grown?
A
Yeah, I started reporting on AI in 2017, 2018. And in 2018, I took a job at MIT Technology Review to cover the fundamental AI research that was happening in academia and within corporate labs. And at that time, the AI industry really didn't exist yet. I mean, there were certainly efforts to commercialize the technology, but it was primarily back-office tasks. It was things like Google improving its search engine or Facebook improving its recommendation algorithm, but not actually consumer AI products where you could talk with the AI model or type directly into a chat for it to generate images. And I have been really surprised. I mean, it's so interesting, because I was the first journalist to profile OpenAI. I embedded within their office for three days in 2019, and then my profile published in early 2020. And at that time, it's hard to sort of explain to people today, OpenAI was sort of the laughingstock of the AI field and of the tech industry. Like, people did not take that company seriously, in part because the approach that they said that they were going to take to AI development, which is ultimately what has come to pass, was they were just going to take existing AI techniques and technologies and throw more data on it and use larger supercomputers to train it. And people didn't think that that would lead to the advancements that we see today. They thought, this is an intellectually lazy approach. That's not real research. We're not actually investing in real breakthroughs here. But in the end, it turns out that there are a lot of interesting things that happen when you do that. And because they took that approach, they were able to do it very, very quickly, far faster than anyone else could have imagined. So, absolutely, I would have never predicted that we would be where we are.
B
I want to talk a little bit about, specifically about OpenAI. So it started off as an altruistic organization with a unique structure created by Sam Altman. Can you talk me through that a little bit? Like, how did it start? What happened with the rise and fall and rise again of Sam Altman? That whole story?
A
Yeah. So OpenAI started as a nonprofit at the end of 2015, and it was co-founded by Elon Musk and Sam Altman. And the origin story for how the two of them came together is that Musk was deeply, deeply concerned about the fact that Google was starting to develop a monopoly on AI talent at the time. They had acquired some of the top AI researchers in the world, through the buying out of the three researchers who basically started the first AI industry revolution, as well as through the acquisition of DeepMind, the London-based AI lab. And Musk thought, if Google is going to have a controlling influence on this technology, and Google is a for-profit company, that could potentially lead to AI development going very sideways. And what Musk meant by it going very sideways is not that it would then be unsafe for consumers, that it might have bias and discrimination, that it might have huge environmental impacts. What he specifically meant was AI might develop consciousness and then turn against humans and destroy everyone. Which is a very sci-fi premise that has taken hold in a lot of parts of the AI industry. Altman, he was the president of Y Combinator at the time, and he is a very strategic person. He essentially starts cultivating this relationship with Musk, telling him that he agrees with all of these concerns. He also is worried about rogue AI. He's also worried about Google having control of this technology. And he essentially proposes to Musk, after a bit of a courtship, well, why don't we actually create an organization that is antithetical to everything Google stands for? It'll be a nonprofit instead of a for-profit. It'll be highly transparent instead of secretive. It'll be very collaborative, working together with all of these different entities to ensure, ultimately, that this technology goes well for humanity. And so they create this organization.
But key to this origin story, which I sort of didn't quite articulate until years later, is there is a very egotistical element to that origin. Right? Musk and Altman were basically saying, we're the good guys, we want to be the ones that create AI in our image, so that it goes well for everyone in humanity. And so it sort of was a natural consequence that, a year and a half into the organization, when they were thinking about how do we actually make sure that we, not Google, dominate, they came to the conclusion that they needed to advance AI faster than Google, faster than anyone else. And the way to guarantee that was to take these existing techniques and throw extraordinary amounts of data and computational resources at them. And that meant that they just had to build larger supercomputers than had ever been built in history. And then suddenly, the bottleneck was cash. A nonprofit structure didn't work anymore. They needed some kind of for-profit to raise that level of cash. And OpenAI started its transition away from nonprofit to what is now the most capitalistic organization today, nearing a $500 billion valuation. And basically, since then, Altman led the organization. Musk left. And there have been these allegations that have continued to follow Altman through his time at OpenAI as well as through the rest of his career, that he has this slipperiness about him, where he tells people what they want to hear, the same way that he told Musk what he wanted to hear, to sort of extract what he needs out of them. But then he might shift course at any moment to just continue doing what he ultimately wants to do. And no one can quite ascertain what his actual endgame is. But that basically then led to this very dramatic moment of reckoning, where a lot of people suddenly lost trust in his leadership as the head of OpenAI, and the board decided to fire him.
But then, because Altman is just so good at fundraising and so good at amassing the kinds of resources that OpenAI foresaw themselves needing to continue perpetuating what they want to do, employees rallied around bringing him back, because they were concerned that without him, they wouldn't get access to those resources. And now he's, you could argue, stronger than ever.
B
Yeah. And him and Elon seem to have beef with each other. I've seen both of them, you know, making snide remarks about the other. Is that because of the breakup with OpenAI?
A
It is. So at the time that Musk decided to part ways, it was amicable. But as OpenAI continued to succeed more and more, Musk became more and more frustrated that he was not part of that success. And he then started to feel that he had really been tricked by Altman, because it was originally Musk's reputation and Musk's money that allowed OpenAI to establish a strong foundation for its later success. And that is why they're kind of at each other's throats these days.
B
I want to talk to you about your reporting that focused on Africa and South America. So early on in the development of ChatGPT, the company hired people there to work on data annotation. Tell me about the work they were doing. And, you know, do you think they were exploited?
A
Absolutely, they were exploited. So, yeah, everything that an AI system can do, it's not because the AI system learned how to do it completely on its own. It is because there were human beings that taught the system how to do that. And that means that in order for the entire AI industry enterprise to function, they need to hire huge teams of workers to teach, you know, ChatGPT how to chat. The fact that it can even chat was a design decision. There were workers that had to show ChatGPT, this is what dialogue looks like, this is how humans converse: one person says one thing, another person responds with related information. And they also have to do content moderation to make sure that the chatbot won't spew crazy, racist, hateful, or other abusive content. Most of that labor initially came from the global south, because the AI industry was looking for the cheapest possible labor in the world. And one of the things that I talk about in my book is that initially they thought, oh, we want to go to English-speaking countries. So they went to places like Kenya, they went to places like the Philippines, that have a history of colonialism and therefore speak English and have an understanding of American culture. And now, in the generative AI era, when we see the need for content moderation on these chatbots, I mean, we are repeating all of the same types of exploitation of content moderators from the social media era. So I went and met workers in Kenya that OpenAI contracted there to build their content moderation filter. And those workers ended up with extreme PTSD. Their personalities completely changed. They went from being extroverted or loving individuals to socially anxious, highly isolated ones.
B
Oh, man. What was it that made them that way? What in the work shifted their personalities like that?
A
They were wading through reams and reams of text that represented the worst content on the Internet. Because OpenAI decided at some point that they were gonna train these chatbots on the entirety of the English-language Internet. So they're scraping all of this stuff willy-nilly, and the datasets have grown so large that they don't even know what's in there. Like, it would take too long for them to manually audit what's in there. So there's just a bunch of incredibly awful material that's in there that is never taken out, which means that when you train an AI model, that AI model is then at risk of regurgitating all of that awful stuff, which would then make their consumer product highly untenable. And so they took thousands and thousands of different examples of just the awfulness out there, both examples that they were finding on the Internet as well as AI-generated examples, where OpenAI was prompting its own models to say, imagine the most awful thing that you could imagine, and then giving those to the workers, so that they had to read and then put into a detailed taxonomy exactly what was bad about that content. Was this violent content? Was this sexual content? Was it sexual abuse content? Was it child sexual abuse content? And reading that kind of stuff for, like, eight hours a day, every day of the week, just completely deteriorated their mental health. And not just their mental health, but these people belong to communities, and when they break down, the people who depend on them also break down.
B
When we come back, Karen examines how the growing energy needs of AI's supercomputers are threatening the planet.
A
So there are people around the world that are actively competing with computers for their life sustaining resources.
B
But before we get to that, my friends, we have some business. You see, we here at Reveal would like to reach as many ears as possible. And one really effective way to do that is through social media, particularly on Instagram. So can we make it Instagram official? Can you do it for the gram for us? Please find us there at Reveal News and be sure to like and share our posts with your friends, add us to your stories, comment on the show, tell people how handsome the host is, because, I mean, you know, he's kind of a stud. All of this helps us grow. Thanks so much for being a part of what we do. We're all in this together, and we couldn't do this show without you. Okay? So don't go anywhere. More with Karen Hao in a moment.
C
This show is supported by Uncommon Goods. Uncommon Goods makes holiday shopping stress-free and joyful, with thousands of one-of-a-kind gifts you can't find anywhere else. So shop early, have fun, and cross some names off your list today. To get 15% off your next gift, go to UncommonGoods.com/reveal. That's UncommonGoods.com/reveal for 15% off. Don't miss out on this limited-time offer. Uncommon Goods: we are all out of the ordinary.
A
Hello listener.
B
My name is Najib Aminy and I am a producer here at Reveal. Reveal is a nonprofit news organization, and we depend on support from our listeners.
A
Listeners like you.
B
Donate today at revealnews.org/donate. It helps fund the stories that we tell and helps me feed my cat. So thank you. This is More to the Story. I'm Al Letson, and I'm back with tech journalist Karen Hao. So I just want to talk a little bit about the cost of AI, because I think it's really easy to forget, when you're on your computer, you've got a screen up and you're asking ChatGPT or any other AI model some questions that maybe five years ago we would have popped into Google and not gotten a great answer, or gotten in the area of what we were looking for. And now you can pop it into ChatGPT and really kind of fine-tune what you're looking for. That being said, that is a very easy thing to do. Then recently I found out the cost of that. Can you kind of walk me through what the cost of that is? Financially, but really I'm thinking about the environmental footprint that's left behind by AI overall.
A
Because of the amount of energy that is used to first develop these systems and then to deploy them at scale. McKinsey recently had a report that projected that, based on the current pace of data center and supercomputer development to support this, we would need to add two to six times the amount of energy consumed by California onto the global grid in five years, by 2030. And most of that will be fossil fuels, because these data centers have to run around the clock. They cannot just pause when they're serving up these models or training these models. And, you know, with xAI and Grok, there have been a lot of phenomenal journalistic investigations on the supercomputer that they've been using in Memphis, Tennessee, that is just being run on 35 unlicensed methane gas turbines, pumping thousands of pounds of toxins into working-class communities in Memphis that have long had a history of environmental injustice, where they are unable to access this fundamental right to clean air. And then that's just the energy and air pollution side. There's also the fresh water side of things, where most of these data centers are cooled by water. You can also cool them with just energy, basically running massive air conditioning units, but it is way more energy. So companies usually opt to cool with water because it's more energy efficient. But the water has to be fresh water, because any other type of water leads to corrosion of the equipment or bacterial growth. And so Bloomberg recently had an investigation showing that two-thirds of these data centers are actually going into areas that already don't have enough fresh water resources for the human population. So there are people around the world that are actively competing with computers for their life-sustaining resources.
And one of the communities that I reported on in my book was facing this crisis in Montevideo, Uruguay, where residents were facing historic levels of drought, to the point where the Montevideo city government was actively mixing salt water into the public drinking water supply just so people could have something come out of their taps. And for people who were too poor to buy bottled water, that is what they were drinking. And women were having higher rates of miscarriages; people with chronic illnesses were having exacerbated symptoms. And it was in the middle of that that Google proposed to build a data center that would be cooled with their fresh water. And, you know, I point to the global south and these communities there, but this is actually also happening in the US as well. There are plenty of communities that are now struggling and trying to figure out how to essentially prevent their fresh water resources from being taken by silicon infrastructure.
B
When you say that, it feels like the scale of the problem is so big, and yet we're not really talking about it when we talk about AI. When we talk about AI, we tend to talk about the benefits and the questions of whether it's going to turn into Skynet and we'll have a Terminator monitoring our streets next week. But really the insidious part of it is that it is sucking up natural resources that human beings need and giving them to a machine.
A
Right? Yeah. A lot of the discourse around AI risks and dangers is ultimately a distraction from the real risks and dangers. We point to these sci-fi-like scenarios in part because Silicon Valley keeps trumpeting that as the scenario. And it is a very convenient one for them to trumpet, because then if people are worried about existential risk and Skynet appearing, they're not going to worry about the climate. But to me, the real existential risk is that we are literally leading to the overconsumption of our planet. We are leading to the enormous exploitation of labor in the production of these technologies, as well as the application of these technologies and the economic fallout that it could have when it starts to automate away a lot of people's jobs. And we are allowing the tech industry to consolidate this extraordinary degree of resources, unlike anything ever before. I mean, we thought that they were already powerful during the social media era. In the AI era, the amount of resources and the amount of influence and domination that they now have is of a fundamentally different degree. We are not actually getting innovation that is in the public interest, and we need to hold those companies and the people at the top accountable in order to actually get to a point where we do get technology in the public interest.
B
I guess, looking at the world today, specifically looking at the United States today and where we are politically and also socially, do you think that's possible, the idea that the government can come in and regulate and, you know, really hold these companies accountable?
A
I think it is possible, but it would not be from the government. I used to say, when the government was more functional, that that was sort of the endgame, that we really wanted to have governments do top-down governance and implement legislation, regulation, and so forth. Now I very much believe that we need to shift to bottom-up governance. When there is a crisis of leadership at the top, the beautiful thing about democracy is that you can still have leadership from the bottom. We have seen, you know, artists and writers suing these companies, saying you can't just take our intellectual property and decide not to credit or compensate us. And they are now using litigation as a way of trying to create new mechanisms of governance around data use, around copyright law. We've seen so many communities around the US and around the world that are pushing back against unfettered data center development to support the development and deployment of AI. And actually, just recently there was a huge victory in Tucson, where the residents successfully blocked what was reportedly going to be a hyperscale data center project from Amazon. Not because they said, we just don't want data centers or supercomputers at all. But specifically they said, we cannot accept a project like this that is going to consume a lot of energy, that potentially will consume a lot of our fresh water resources, and that has absolutely no transparency around who is building this infrastructure, who's using it, what kind of energy and fresh water it might be using, how it might hike up our utility bills, how it might pollute our air quality if these facilities are going to be run on natural gas or other types of fossil fuels. And they were basically demanding: it can't just be top-down, where we have no say. It has to be a democratic process of engaging with the community.
What are the terms under which you would want this data center to be built? We're also seeing students and teachers starting to have discussions within classrooms, within universities, about a more nuanced AI governance policy that's in between everyone use it or no one use it at all. And all of these different types of discussions, protests, and pushback I see as different forms of democratic contestation along the sites of the AI supply chain that are really actively pushing the tech industry to start to respond to the fact that they can't actually just do whatever they want without any resistance.
B
You've mentioned all these concerning aspects of AI. I'm curious how worried you are personally about the growth of this technology and where it's going.
A
I really want to emphasize that the thing I'm most worried about is the unfettered expansion of Silicon Valley's model of AI development, their approach to creating these large-scale, extremely consumptive AI models. But there are so many other AI technologies that actually do not have any of the problems that we talked about, do not have the need for content moderation, do not have the huge environmental and fresh water costs. And those are smaller, task-specific models that are meant to tackle a very specific challenge that actually lends itself to the computational strengths of AI. So one example of a system like this is DeepMind's AlphaFold, which was a system that was able to predict with high accuracy how an amino acid sequence would fold into a protein, which is a very, very, very important first step for then accelerating drug discovery and for understanding different diseases. And it ultimately won the Nobel Prize in Chemistry last year. That system is far removed from ChatGPT. It was not trained on the Internet. It was trained on just amino acid sequence and protein folding data. And it did not need massive supercomputers. It just needed a few computer chips to create that type of AI technology. And so I am extremely pessimistic about what would happen if we allowed Silicon Valley to keep building the technology the way that they want to. I think that ultimately they would consolidate so many resources, so much power, that it would be the greatest threat that we've seen to democracy to date. At the same time, I am extremely optimistic about the other types of AI technologies that are available to us, and that if we are to invest more in those other AI technologies, we really can get to a place where AI is actually serving our needs, serving society, rather than us being served up to the tech industry.
B
Karen Hao is the author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Karen, it was a pleasure having you on. Thank you so much for talking to me.
A
Thank you so much, Al.
B
That was Karen Hao, tech journalist and author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. If you liked it, you should check out our previous More to the Story episode, "Is AI Pushing Us Closer to Nuclear Disaster?" I sat down with physicist Daniel Holz, who helped set the hands on the famed Doomsday Clock, which is closer to midnight than ever. And just a note: we here at Reveal have joined a lawsuit against OpenAI and Microsoft challenging the companies' use of our journalism to train their AI models. The lawsuit claims the companies committed copyright infringement, and it is ongoing. Lastly, just a reminder: we are listener supported. That means listeners like you. You can help us thrive by making a gift today. Just go to revealnews.org/gift. Again, that's revealnews.org/gift, and thank you. This episode was produced by Josh Samburn and Carl McGurk. Allison James West edited the show. Theme music and engineering help by Fernando, my man, "Yo" Arruda and Jay Breezy, Mr. Jim Briggs. I'm Al Letson, and as you know, let's do this again next week. This is More to the Story.
A
From PRX.
Reveal – “The Race to Stop AI’s Threats to Democracy”
Date: October 8, 2025
Host: Al Letson
Guest: Tech journalist Karen Hao
This gripping edition of Reveal investigates the real-world dangers posed by the rapid expansion of artificial intelligence, focusing on the threats to democracy and humanity's resources. Tech journalist Karen Hao returns to offer an unflinching assessment of AI's current trajectory, drawing from her in-depth reporting and her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. The episode spotlights labor exploitation, environmental risks, and the unchecked consolidation of power, cutting through science fiction distractions to focus on the present-day stakes for society.
AI’s Growing Influence:
Karen Hao charts how tech giants, particularly OpenAI, have amassed unprecedented influence, evolving from industry outsiders to powerful behemoths with vast resources.
Unclear Milestones & Definitions:
There’s little scientific consensus on what either human or artificial intelligence truly is, muddying debates about where AI development is heading.
Origins of OpenAI:
OpenAI began as a nonprofit co-founded by Elon Musk and Sam Altman, intending to counter Google’s dominance. Over time, financial pressures forced OpenAI to become increasingly for-profit as they chased ever-larger, costlier AI models.
Leadership Drama at OpenAI:
Sam Altman’s “slipperiness” and adeptness at telling people “what they want to hear” drive the narrative of upheaval, with Musk ultimately leaving and later resenting OpenAI’s success.
Human Labor Behind AI:
OpenAI and others rely on workers, often in low-wage countries such as Kenya and the Philippines, to annotate data, moderate content, and "teach" models how to interact. This mirrors long-standing patterns of exploitation in the tech industry.
Trauma of Content Moderation:
Moderators are exposed daily to hateful, graphic, and disturbing content, including AI-generated examples designed to test and filter the most horrific material.
Skyrocketing Energy Demands:
The scale of energy consumption required for AI is staggering: industry projections show that by 2030, data centers could need additional power equal to two to six times California's current usage, much of it supplied by fossil fuels.
Freshwater Competition:
Sophisticated cooling systems demand vast amounts of fresh water. Data centers are frequently sited in regions already experiencing water scarcity, depriving communities of vital resources.
Community Impacts:
In Montevideo, Uruguay, and places across the U.S., local populations face drought and polluted air while tech companies propose new facilities that would worsen resource crunches.
Distraction from Real Risks:
Silicon Valley fuels science fiction fears (rogue AI, “Skynet”) while quietly deflecting attention from more pressing, tangible dangers like environmental exploitation and economic upheaval.
Consolidation & Demands for Accountability:
The unchecked accumulation of power by tech companies endangers democracy and public interests.
Shifting from Top-Down to Bottom-Up Governance:
Given current political gridlock, grassroots activism, lawsuits, and community action are driving the most effective forms of resistance and regulation.
Examples of Pushback:
Artists and writers suing companies over copyright; communities (like Tucson) blocking resource-hungry data centers; students and teachers negotiating more nuanced AI policies.
On the lack of consensus and transparency in AI:
“We are not actually getting an accurate picture on the capabilities of these systems…and all of the different ways that they break down, because a lot of these companies now censor that kind of research…” – Karen Hao (08:35)
On environmental injustice:
“[Data centers] just being run on 35 unlicensed methane gas turbines that’s pumping thousands of pounds of toxins into working class communities in Memphis, Tennessee who have long had a history of environmental injustice…” – Karen Hao (25:37)
On the real existential threat:
“The real existential risk is that we are literally leading to the overconsumption of our planet…allowing the tech industry to consolidate this extraordinary degree of resources unlike anything ever before.” – Karen Hao (29:48)
On possible paths forward:
“The beautiful thing about democracy is that you can still have leadership from the bottom.” – Karen Hao (31:12)
Throughout, the tone remains urgent yet grounded, eschewing wild speculation for hard investigative realities. Karen Hao’s insights shift the focus from hypothetical dangers to the tangible and immediate: labor exploitation, climate impact, resource theft, and democratic decline. Yet the episode ends on a note of hope, arguing that through bottom-up action and investment in responsible AI, we can secure the technology for public good—if we act now.
For an awakened listener or concerned citizen, this episode is a call to vigilance, skepticism, and action against the unchecked expansion and resource appetites of Silicon Valley’s AI giants.