Sean Illing
There's a lot of uncertainty when it comes to artificial intelligence. Technologists love to talk about all the good these tools can do in the world, all the problems they might solve. And yet many of those same technologists are also warning us about all the ways AI might upend society. It's not really clear which, if either, of these narratives is true, but three things do seem to be true. One, change is coming. Two, it's coming whether we like it or not. Hell, even as I write this document, Google Gemini is asking me how it can help me today. It can't. Today's intro is 100% human-made. And finally, it's abundantly clear that AI will affect all of us. Yet very few of us have any say in how this technology is being developed and used. So who does have a say? And why are they so worried about an AI apocalypse? And how are their beliefs shaping our future? I'm Sean Illing, and this is The Gray Area. My guest today is Vox host and editorial director Julia Longoria. She spent nearly a year digging into the AI industry, trying to understand some of the people who are shaping artificial intelligence and why so many of them believe that AI is a threat to humanity. She turned that story into a four-part podcast series called Good Robot. Most stories about AI focus on how the technology is built and what it can do. Good Robot instead focuses on the beliefs, values, and, most importantly, fears of the people funding, building, and advocating on issues related to AI. What she found is a set of ideologies, some of which critics and advocates of AI adhere to with an almost religious fervor, that are influencing the conversation around AI and even the way the technology is built. Whether you're familiar with these ideologies or not, they're impacting your life, or certainly they will, because they're shaping the development of AI as well as the guardrails, or lack thereof, around it. So I invited Julia onto the show to help me understand these values and the people who hold them. Julia Longoria, welcome to the show.
Julia Longoria
Thank you for having me.
Sean Illing
So it was quite the reporting journey you went on for this series. It's really, really well done. So first of all, congrats on that. And we're actually going to play some clips from it today.
Julia Longoria
I'm glad you enjoyed it. I'm in that nerve-wracking first few weeks after it comes out, so it makes me feel good to hear that.
Sean Illing
So going into this thing, you wanted to understand why so many people are worried about an AI apocalypse, and whether you should be afraid too. We will get to the answers, I promise. But why were these the motivating questions for you?
Julia Longoria
You know, I come to artificial intelligence as a normie, as people in the know have called me. I didn't know much about it, but I had the sense as an outsider that the stakes were really high. And it seemed like people talked about it in a language that I didn't understand, describing stakes that felt really epic but kind of impenetrable to someone who didn't speak their language. So I guess I just wanted to start with the biggest, most epic, almost ignorant question: some people are afraid that AI could just wipe us all out. Where does that fear come from? And have that be a starting point to break the ice on this area that honestly has felt kind of intangible and hard for me to even wrap my head around.
Sean Illing
Yeah, I mean, I appreciate your normie status, because that's the position almost all of us are in. You know, we're on the outside looking in, trying to understand what the hell is happening here. What did being a normie mean to you as you waded into this world? I mean, did you find that that outside perspective was actually useful in your reporting?
Julia Longoria
Definitely, yeah. I think that's kind of how I try to come to any topic. I've also reported on the Supreme Court, and that's another world that speaks its own dense, impenetrable language. And like the Supreme Court, artificial intelligence affects all of our lives deeply. I feel like because it is such a sophisticated technology, and the people who work in it are so deep in it, it's hard for normies to ask the more ignorant questions. So having the microphone, and being armed with my Vox byline, I was able to ask the dumb questions. I always said, you know, I know the answer to some of these questions, but I'm asking on behalf of the listener. And sometimes I knew the answer, sometimes I didn't.
Sean Illing
I don't know about you, but for me, and I'm sure a lot of people listening, it is maddening to be continually told that, you know what, we might be on the wrong end of an extinction event here, caused by this tiny minority of non-normies building this stuff. And that it's possible for so few to make decisions that might unravel life for the rest of us is just, well, maddening.
Julia Longoria
It is maddening. It is maddening. And to even hear it talked about like this, it affects all of us, so shouldn't it be the thing that we're all talking about? But it feels like it's reserved for a certain group of people who get to make the decisions and get to set the terms of the conversation.
Sean Illing
Let's talk about the ideologies and all the camps that make up this weird, insular world of AI. I want to start with what you call the AI safety camp. What is their deal? What should we know about them?
Julia Longoria
So AI safety is a term that's evolved over the years, but it's basically people who fear that AI could be an existential risk to humanity, whether that's AI going rogue and doing things we didn't want it to do. It's about the biggest worry, I guess: all of us being wiped out. We never talked about a cell phone apocalypse or an Internet apocalypse. Maybe if you count Y2K, but even that wasn't going to wipe out humanity. But the threat of an AI apocalypse, it feels like it's everywhere.
Elon Musk
Mark my words, AI is far more dangerous than nukes.
Julia Longoria
From billionaire Elon Musk to the United Nations...
United Nations Representative
Today, all 193 members of the United Nations General Assembly have spoken in one voice.
Julia Longoria
AI is existential. But then it feels like the scientists in the know can't even agree on what exactly we should be worried about when it comes to these existential risks.
Sean Illing
And where does the term AI safety come from?
Julia Longoria
We trace the origin to a man named Eliezer Yudkowsky, who... you know, I think not all AI safety people today agree with Eliezer Yudkowsky. But basically, he wrote about this fear as a teenager. He became popular, sort of found his following, when he wrote a Harry Potter fan fiction. As one does. It's actually one of the most popular Harry Potter fan fictions out there. It's called Harry Potter and the Methods of Rationality. And he wrote it almost as a way...
Sean Illing
Love it.
Julia Longoria
He wrote it almost as a way to get people to think differently about AI. He had thought deeply about the possibility of building an artificial intelligence that was smarter than human beings. At first he imagined it as a good robot, which is where the name of the series comes from, a robot that could save us. But eventually he realized, or came to fear, that it could go very poorly if we built something smarter than us, that it could result in it killing us. So that's the origin. But his ideas have caught on. Sam Altman, the CEO of OpenAI, actually talks about how Eliezer was an early inspiration for him in making the company. They do not agree on a lot, because Eliezer thinks OpenAI, the ChatGPT company, is on track to cause an apocalypse. But anyway, that's the gist: AI safety is, AI could kill us all, how do we prevent that?
Sean Illing
So it's focused on the sort of long-range existential risks.
Julia Longoria
Correct. And some people don't think it's long range. Some of these people think that that could happen very soon. But yeah.
Sean Illing
So this Yudkowsky guy makes these two general claims, right? One is that we will build an AI that's smarter than us, and it will change the world. And the second claim is that getting that right is extraordinarily difficult, if not impossible. Why does he think it's so difficult to get this right? Why is he so convinced that we won't?
Julia Longoria
He thinks about this in terms of thought experiments. So, just taking this premise that we could build something that outpaces us at most tasks, he tries to explain the different ways this could happen with these quirky parables. And we start with his most famous one, which is the Paperclip Maximizer thought experiment.
Eliezer Yudkowsky
Suppose in the future there's an artificial intelligence. We've created an AI so vastly powerful, so unfathomably intelligent, that we might call it superintelligent. Let's give this superintelligent AI a simple directive: produce paperclips. Because the AI is superintelligent, it quickly learns how to make paperclips out of anything in the world. It can anticipate and foil any attempt to stop it, and it will do so, because its one directive is to make more paperclips. Should we attempt to turn the AI off, it will fight back, because it can't make more paperclips if it is turned off. And it will beat us, because it is superintelligent and we are not. The final result: the entire galaxy, including you, me, and everyone we know, has either been destroyed or been transformed into paperclips.
Julia Longoria
The gist is, we build something so smart that we fail to understand it, how it works. And we could try to give it good goals to help improve our lives, but maybe that goal has an unintended consequence that could lead to something catastrophic that we couldn't have even imagined.
Sean Illing
Right. And it's such a good example, because a paperclip is, like, the most innocuous, trivial thing ever. Right? Like, what could possibly go wrong? Is Yudkowsky on the extremes, even within the safety camp? I mean, I went to his website, and I just want to read this quote. He writes: "It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity." I mean, come on, dude. It's so dramatic. He seems convinced that the game is already up here; we just don't know how much sand is left in the hourglass. I mean, is he on the margins, even within this camp, or is this a fairly representative view?
Julia Longoria
Definitely. Yeah.
Sean Illing
Okay.
Julia Longoria
No, no, he's on the margins. I would say he's an extreme case. He had a big influence on the industry early on, so in that sense he was like an early influencer of all these people who ended up going into AI. A lot of people I talked to went into AI because of his writings.
Sean Illing
I can't square that circle, right? If they were influenced by him, and his whole thing is "don't do this, we're going to die," why are they doing it?
Julia Longoria
To me, it felt similar to the world of religion, almost like a schism: believers in the superintelligence, split between people who thought we shouldn't try to build it and people who thought we should.
Sean Illing
Yeah, I mean, I guess with any kind of grand thinking about the fate of humanity, you end up with this. It starts to get very religious-y very quickly, even if it's cloaked in the language of science and secularism, as this is. On the religious part of it, did the parallels there jump out at you pretty immediately? That the people, at the level of ideology, are treating this, thinking about this, as though it is a religious problem or a religious worldview?
Julia Longoria
It really did. It jumped out at me really early, because going into reporting on a technology, you expect to be bogged down by technological language, terminology that's in the weeds of computer science or whatever it is. But the words that were hard to understand were words like "superintelligence" and "AGI." And then hearing the CEO of OpenAI, Sam Altman, talk about a "magic intelligence in the sky." The question I had was, what are these guys talking about? It was almost like they were talking about a God, is what it felt like to me.
Sean Illing
Yeah. All right, I have some thoughts on the religious thing, but let me table that for a second. I think we'll end up circling back to that. I want to finish our little survey of the tribes, the gangs here. The other camp you talk about are the AI ethicists. What's their deal? What are they concerned about? How are they different from the safety folks who are focused on these existential problems or risks?
Julia Longoria
Yeah, the AI ethicists that I spoke to came to AI pretty early on too, just a couple of years, maybe a few years, after Eliezer was writing about it. They were working on algorithms; they were working on AI as it existed in the world. So that was a key difference: they weren't thinking about things in these hypotheticals. Where AI safety folks tend to worry about the ways in which AI could be an existential risk in the future, that it could wipe us out, AI ethicists tended to worry about harms that AI was doing right now, in the present. Whether that was governments using AI to surveil people, or bias in the data that went into building AI systems: racial bias, gender bias, ways that algorithmic systems were making racist decisions, sexist decisions, decisions that were harmful to disabled people.
Sean Illing
Now, tell me about Margaret Mitchell. She's a researcher and a colorful character in the series, and she's an ethicist who coined the "everything is awesome" problem. Tell me about that. It's an interesting example of the sorts of things they worry about.
Julia Longoria
Yeah. So Margaret Mitchell was working on AI systems in the early days, long before we had ChatGPT. She was working on a vision-to-language system at Microsoft. It was taking a series of images of a scene and trying to describe it in words. So she was giving the system things like images of weddings, images of different events. And she gave the system a series of images of what's called the Hempstead blast.
Margaret Mitchell
It was at a factory, and you could see from the sequence of images that the person taking the photo had, like, a third-story view, sort of overlooking the explosion. So it was a series of pictures showing that there was this terrible explosion happening, and whoever was taking the photo was very close to the scene. So I put these images through my system, and the system says, wow, this is a great view. This is awesome.
Julia Longoria
The system had learned from the images it was trained on that if you're taking an image from above, looking down, that's a great view. And that if there were all these different colors, like in a sunset, and the explosion had made all these colors, that was beautiful. So she saw really early on, before this AI moment that we're living in, that the data these systems are trained on is crucial. Her worry with systems like ChatGPT is that they're trained on basically the entire Internet, and so the technologists making the system lose track of what kinds of biases could be in there. This is sort of her origin story of worrying about these things. She went on to work for Google's AI ethics team, and was later fired after trying to get a paper published there about these worries.
Sean Illing
So why is the "everything is awesome" problem a problem? Right? I mean, I guess someone may hear that and go, well, okay, that's kind of goofy and quirky, that an AI would interpret a horrible image in that way, but what actual harm is that going to cause in the world?
Julia Longoria
Right. I mean, the way she puts it is, you know, if you were training a system to launch missiles and you gave it some of its own autonomy to make decisions, you could have a system that's launching missiles in pursuit of the aesthetic of beauty. So in a sense it's a bit of a thought experiment on its own, right? She's not worried about this scenario in particular, but about the implications of biased data in future systems.
Sean Illing
Yeah, it's the same thing with the paperclip example, right? It's the bizarre and unintended consequences of these things. What seems goofy and quirky at first may, a few steps down the road, be catastrophic. And if you can't predict that, maybe you should be a little careful about building it.
Julia Longoria
Right, right, exactly.
Sean Illing
So do the AI ethics people in general think the concerns about an extinction event or existential threats are valid, or do they think they're mostly just science fiction and a complete distraction from actual present-day harms?
Julia Longoria
I should say at the outset that I found the AI ethics and AI safety camps to be less camps and more of a spectrum. So I don't want to say that every single AI ethics person I spoke to was like, these existential risks are nonsense. But by and large, people I spoke to in the ethics camp said that these existential risks are a distraction. It's this epic fear that's attention-grabbing and goes viral, and it takes away attention from the harms that AI is doing right now. And crucially, in their view, it takes away resources from fighting those kinds of harms.
Sean Illing
In what way?
Julia Longoria
I think when it comes to funding, if you're a billionaire who wants to give money to companies or charities or causes, and you want to leave a legacy in the world, I mean, do you want to make sure that the data in AI systems is unbiased, or do you want to make sure that you save humanity from apocalypse?
Sean Illing
Yeah. I should ask about the effective altruists. They're another camp, another school of thought, another tradition of thought, whatever you want to call it, that you talk about in the series. How do they fit into this story? How are they situated?
Julia Longoria
Yeah, so effective altruism is a movement that's had an effect on the AI industry. It's also had an effect on Vox: Future Perfect is the Vox section that we collaborated with to make Good Robot, and it was actually inspired by effective altruism. The whole point of the effective altruism movement is to try to do the most good in the world. And EA, as it's sometimes called, comes up with a sort of formula for how to choose which causes you should focus on and put your efforts toward. Early rationalists like Eliezer Yudkowsky encountered early effective altruists and tried to convince them that the highest-stakes issue of our time, the cause that they should focus on, is AI. Effective altruism is traditionally known for giving philanthropic dollars to things like malaria nets, but they also gave philanthropic dollars to saving us from an AI apocalypse. And a big part of how the AI safety industry was financed is that effective altruism rallied around it as a cause.
Sean Illing
These are the people who think we really have an obligation to build a good robot in order to protect future humans. And again, I don't know what they mean by good. Good and bad, those are value judgments. This is morality, not science. There's no utility function for humanity. I don't know who's defining the goodness of the good robot, but I'll just say that I don't think it's as simple as some of these technologists seem to think it is. And maybe I'm just being the annoying philosophy guy here, but whatever, here I am.
Julia Longoria
Yeah, no, I think everyone in the AI world that I talked to was really striving toward the good, whatever that looked like. AI ethics folks saw the good robot as a specific set of values. And folks in effective altruism were also grappling with, how do I do the most good? Trying to use math to put a utility function on it. And the truth is a lot more messy than a math problem of how to do the most good. You can't really know. And yeah, I think sitting in the messiness is hard for a lot of us.
Sean Illing
And I don't know how you do that when you're fully aware that you're building or attempting to build something that you don't fully understand.
Julia Longoria
That's exactly right. In the series, we tell the story of effective altruism through the parable of the drowning child, a child who's drowning in a shallow pond.
Sean Illing
Okay.
Narrator of Effective Altruism Parable
On your way to work, you pass a small pond. Children sometimes play in the pond, which is only about knee-deep. The weather's cool, though, and it's early, so you are surprised to see a child splashing about in the pond. As you get closer, you see that it is a very young child, just a toddler, who is flailing about, unable to stay upright or walk out of the pond. You look for the parents or babysitter, but there's no one else around. The child is unable to keep her head above the water for more than a few seconds at a time. If you don't wade in and pull her out, she seems likely to drown. Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago and get your suit wet and muddy. By the time you hand the child over to someone responsible for her and change your clothes, you'll be late for work.
Julia Longoria
What should you do? Are you gonna save her, even though you ruin your suit? Everyone answers yes. And this sort of utilitarian philosophy behind effective altruism asks, well, what if that child were far away from you? Would you still save her if she was oceans away from you? And that's where you get to malaria nets: you're gonna donate money to save children across an ocean. But then there's this idea of, well, what if the child hasn't been born yet, and it's a future child that would die from an AI apocalypse? Abstracting things so far in advance, you could really justify anything.
Sean Illing
And that's the problem, right, of focusing on the long term in that way. The willingness to maybe overlook or sacrifice present harms in service to some unknown future, that's a dangerous thing. There are dangers in being willfully blind to present harms because you think there's some more important or some more significant harm down the road, and you're willing to sacrifice that harm now because you think it's in the end justifiable.
Julia Longoria
Yeah. At what point are you starting to play God? Right.
Sean Illing
So I come from the world of political philosophy, and in that maybe equally weird world, whenever you have competing ideologies, what you find at the root of those disagreements are very different views about human nature, really. And all the differences really spring from that divide. Is there something similar at work in these AI camps? Do you find that these people that you talk to have different beliefs about how good or bad people are, different beliefs about what motivates us, different beliefs about our ability to cooperate and solve problems? Is there a core dispute at that basic level?
Julia Longoria
There's a pretty striking demographic difference between AI safety folks and AI ethics folks. I went to two conferences, one of each, and immediately you could see that AI safety folks skewed white and male, and AI ethics folks skewed more toward people of color and women. And so people talked about blind spots that each camp had. If you're, you know, a white male moving around the world, you're not fearing the sort of racist, sexist, ableist consequences of AI systems today as much, because they're just not in your view.
Sean Illing
Did all the people you spoke to, regardless of the camps they were in, more or less agree that what we're doing here is attempting to build God, or something godlike?
Julia Longoria
No. I think no. I would say a lot of the AI safety people I spoke to bought into this idea of a superintelligence, a godlike intelligence. I should say, I don't think that's every AI safety person by any means. But AI ethics people, for the most part, just didn't buy it, completely. Everyone I spoke to there talked about it as just AI hype, a way to amp up the capability of this technology that's really in its infancy and is not godlike at this point.
Sean Illing
I saw that when Sam Altman, the CEO of OpenAI, was on Joe Rogan's podcast, he was asked whether they're attempting to build God. And he said, I have the quote here: "I guess it comes down to a definitional disagreement about what you mean by it becomes a God. I think whatever we create will be subject to the laws of physics in this universe." Okay, so God or no God, right?
Julia Longoria
Yeah. I mean, he's called it, though, I don't know if it's tongue-in-cheek, it's all very hard to read, but he's called it "the magic intelligence in the sky." And Anthropic's CEO has called AI systems "machines of loving grace," which sounds like religious language, you know?
Sean Illing
Okay, come on now.
Julia Longoria
Yeah, exactly.
Sean Illing
What in the world is that supposed to mean? What is a machine of loving grace? Does he know what that means?
Julia Longoria
I think it's, you know, a very optimistic view of what machines can do for us. The idea that machines can help us cure cancer and so on. I think that's ultimately probably what he means. But there's an element of it that I just completely roll my eyes at, raise my eyebrows at, where it's like, I don't think we should be so reverent of a technology that's flawed and needs to be regulated. And I think that reverence is dangerous.
Sean Illing
Why do you think it matters that people like Altman or the CEO of Anthropic have this reverence for machines? Right? Who cares if they think they're building God? Does it matter, really, in terms of what it will be and how it will be deployed?
Julia Longoria
Well, I think that if you have these sorts of delusions of grandeur about what you're making, and if you talk about it as a machine of loving grace, I don't know, it seems like you don't have the level of skepticism that I want you to have. And we're not regulating these companies at this point; we're relying on them to regulate themselves. So yeah, it's a little worrying when you talk about building something so powerful and so intelligent and you're not being checked.
Sean Illing
Yeah, I don't expect my toaster to tell me it loves me in the morning. I just want my bagels crispy. But I understand that my toaster is a technology. It's a tool with a function. To talk about machines of loving grace suggests to me that these people do not think they're just building tools. They think they're building creatures. They think they're building God.
Julia Longoria
Yeah. And you know, Margaret Mitchell, as you'll hear in the series, talks about how she thinks we shouldn't be building a God; we should be building machines, AI systems that fulfill specific purposes. Specifically, she talks about a smart toaster that makes really good toast. And I don't think she means a toaster in particular, just systems that are designed to help humans achieve a certain goal, something specific out in the world. Whether that's helping us figure out how proteins fold, or helping us figure out how animals communicate, which are some of the things we're using AI to do in a narrow way. She talks about this as artificial narrow intelligence, as distinct from artificial general intelligence, which is sort of the superintelligent God AI that's, quote unquote, smarter than us at most tasks.
Sean Illing
I mean, this is an old idea in the history of philosophy, that God is fundamentally just a projection of human aspirations. Right? That our image of God is really a mirror, that we've created a mirror that reflects our idea of a perfect being, a being in our image. And this is something you talk about in the series: that this is what we're doing with AI. We're building robots in our image. Which raises the question, well, in whose image, exactly? If AI is a mirror, it's not a mirror of all of us, is it? It's a mirror of the people building it. And the people building it are, I would say, not representative of the entire human race.
Julia Longoria
Yeah. You'll hear in the series that I latched onto this idea that AI is a mirror of us, and that's so interesting, because the concept of God is also like a mirror. But if you think about it, large language models are made from basically the Internet, which is all of our thoughts and our musings as humans online. It's a certain lens on human behavior and speech. But AI is also the decisions that its creators make: what data to use, how to train the system, how to fine-tune it. And when I used ChatGPT, it was very complimentary of me. And I found it to be this almost, like, smooth...
Sean Illing
It charmed you. You got charmed?
Julia Longoria
Yeah, I got charmed. It gave me the compliments I wanted to hear. And I think it's this smooth, frictionless version of humanity, where it compliments us and makes us feel good. And also, you know, you don't have to write that letter of recommendation for your person, you don't have to write that email. It's just smooth and frictionless. And I worry that in making this smooth mirror of humanity, where do we lose our humanity if we keep ceding more and more to AI systems? I want it to be a tool to help us achieve our goals rather than this thing that replaces us.
Sean Illing
Yeah, I won't lie. I just recently got my ChatGPT account, and I did ask it what it thought of Sean Illing, host of The Gray Area podcast. And it was very complimentary. It was extremely, extremely generous. And I was like, oh shit, yeah, this thing gets it. Oh, this is okay. All right.
Julia Longoria
Maybe it is a God now.
Sean Illing
I trust him. Clearly it's an all-knowing, omnipotent one.
Julia Longoria
That's what I came away with from the series and the reporting. I think before, I used to be very afraid of AI, of using it and not knowing. And now I feel armed to be skeptical in the right ways and to try to use it for good. So that's what I hope people get out of the series, anyway.
Sean Illing
Are you worried about us losing our humanity, or just becoming so different that we don't recognize ourselves anymore?
Julia Longoria
I am worried that it'll just make us more isolated, and that it's so good at giving us what we want to hear that we won't search for the friction in life that makes life worth living.
Sean Illing
Yeah, yeah. So look, I mean, the different camps may disagree about a lot, but they seem to converge on the basic notion that this technology is transformative. It's going to transform our lives; it's probably going to transform the economy. And the way this stuff gets developed and deployed, and the incentives driving it, are really going to matter. Is it your sense that checks and balances are being put in place to guide this transformation so that it benefits more people than it hurts, or at least as much as possible? I mean, was this something you explored in your reporting?
Julia Longoria
Yeah. I mean, I think a lot of the people I spoke to really wanted regulation, but ultimately there isn't really regulation in the US, on the AI safety front or the AI ethics front. The technology is dramatically outpacing regulators' ability to regulate it. So that's troubling. It's not great.
Sean Illing
I would imagine the ethicists would be a little more focused on imposing regulations now, but it doesn't seem like they're making a lot of headway on that front. I'm not sure how regulatable it is.
Julia Longoria
Yeah, I think that was one of my frustrations, just listening to all this infighting. I felt like these two groups have a lot in common, and they should be pursuing a common goal of getting some good regulation, of having some strong safeguards in place for both AI safety and AI ethics concerns. We tell the story of how some of them did come together to write an open letter calling for both kinds of regulation, and it's encouraging to see people working together. But ultimately, I don't think they've made strides at this point in getting anything significant passed.
Sean Illing
You know, it was interesting. You're reporting on this in the series, and our employer, Vox, has a deal with OpenAI. And in the course of your reporting, you were trying to find out what you could about that deal. How did that go, if you're comfortable talking about it?
Julia Longoria
Yeah, yeah. So the parent company of Vox, Vox Media, I know the language I need to use, I have it down. Shortly after we decided to tackle AI in the series, we learned that Vox Media was entering a partnership with OpenAI, the ChatGPT company. We learned it meant that OpenAI could train its models on our journalism. And I guess personally, it just felt like I wanted to know if they were training on my voice, because that to me feels really personal. There's so much emotional information in a voice. I feel very naked going out on air and having people listen to my voice, and I spend so much time carefully crafting what I say. So the idea that they would train on my voice, and do what with it, I don't know. One of our editors pointed out that that's part of the story: AI systems and robots are entering our lives more and more. And for me personally, it's like, yeah, literally our work is being used to train these systems. What does that mean for us, for our work? And I reached out to Vox Media and to OpenAI for an interview, and they both declined, which made me feel even more helpless. And, I mean, I don't have many more answers than that.
Sean Illing
Yeah, well, I mean, you even interview a guy on the show, a former OpenAI employee, and you're raising these concerns. And he's sort of dismissive of it, right? Like, you know, whatever data...
Julia Longoria
He just laughed at us.
Sean Illing
I would be quite surprised if the data provided by Vox is itself very valuable to OpenAI. I would imagine it's a tiny, tiny drop in that bucket.
Julia Longoria
If all of ChatGPT's training data were to fit inside the entire Atlantic Ocean, then all of Vox's journalism would be like a few hundred drops in that ocean.
Sean Illing
Plus, to Daniel, rightly, you're like, well, fuck, it matters to me. It's my work, it's my voice, and it may eventually be my job. Right? And the point here is that this is a thing now, the fact that our job and many other jobs are already tangled up with AI in this way. It's just a reminder that this isn't the future, right? It's here now, and it's only going to get more strange and complicated.
Julia Longoria
Totally. Yeah. And I don't know, I guess I understand the impulse from Vox Media to be like, okay, we want to be compensated for licensing the work of our journalists, who work so hard and whom we pay. But it just feels weird to not have a say when it's the work you're doing.
Sean Illing
So have your views on AI in general changed all that much after doing this series? I mean, you say at the end that when you look at AI, what you see is a funhouse mirror. What does that mean?
Julia Longoria
AI, like a lot of our technologies, and I guess like our visions of God, as you talk about, is a reflection of ourselves. And so I think it was a comforting realization to me that the story of AI is not some technological story I can't understand. The story of AI is a story about humans who are trying really hard to make a technology good, and failing to varying degrees. But yeah, I think fundamentally the course of reporting it brought the technology down to earth for me and made me feel a little more empowered to ask questions, to be skeptical, and to use it in my life with the right amount of skepticism.
Sean Illing
What do you hope people get out of this series? Normies who enter into it without a sort of solidified position on it, what do you hope they take away from it?
Julia Longoria
I hope that people who didn't feel like they had any place in the conversation around AI will feel invited to the table and will be more informed and skeptical and curious and excited about the technology. And I hope that it brings it down to earth a little bit.
Sean Illing
Julia Longoria, this has been a lot of fun. Thank you so much for coming on the show. And the series, once again, is called Good Robot. It is fantastic. You should go listen to it immediately. Thank you.
Julia Longoria
Thank you.
Sean Illing
All right. I hope you enjoyed this episode. If you want to listen to Julia's Good Robot series, and of course you do, you can find all four episodes in Vox's Unexplainable podcast feed. We'll drop a link to the first episode in the show notes. And as always, we want to know what you think, so drop us a line at thegrayarea@vox.com, or you can leave us a message on our new voicemail line at 1-800-214-5749. And once you're done with that, please go ahead and rate, review, and subscribe to the pod. That stuff really helps. This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Erica Wong, and fact-checked by Melissa Hirsch. Alex Overington wrote our theme music. New episodes of The Gray Area drop on Mondays. Listen and subscribe. The show is part of Vox. Support Vox's journalism by joining our membership program today. Members get access to this show without any ads. Go to vox.com/members to sign up, and if you decide to sign up because of this show, let us know.
Podcast Summary: The Gray Area with Sean Illing
Episode: “The beliefs AI is built on” (April 7, 2025)
In this episode of The Gray Area, host Sean Illing explores the foundational beliefs, ideologies, and ethical frameworks shaping the development of artificial intelligence. The guest is Vox’s Julia Longoria, who spent a year reporting on these issues for her four-part podcast series Good Robot. Together, they examine the worldviews of the individuals driving AI’s evolution—focusing not on the technology itself, but on the personal, ethical, and even quasi-religious convictions of those creating the future.
On AI’s religious undertones:
“It starts to get very religious-y very quickly, even if it's cloaked in the language of science and secularism.”
— Sean Illing ([14:56])
Skepticism and empowerment:
“Now I feel like [I’m] armed to be skeptical in the right ways and to try to use it for good.”
— Julia Longoria ([37:56])
On regulation and industry speed:
“The technology is dramatically outpacing regulators’ ability to regulate it. So that's troubling. It's not great.”
— Julia Longoria ([39:22])
On the present versus the future:
“There are dangers in being willfully blind to present harms because you think there's some more important or some more significant harm down the road, and you're willing to sacrifice that harm now because you think it's in the end justifiable.”
— Sean Illing ([27:34])
Julia and Sean conclude that understanding AI today is less about the technology itself and more about understanding the human beliefs, biases, and ambitions that create it. Good Robot and this conversation invite everyday people to see themselves as participants in shaping AI’s future—and to approach the technology with both skepticism and curiosity.
Julia’s parting hope:
“I hope that people who didn't feel like they had any place in the conversation around AI will feel invited to the table and will be more informed and skeptical and curious and excited about the technology.” ([47:00])
Further Listening: Good Robot series on Vox’s Unexplainable feed.