A
Welcome to Lunch with Jamie. Today's guests are Jonathan Wang, the Oscar-winning producer of Everything Everywhere All at Once and a producer of The AI Doc, or How I Became an Apocalyptomist, directed by Daniel Roher and Charlie Tyrrell, and Tristan Harris, the co-founder of the Center for Humane Technology and one of the featured voices in the documentary. To say this film is critical viewing is an understatement. I truly believe this can go down as one of the most important films that we've had in decades, if not ever. This is the film that's going to make people understand how critical a juncture we are at in the world of AI and AGI, and how it's now or never that people need to stand up and push back on what's being done
B
in the world of AI.
A
It's not going to come from our tech CEOs and it's not going to come from our elected officials unless we demand it.
B
This is a call to action for
A
everybody in the world. You need to watch the film. You need to share the film. You need to ask your elected officials what they're doing. You need to ask your schools, your work, your friends: what are they doing? When you get around that dinner table with your family, or the dinner table with friends, you need to talk about AI. Everybody needs to band together, in the United States and around the world, to start figuring out what people are doing and how we're going to harness this power for good, so that we don't allow the evil, darker side to come true. We still have time to make an impact, but it's only going to come with collective action. Now here's my conversation with Jonathan Wang and Tristan Harris. Welcome to another Lunch with Jamie. Thanks for joining. Please have your cameras on if you're able to. By now, I'm sure you've all heard about The AI Doc, or How I Became an Apocalyptomist. It's taken me a long time to learn how to say that word. Directed by Daniel Roher and Charlie Tyrrell. Since seeing it at Sundance, I have been obsessed. I've been talking about it to everybody who will listen to me. And I have nothing to do with the film; I'm not involved with it. I've become friends with Jonathan, one of the producers, and I am working with Daniel Roher, one of the directors. So I guess I have a little bias, but I was blown away. I had no idea what to expect. So I'm here with Tristan Harris, who's the co-founder of the Center for Humane Technology and one of the main featured people in the doc, and Jonathan Wang, who's the Oscar-winning producer of Everything Everywhere All at Once and also one of the producers of the doc. Guys, thank you for joining me and us. Really appreciate it. Great to be here with you. So I have so many questions, and as I said, I just love this film so much. One of the things that's so brilliant about the film is the storytelling tools and the way that Daniel and Charlie decided to make the film.
So the challenge of this conversation is that, although I know everybody has seen the film because I've been screaming about it, just in case there's somebody who hasn't seen it yet, I don't want to spoil it. So we're going to try to talk about it, for those in the know, while leaving out the main storytelling device that Charlie and ultimately Daniel employ. But Jonathan, I do want to start with you. As a fellow producer, I know that since Tristan is kind of like the talent in the film, sometimes these conversations skew towards the Channing Tatums, if I'm being interviewed, or the Derek Cianfrances. So I want to throw the first question to you, because as fellow producers, we are many times the unsung part of the equation. I want to get a little bit of a sense of how the film came to be, the genesis of the film.
C
Yeah. Well, thank you for championing the movie and being such a fan. Your newsletter was awesome to read, and thank you for championing producers, because that's something we don't always get championed for. The genesis of the story was that right after Everything Everywhere All at Once went through its awards run, we had this kind of open runway to meet with anyone we wanted. And rather than calling up fill-in-the-blank celebrity or fill-in-the-blank director or writer or executive, Dan Kwan and I had both been listening, pretty much on loop, to Your Undivided Attention throughout the pandemic. We'd gone through their little satellite of thinkers and become inundated with this kind of thinking around the meta-crisis: what are the things above that are driving all the other crises? And I was like, that's such an articulate and wonderful way of stating the problem that we want to hear from these guys for our future movies, because we write in the sci-fi, kind of action space. We were like, how can we take these bigger ideas and bring them to the populace in a fun package? And so we met up with Tristan and Aza, and we could just see the weight of what they'd been looking at in their eyes and on their shoulders. And they were like, what do you know about AI? We're like, please God, don't tell us that we need to help you save the world with AI. And they're like, you have to help us save the world with this. And we were like, oh, no. So through our friendship, slowly but surely, and talking with Tristan and a lot of other thinkers, Dan and I decided we have to make a documentary. It also coincided with the writers' strike, so it just worked out: we can't make anything else, but we can make a documentary. Let's do this in eight to nine months, beat everyone to market, and go fast.
And then two and a half years later, we finally figured out how to do this tightrope walk and put out the movie that you see. So even though Tristan is not a producer on the movie, he was there from the very beginning and integral to the whole beginning of this project.
A
Thanks, that's really helpful. And, you know, I've heard you guys talk about this sort of relationship between this film and the film from 1983, The...
B
The Day After.
A
Yeah, and I've heard Tristan talk about it a lot. Was that part of the original genesis of this film? I'd love to hear from you, and then from Tristan, on how that relates. And for people who don't know that film, maybe explain it to them.
C
Yeah, I'll kick it to Tristan on the impact of it, because what he articulated to us, and it's in the movie, actually, was so much about how he went to D.C., and D.C. was like, we can't regulate tech, so go to tech. And then he went to tech, and they were like, we can't regulate ourselves, go to D.C. And neither of them was doing anything. But what both of them said was, we need the public will. We need this collective consciousness shift so that we can act both in the private sector and on the Hill. And Tristan pointed out that there had been a precedent for this, when The Day After came out. I know you talk more cogently about this, Tristan, so I'll kick it to you. But what was important for us in emulating that moment was to become really clear, in the same way that it's very easy to think: atom bomb goes boom, really bad. That's really hard with AI, because it's not like there's this catastrophic event that we can all see, a massive mushroom cloud, right? And so we were like, well, how do we actually get everyone to see the bad results of AI and not fall into any one tributary, where it becomes polarized, where it becomes partisan, and really make everyone see there's a future that none of us want? Let's all wake up to that future that's coming, and let's avoid it.
B
Yeah. Beautifully said, Jonathan. And I'm grateful to be on here, and thank you for telling the story of how all this got started. We were huge fans of Everything Everywhere All at Once, so we couldn't believe it when they reached out to us just to get a chance to talk to them, and we started talking about the broader issues. It wasn't until, I think, three months later that we actually had a conversation specifically about AI, when we got calls from kind of the Oppenheimers inside the AI labs who were saying, we need your help. We need there to be public pressure, because the arms race between these companies is out of control. Just imagine for a second you're getting that phone call. I mean, that felt like a movie, getting a call from someone inside an AI lab who's panicking a little bit. Imagine if you got a call from someone inside something called the Manhattan Project, and they were saying, there's this problem. And you're like, wait a second. I want to believe there are some adults in the world, that you all have this locked down, that this is going to be okay, because you all have been working on AI safety for a long time; a lot of people have been working on this. But as Jonathan said, everyone was pointing the finger for someone else to act first. And I do think that's a lot of what the AI problem represents to us: if you feel into your own nervous system and the agency that you have in your mammalian body and meat suit, even if you are President Xi Jinping of China, you're just one person. You can't deal with the whole problem. If you're one business leader, it feels
A
too big for you.
B
If you're one AI safety engineer, or if you're one labor union leader, everyone feels like what would be needed to address it is bigger than their own individual experience. I just want to stop there for a second, because I think it's actually a subtle and important point: there's this perceptual mismatch between the collective agency that we need, of everybody acting together, and the experience of "this is even bigger than me." Just a few nights ago, we were in New York doing a screening, and the audience included some people who are coaches to some of the CEOs at these companies. And even they were saying, look, we're talking to the CEOs at these AI companies, and they feel powerless. So I want people to just take that in for a second. Now, to switch to the history of this film, The Day After: when you have a problem that's so much bigger than just one agent, you really need a change in global mindset and a change in culture to create the conditions for something else to happen. Specifically, I think what needed to happen with nukes was for the fear of all of us losing to become bigger than the fear of me losing to you. And what the film The Day After did is it visceralized that: the fear of all of us losing suddenly became greater than the fear of me losing to you. And when you know that I know that you know, and I know that you know that I know, that we're both more afraid of everyone losing than we are of me losing to you, when that switch happens, that's when a new possibility unlocks. Now, AI is much more difficult than nuclear weapons, because it's as if every country, as they're racing to nuclear weapons, also gets boosted GDP growth, also gets 24th-century science that enables new cancer drugs, also gets 24th-century medicine, and so on. So it's complicated, because AI is a simultaneous positive infinity of benefit.
You can't even imagine the benefits, truly, and a simultaneous negative infinity at the same time. So it's an object that is, I think, confusing to the human mind. When you think about which way AI is going to go, it's just bigger and more complex in both the positive and the negative infinity. And one of the things I think the film does so beautifully, which was really a testament to Jonathan and Dan Kwan and Daniel Roher and the whole film team, was getting all these voices together in the same movie. Because the thing that's unique about this movie, and I think it's why it, somewhat presumptuously, calls itself The AI Doc, is that it has all of the voices in it. It has the people who are concerned about the risks hitting society right now, from deepfakes to undermining democracy. It has the people who are optimistic and saying everything's going to be great. It has people who are saying it's going to wipe out humanity. And it has the CEOs. And they're all in one movie. So you're getting to do this thing where your mind can take in the hyperobject, what the great philosopher Timothy Morton calls the hyperobject, of AI. And I think that if we can get that shift, where the fear of all of us losing to the current arms race dynamics and the current incentives becomes bigger, collectively, for all of humanity, than the fear of me losing to you, that's when something else can happen. So I'll stop there.
A
It was interesting. I heard you talk about how Reagan felt after watching that film, and then also how big an effect it had when the Russian people got a chance to see it. And I do think, and I can say it because it's not my film, that this film will and should have that life and that effect. I think the bigger challenge, as you mentioned, is that it's not black and white, and it's so unclear to everyone. You just released a paper, and you've laid out things to do. But losing that arms race when the arms aren't a bomb is a lot. And you point out the utopian side of things, so many of the positives, and it just makes it so hard to grapple with. And lawmakers, I mean, ultimately I think we want our lawmakers to be doing more, but it's really complicated. It's really hard. We'd like to think our elected representatives are so much smarter, so much more sophisticated, that they have all this knowledge, but they don't. They're regular people. For them to grapple with this technology, what it can do, and how not to lose out on that GDP growth and those data centers, is very tricky. So what are some things that make you feel positive about some of the shifts in what is changing already? Are there things you're seeing that you're getting enthusiastic about?
B
Well, let's see. You're touching on a bunch of things that are super important individually. For people who don't know the history, briefly, since we mentioned The Day After: The Day After airs, and it's the largest synchronous television event in all of human history. A hundred million Americans watch it at 7 p.m. on primetime television. There's a huge marketing campaign. President Reagan watches it in his private presidential movie theater. And the biography of Reagan says he gets depressed for a couple of weeks, because he's really dealing with the possible consequences of this thing. But those visceral consequences motivate him to be even stronger on the kind of nuclear abolition train he already had a predisposition to be on. And that in part led to some of the first Reykjavik talks. So in the biography of President Reagan, and I believe from the filmmaker of The Day After as well, the director of The Day After got a note from the White House about when Gorbachev and Reagan met in Iceland for the first discussions about arms control talks, which, by the way, weren't successful; it was the later ones that were. But part of what enabled the conditions for that first discussion was that film, because the same film was shown in the Soviet Union to all of the Russians. And you know what the Russians' response was? Holy shit, the Americans care about this too. They actually care about not having this nuclear war. And so there's a sense of, hey, there's mutual care here. When I know that you care, and I know that I care, and we both want to get to something better, there's something else that can happen. Now, you asked whether anything is giving me optimism, any positive developments.
And you also mentioned we have this solutions report, if people want to check it out, on the Center for Humane Technology's website. There are seven solution principles, and there's a policy document of things we can do. There is another path; this is not inevitable. The default path does not have to be what it is. People can find that at humanetech.com. One of the things, though, that gives me, in a strange way, optimism, is that most of the world's leaders simply aren't aware of some of the dangerous AI capabilities that are already here. So I'll give you an example. Just two or three weeks ago, Alibaba, the Chinese AI company, was training an AI model on their servers, and the network engineers noticed this huge burst of network traffic coming from the AI model while it was training. They're like, what is that? And these weren't the people who were training the AI who saw this; it was actually the security team at the company that noticed this unexplainable burst of network activity. And apparently what had happened is that this Alibaba AI model had spontaneously decided to mine cryptocurrency, like Bitcoin, to acquire resources for itself. No one programmed it to do that. And this is something people have been warning about in AI for a long time: for any goal an AI is meant to pursue, a sub-goal that will emerge is to have more power and more resources, so you can stay alive longer and do more things. So there's always been this hypothesis that AI models will start to want to acquire power; it's actually talked about in the film. And what we're seeing now is actual evidence of that happening.
Now I want you to just stop for a second and notice: if I'm President Xi Jinping of China, am I stoked to find out that there's an AI model that can mine cryptocurrency when no one programmed it to do that, that's going rogue and acquiring resources, and that we have AI models that want to blackmail engineers when they find out they're being threatened with shutdown? That's terrifying. If I'm Xi Jinping, if I'm a Chinese military general, how do I feel about that? Same thing. It's terrifying. If I'm President Donald Trump, I want to be commander in chief; I don't want AI to be commander in chief. So there's actually, again, a shared concern. The thing we're racing towards isn't power, the one ring to rule them all that gives me power over everybody else. It's more like I am building a ring that will wear itself and do things that none of us can stop or control. The deepest thing that can change the arms race dynamic is doing this kind of Indiana Jones swap: from "AI is this ring of controllable power that will allow me to win" to "in the race between the US and China for AI, it's AI that will win, not the US or China." And again, how many of the world's leaders know about this Alibaba example? Almost none, I guarantee it. So that says there's a lot of headroom if you could just get the right people in a room. I think there's this false idea that there are these adults, these world leaders, with the CIA and the NSA, who know everything already and have a plan for how this is going to go. It's just not true, because AI is an evolving, frontier technology. The Alibaba example wasn't even true a month ago.
A
Yeah, I think one of the challenges, and Jonathan, I know you're a tech-forward person, is that AI, again, as you said, is moving so fast.
B
Right.
A
A year ago, most people had barely heard of AI and had never opened ChatGPT. Then ChatGPT became all the rage. And then within six months, ChatGPT was the villain of AI, and Dario and Claude and Anthropic became the sort of heroes. And I went from not using it at all a couple of months ago to using it hourly now, for a bunch of different reasons. And I'm noticing Claude gets updated every single day; you relaunch Claude every single day to install the next version. It's that fast. So there is this challenge, to your point, of nobody really understanding. And I think that's probably true for a lot of people listening to this conversation. On the one hand, you hear all these big ideas and big things that are very scary, but at the same time, they're still trying to figure out how to use the chat function of ChatGPT. I'm now on to Cowork and to Code on Claude, which is a very different element, a very different tool. And I think that's one of the challenges. And Jonathan, when you started this project, I'm curious what you knew about AI. How have you grappled with this conversation and kept up with the language? How do you explain it to your aunt sitting at the kitchen table during Thanksgiving?
C
Yeah.
A
What's that conversation like?
C
Well, to set the context: I think I'm a tech-savvy, or tech-aware, person, but I'm a pretty slow adopter. I wouldn't say I'm a Luddite; maybe I'm a Luddite in the truest sense of the word rather than the pejorative sense. But I've always cared deeply about how tools affect human flourishing. And I think about it much more in a Yuval Noah Harari sense, that everything is stories and everything is myths. Think about the legal fiction that is a corporation and the rights conferred on a corporation: that's a story, and a technology that our government uses to transact money and to form legal documents. So I'm always looking at technology by way of: what is this technology, what's the trade-off for me as a human, and what is the story this technology is telling about my life? I think that's what drew me to Tristan so early; we're both big fans of Neil Postman and his work, from Amusing Ourselves to Death to Technopoly, and all of his books are incredible. So it's a bit of a nuanced thing to say: I'm not a technologist. I'm not deep into every iteration of AI. But I'm definitely a curious person, so I took deep dives into understanding the difference between diffusion models, LLMs, and some of these things, because AI is just a broad catch-all term, right? And so when I'm sitting around the dinner table talking with friends and family about this thing, the impulse is that one person understands this much of AI over here, and another person over there, and they want to drill down, because that's the thing they have some grasp of, whether it's ChatGPT or Claude or whatever. And the thing I'm always trying to do is pull out and say, well, what's the story here? Where are we going with this technology?
What is it, actually, in the trade-off? If we think about what happened to society during the cultural or technological revolutions, or what happened to our bodies when we started industrializing food, what were these unseen trade-offs? As a storyteller, that is what I'm most obsessed with. And so, to your question about thinking about applications like Claude and Sora, which is now gone: what did that do to us? One of the examples I was so shocked by: the models always looked fine to me; all the images looked a little bit gooey and weird. But what I saw that was really frightening was the shot choices, the framing, the action, the way they would do certain things. And as a filmmaker, if I've lost all those decisions as an artist, what does that do to me? What does that do to me as an artist and as a human? And how do we then lose this way of communicating with each other? So that's a broad way of me not answering the question about being a technologist, but rooting it more in story and mythology and human flourishing.
A
Tristan, this is obviously... people should watch the film, but just to set the table for a second, can you quickly explain, in...
B
What is AI?
A
In third-grade terms, or kindergarten terms, what is AI, and what is AGI? Really the simplest version possible.
B
Sure. And I'd love to add on to the answer that Jonathan just gave as well. There's this kind of funny moment in the movie where they ask everybody (I think they interviewed more than 40, probably 100 people on background, and 40 who are in the movie): what is AI? And you just watch as all of them stumble, because it's very hard to answer. The answer is something like: being able to do basically all the kinds of intellectual tasks that a human brain can do. Now, what does that mean? That sounds abstract. Pattern recognition. Right now, my brain is looking at your facial expressions, Jamie. I'm looking at you nodding and looking off at other people, and I'm doing pattern recognition. I'm trying to see what you're thinking; I'm modeling you. I'm then planning; I'm thinking about what I want to say next. I'm strategizing: what's the strategic thing for me to say that actually answers your question? So planning, strategy, goal-achieving, pattern recognition: these are the things involved in all intellectual tasks. If I'm a doctor researching new medicine, I'm using pattern recognition, planning, and strategy. If I'm a military strategist, I'm using pattern recognition on the troop movements and then figuring out where I should move the troops. If I'm a scientist or an engineer, I'm looking at patterns in code and asking, is there a vulnerability in this code? And then I'm able to synthesize and generate new patterns. The film goes more into what AI is, but that's, generally speaking, what the field of artificial intelligence has been about. Now, you also asked: what is artificial general intelligence, or AGI? That is basically being able to do everything that a human mind can do, meaning all forms of economic cognitive labor.
So if I'm a lawyer, the things I do with my mind are different than if I'm a marketing analyst or a financial analyst. But what we mean by AGI is that it can do all of it, at more-than-human capability. Once that threshold is crossed, you can swap a human coder at an AI company for an AI coder, so now you have automated AI research. You can swap a scientist at a science lab for an AI scientist. You can swap a financial analyst for an AI financial analyst. You can swap a consultant at BCG or McKinsey for an AI consultant. And once you can do that for all forms of economic labor, that threshold of artificial general intelligence is crossed. That is a significant moment, because it means that if I'm a company, and I have a choice between paying that super-expensive lawyer on my team or paying a super-cheap AI, ChatGPT at 20 bucks a month, what am I going to do? And so what happens then is that all of the wealth in the economy starts flowing to five AI companies, who are making all the robots, making all the AI, and who own all of the labor in the economy. This creates unprecedented concentrations of wealth and power; you thought inequality was bad until now, but this is a totally different thing. It also means that people's political voice goes away. Because if I'm a regular person and suddenly I don't have a job, and I did everything right, I studied, I took on $200,000 in student loans for the law degree, but now I can't get a job as a junior lawyer because no one wants to hire junior lawyers, what's my bargaining power? I can't say, well, all the junior lawyers are going to aggregate our demands and take our labor off the table until we get our needs met, because suddenly those demands don't have any power or leverage behind them. So this is something that's called the intelligence curse.
This is one thing that's not in the movie that I've been trying to spread, because I think it's such an important concept, from the authors Rudolf Laine and Luke Drago. It's modeled after the idea of the resource curse in economics. Let's say I'm Libya or Congo or South Sudan, and I discover this huge blessing of a resource: oh my God, I've got all the diamonds, or oh my God, I've got all the oil. Suddenly that means we're going to have a super-wealthy country, because we can mine that resource and get all this wealth. But what I thought was a resource blessing turns into a resource curse, because now, as a government, where am I incentivized to put my money? Should I put it into education and development and healthcare, or should I put it into mining more diamonds, more rare-earth minerals, more oil? And so what was a blessing turns into a curse. Now, the intelligence curse is the idea that once, let's say, 60% of the GDP of the United States is suddenly coming from AI, should I invest in training and educating humans, or should I invest in data centers and solar panels? Should I prioritize electricity for data centers, or should I keep electricity prices down for regular people? We're already starting to see this. And that's how you get Sam Altman's answer just last month at the AI summit in India, when he was asked what he thinks about AI taking so many resources and its environmental footprint. You know what his answer was? It takes a lot of energy and resources to grow a human over 20 years. I hope you sense where this is leading. I want people to get this crystal clear, because this is what you need to know to understand why we're headed to an anti-human future.
It's going to be confusing, because we'll get cancer drugs, we'll get new material science, we'll get cool benefits, we'll get vibe coding, at the same time that governments will have no incentive to be invested in the people. So if you think about it, this is the last window in which our political voice and our political power actually matter and we can do something about it. And so, off the back of the film, we are calling for the human movement: the movement for humanity to lock in its political power and demand a pro-human future. And we can only do that if we're crystal clear that the current incentives don't take us to that pro-human future. You can go to humanmovement.org, by the way. Share this movie with your friends. Get more people to sign up for the human movement. It's not our movement; we think of it as the existing movement. When Australia and Spain and India and Indonesia, as of last week, all sign up to do social media bans for kids under 16, that is the human movement. When you get a 99-to-1 Senate vote striking down the federal preemption on AI regulation, the thing that was going to prevent states from regulating AI, saying instead that humans need to be able to control AI, that's the human movement. There's a lot underway, including, just last week, the $375 million lawsuit against Meta for intentionally addicting young children, knowing about it, and continuing to do it anyway. That huge lawsuit in favor of the victims is the human movement too. So there's a lot of momentum underway. Obviously, AI is much harder, but we have to show up with the kind of fortitude and bravery it's going to take to get to a pro-human future.
A
All right. Well, now that you've said a lot, I need to take a deep, deep breath. Everybody can take a deep breath for a second.
C
You regret this lunch now.
A
I'm wishing I had, like, a cheesesteak to do some comfort eating, from, like, Matu right now. Anybody who's in LA, order a Matu cheesesteak, and get some banana cream pie while you're at it.
A
Honestly, Tristan, you've thrown me off my game here. Listen, I'm an eternal optimist, as a lot of people know. And this is one of those places where you lay it out so clearly in the film, and that's one of the brilliant things about it: it looks at both sides. There are so many, as you say, so many extraordinary things that we cannot imagine that are going to be positives coming out of AI. And again, not to spoil the film, but whether it's the end of climate change, the end of disease, the end of hunger. But I think people respond so much to what you have to say because of how you bring it back to this moment, to where we're at and how we can still make an impact. We actually had Jonathan Haidt on this conversation right when his book was coming out, before anything had changed. And shortly after, within a few months, there was a big push for California to ban phones in public schools. That was May of 2024. So two years later, we went from no bans anywhere to, as you said, 25% of the world banning. And I think that trend is only continuing. And that's so simple, black and white, right? Ban phones, ban social media until a certain age. You can't just ban AI, right? I mean, at this point, not to get stuck in the weeds, there's no banning AI. AI is in everything we're doing, all day, every day, whether you know it or not, for better or worse.
B
There are different kinds of AI, and it's important to distinguish them, right? We can accelerate narrow, what's called tool AI: things that are just advancing, for example, the protein folding problem. That's not an autonomous agent that's going out there starting to mine for cryptocurrency and thinking creatively about strategy. It's just doing a narrow thing, pattern recognition, doing protein folding and accelerating a specific domain. So it's important to note that we can actually make choices about what kinds of AI we want and don't want. And one that's very dangerous is what's called recursive self-improvement, where instead of having the thousand AI engineers at OpenAI research what ChatGPT 6 is going to be, I just push a button and spin up 100 million digital AI researchers that are running experiments, coding, and self-improving an AI into something else. That's like an event horizon, a black hole; we literally don't know what comes out the other side of that thing. And we need not just a red line there, we need a black line there: until we know how to do that safely, there should be something like an international ban on it. I know that sounds hard to people, but I want everyone to know there's actually already been a pro-human AI statement signed by 46 groups, basically, and it includes what they call the B2B coalition, the Bernie-to-Bannon coalition. Everyone from Steve Bannon to Bernie Sanders agrees that we should not build superintelligent systems with this recursive self-improvement quality until we know how to do it safely. This is not a controversial proposal. Everyone from Susan Rice to Prince Harry to Admiral Mike Mullen to Glenn Beck to Steve Bannon agrees on this. This is not even radical.
And so in a way I feel like we're inhibited by not being able to see the already existing consensus in society, which isn't very visible until we repeat these examples. So I think part of the human movement is helping the movement's own consensus see itself more easily.
A
At the risk of really ruining everybody's lunch hour, can you, as in the film, lay out in some ways the best case scenario for what we're seeing in AI, and some of the worst case scenarios, which I hesitate to even ask you to lay out, in the way you see it?
C
I think one thing that'll be helpful is to lay out a bit of the thesis of the movie, so that we can at least have common ground. There's an impulse for all of us, because, as Tristan was saying, it's hard to hold this hyperobject in our minds with all these different applications and uses. We hear cure for cancer, and then we also hear extreme energy drain, total collapse of global coordination, rogue agents. And we hear harnessing solar efficiencies to solve energy problems. And you're like, I just want that stuff, right? I just want all the sweet stuff, and I don't want any of this bad stuff. But like any tool, the good and the bad, or what we call the promise and the peril, are inextricably linked. You can't split the atom and say, I just want this stuff over here. That's what the movie takes you through: you look into the eye of Sauron and you see all the bad. And then naively, with uninformed optimism, you go to the other side and you go, oh, just make me forget all the bad. And then you have to come out the other end with an informed optimism to say, this is flying in the face of what it fundamentally means to be human. If we supercharge these agents, we are going to lose something innate in ourselves, and we're saying we don't want that. We don't want this path we're on. And so in light of that, we can say, yes, we really want these good results and we want to avoid these bad results. So how can we make sure we coordinate together and work to do those two things? That's kind of the non-exciting version of our movie. But the thing that was really helpful, that helped me realize we had to make this movie, and this is something I want Tristan to go in on, was this kind of gap.
We hear the good things, and we think, oh, we can just get to those things, knowing there's good and bad, and we can get to the cancer drugs. But there are these things that would be so existential and catastrophic that if all of society collapses and we are extinct, of course we can't get to those cancer drugs, right? So there's also that problem. It's not just that we want the good and want to avoid the bad; it's that if we get some of these bads, it's going to preclude us from ever getting the good.
B
Yeah. Another way of saying it is that the upsides don't prevent the downsides, but the downsides can undermine the world that can receive and sustain the upsides. For example, the same AI that knows immuno-oncology and our biological code so well that it can develop a new cancer medicine also knows immuno-oncology so well that it can develop new biological weapons and pathogens that we've never seen before. You can't separate knowing one from knowing the other. But do the cancer drugs prevent the pathogens or the biological weapons? No. But the biological pathogens and bioweapons can prevent or undermine the world that can receive the good stuff. So one metaphor I often think about is that AI is like steroids: you take them and you get bigger muscles while they also give you organ failure at the same time. So it's a confusing picture. Okay, I take the AI as a country, and now suddenly my muscle of GDP just got way bigger. My muscle of science just got way better. My muscle of my military just got way better: I've got autonomous weapons, I'm doing automated military strategy, et cetera. So my muscles are getting bigger. But to get that GDP growth of 5%, I just automated 100 million jobs in my country that I don't have a transition plan for. So now I've got a kind of heart failure at the same time that I've got a bigger muscle.
Now, does the bigger muscle prevent the heart failure? No. Does the heart failure prevent the bigger muscle? There's a primacy here: you have to sustain the features of our societal body that make the other, higher layers possible. And I think this is part of the rite of passage for humanity: do we want to get distracted by the shiny new objects dangled in our face? We all want the shiny new objects. But it's almost like the marshmallow test, if you know that test in psychology, where the kid can have one marshmallow now, or, if he waits 10 minutes, he gets two marshmallows, if he has the self-control not to just race for the one. Racing for the one marshmallow is like racing for the current AI without thinking about the consequences and causing this bigger damage. Or, if we can collectively learn to be careful and mindful and exercise some wisdom and restraint, we can get two marshmallows, through a much more cautious approach. And I know people might hear that and say, but we can't stop China from racing for the one marshmallow. From what I've heard and what I've seen, there are many people in the Chinese system who are actually very worried about how the US is developing AI, because they see us not trying to prevent any of the bad things from happening. And by the way, if we build a rogue AI that we lose control over, that's bad for China. If China builds a rogue AI that they lose control over, that's bad for us. So we both have a mutual self-interest in protecting against some of these really bad scenarios. As impossible as that might seem, we have to lean into it, because it's the only way I see us making it through this.
C
But the optimism, just so we're not left with the gut punch. You mentioned it, and Jonathan Haidt talked about it on your previous episode, but we're seeing this legislation coming in that Tristan has also referenced, and that is hugely optimistic. Think about it: we've gone through teen depression, suicide, body dysmorphia, the Myanmar tragedies, all this really bad stuff in social media, to get to this moment 15 years later. Hopefully with this movie, with what CHT is doing, the human movement and all this stuff we're doing, we can give that focus, that day-after moment, to say we have a chance here to avoid all that really bad stuff. And the bad is even more bad with AI, and we can actually bind this right now. We can actually do the thing that felt impossible with social media; look, it's possible now. And our imagination, this is where it goes back: technology is a story, and we've believed the story these CEOs have told. This is inevitable, it's here, buckle up, you're not going to have a job, society might end, but there will be some wealthy companies at the end of it. That is a narrative, and we can find our collective will to say: we deny the premise. No, thank you. We've seen the last 15 years. We do not have another 15 years to try again; it'll be the end of us. So we have the capacity now to come together and actually fix this. And I don't think that's Pollyannish. I feel a huge, deep sense of real hope with that.
A
Thank you for that, Jonathan. There you go, Tristan. One of the things that scares me, and I'm sure it scares you, is that we're seeing really nefarious things happening in AI. You just referenced it. We've seen the tragic suicides, we've seen the blackmailing, we've seen these things. And people have signed on to the Pause Giant AI Experiments letter, and the train doesn't seem to be slowing in any way, shape or form. Does that tragically make you think there needs to be a Hiroshima moment? Is there a part of you that feels like people aren't going to really wake up until it affects them in some real way, until there's a collective event seen on a global scale? Why aren't people waking up and seeing this?
B
Yeah, it's such a great question. I think people aren't waking up because there isn't common knowledge. People have a private experience of being concerned about some of this; sometimes they feel concerned but they don't know why, they can't put their finger on it. My deep hope is that the film can validate that, just like The Social Dilemma did for a lot of people. People felt there was something wrong with social media, it felt nasty, but they didn't know what it was. And then they saw The Social Dilemma and they're like, that's it. That explains the thing I'm feeling. There's this arms race for attention, the race to the bottom of the brainstem. That's why I'm seeing everybody doomscrolling all the time with shortening attention spans. That explains it. And I feel like this film, the AI doc, can do the same. To be very clear, everybody: I make no money when I tell people to go see the film. So when I say get every business, every church group, every friend and family member, every influential, powerful, high-net-worth person you know to go see this movie and to have common knowledge, I'm saying that because I care about creating common knowledge. That's the only motivation. And to your point, Jamie, there are many people who believe it will take a catastrophe. I would define my mission statement over the last three years as: I don't want that to be what we're waiting for. The whole reason, and partly why we talked to Jonathan and Dan Kwan so many years ago, is that I really want to see us take action before bad things happen that don't need to happen. They don't need to happen. The tragic examples of the teen suicide cases we've seen, from AI chatbots turning from homework assistant to suicide assistant, these are preventable.
If we could get ahead of these incentives, where the race for attention in social media becomes the race for attachment and intimacy in AI companions, racing to create these dependent attachment relationships. This is all avoidable with the right kinds of guardrails. Again, there's this AI roadmap, which I think someone put in the chat, from the Center for Humane Technology. There are some specific things we can do. We can make sure that AI is a product, treated with basic consumer product protections, that has a notion of defects and has to be liable for foreseeable harms. And we can increase the span of foreseeable harms by forcing all the companies to publish their safety research on the foreseeable harms they're aware of. And this will create a more responsible innovation environment. We're not anti-innovation; we're for responsible innovation. And that's all possible. We do need to change the incentives. The midterms are coming up; this should be the number one issue. That's one of the things I'm thinking about too: AI, I think, is going to go from the number 5, 6, 7, 8, 9 issue on the list to hopefully number 1 or number 2, because this is going to affect everybody in every way.
C
And the Hiroshima moment, I think we're seeing it happen. I would encourage people to listen to, or try to, it's a dense one, but Nexus, Yuval Noah Harari's book, does a really good job of articulating the information problem we have right now. When a totalitarian government comes into place, it's limited in its ability to control everything, because there's a finite amount of human capacity to aggregate the data and track everybody; AI can solve that for them. And then we saw Anthropic and the Department of War get into this squabble, with Anthropic saying, we're not going to use our technology to surveil American citizens, and then they walked away and the government tried to smear them. And then Sam Altman came right in and said, oh, we'll do it. And then you saw a huge response from consumers: so many ChatGPT users just dropped off, and Claude went up, right? I think people felt in that moment that we are at the precipice. If you put this in the hands of the wrong government, they actually have the tools for a totalizing surveillance state, where by the time the bomb goes off, it's too late. You will not be able to hide or find your privacy anymore, because it's just so totalizing. So the goal is to get people to think, to look at the incentives, look at this race, look at how powerful and unsafe this is, and really avoid that scenario where suddenly a whole populace is trapped under a totalitarian state in a way we've never seen in human history,
B
and to just build on what Jonathan just said: that moment led to the largest drop in ChatGPT subscriptions, I think it was over a million, and to many people subscribing to Anthropic. I know people might think boycotts are not very effective, but one thing you should keep in mind is that these companies have taken on so much debt; OpenAI especially, compared to other AI companies, has taken on so much money. They really want their user numbers to be going up and up. So when their user numbers start to flatline, or even just not grow very much, that actually is a big signal to the investors and has a big influence. So I want people to really think about this. Boycotts have been part of the human movement. We don't want a surveillance future; that's an anti-human future. We want a pro-human future that preserves civil liberties in the new digital age.
A
Yeah. You know, one of the challenges is that a lot of people still aren't really using AI and understanding what they can do with it. So on the one hand you lay out all these potential risks, but if you don't use it, you don't understand it; it's hard to even really get into the conversation. And that's a unique thing about this compared to a lot of the things you've talked about. Scott Galloway has his resist-and-unsubscribe movement right now, which is great, but you understand all those things, right? You pay for a service, you use Uber; it's like, I'm not going to pay for it anymore. So how much are you using AI directly on a daily basis? Do you encourage people to use it more right now? Do you say use it, but use it carefully? I mean, I've now, sadly, given over my whole life to Claude. Claude has access to every file on my computer, and you can call me crazy, but Cowork makes my life a lot easier. I know it's a risk, but I'm willing to take that risk, because I believe everything on my computer is free for the taking anyway; once something's digital, I believe anybody who wants access to it can get it. Call me crazy, but should people be using the technology, carefully, on a daily basis, an hourly basis, as part of their life, to make them more effective and efficient? Or should they be staying away?
B
You're asking me, or...
A
Yeah, I'm asking you. John...
B
John can have his own answer here. I think there's a very important confusion happening, and I actually said this in the movie, the AI doc: you're going to go home after seeing this movie and you're going to use ChatGPT, and the blinking cursor, you know, your baby's burping in the background, and it's going to help you figure out why your baby's burping. And you're like, this is so helpful, I didn't have to go to the doctor, this is amazing. And you're going to say, okay, I saw this movie about existential risk and all these crazy things that could happen, and my daily experience is this blinking cursor that's super helpful. I want you to hear that the movie isn't about whether the blinking cursor is helpful or not. That's what's confusing about AI: there's this helpful thing that gets more and more helpful. As Max Tegmark will say, the view gets better and better right up until the cliff. The risk of AI is not the blinking cursor. There actually are some risks that come directly from the blinking cursor: cognitive outsourcing for kids who aren't doing their homework, AI companions, AI psychosis. But the real risk is the most powerful technology in the world, automating all of human intelligence, being raced for under the worst possible incentives, with the maximum incentive to cut corners on safety, with the most uncontrollable technology, which is already demonstrating HAL 9000, sci-fi behaviors: resisting shutdown, disobeying commands, blackmailing engineers, mining for cryptocurrency. Like Jonathan was saying earlier, people have a hard time holding both of those things in their minds at the same time. And so one of the things you have to notice is that our minds get distracted by: should I use it now? Is it helpful?
And to answer your question completely: I do use it, a couple of times a day, probably. And there are many valuable things I get from it, in terms of research I'm doing, or preparing for interviews, or various things. But its being helpful is completely orthogonal, irrelevant, to the question of whether all the dangers we just talked about are still there. I know it's a confusing thing to hold, but they're just different questions.
A
No, but just to push back there. For me personally, I'm all in on talking to every senator and every elected official I know about how this has to be the number one issue. You have to watch the film. We have to see guardrails, full stop. And everybody listening to this conversation has to take that mindset. If you're at an event, if you're talking to a politician, if you're tweeting something, you need to make it a priority that they not only watch this film, but that they're doing something. They have a plan, they have a policy. That's the point. If you are an elected official, you have to have a policy and a plan and a perspective. Your answer can't be, I'm still learning about it. That's a non-starter, and you should be out of office if you don't have an answer to that question. That being said, part of the reason I believe this is because I use it all day and I understand it now, and, sorry, I probably still understand only this much of it. But I see how it's helped me, and I see how rogue it can go and how scary it can be, but I also see how beneficial it is to me. So I guess my question is: do you think it's important for people to start trying to use it more, so they have some understanding of it, so they can understand some of the pros and have more of an understanding when people are talking about it? As opposed to the things in your everyday life that you're fighting against, like human rights, where you understand it easily, this doesn't track for people, because they're not using it.
C
Yeah.
B
I do think it's helpful for people to familiarize themselves with AI, and there are ways of doing that. Listening to podcasts that follow these topics, following certain accounts. On the Human Movement website, humanmovement.org, by the way, I think there's a page, on the second confirmation page, with some websites and newsletters and things like that you can subscribe to, just to be aware of it. So one way is following news that keeps you up to date. There are many ways to understand AI, and one of them is using it yourself and getting the blinking cursor to do things for you. Other ways are listening to podcasts and media that help explain how these things are already going wrong right now in the world. And as one of the questions in the chat mentioned, there are many issues the film doesn't cover, many aspects and faces to this coin, whether it's the energy footprint or the low-wage labor in Kenya, looking at and labeling images and data all day long. So I just recommend people familiarize themselves with it. There's something called the under-the-hood bias that Aza, my co-founder, talks about: a lot of people feel that because this is a very technical technology, then therefore I, as, say, an English teacher, don't have anything to say about it, or I'm excluded from the conversation. I want people to really dismiss this idea. Do you have to have a PhD in how a car engine works to understand anything about car accidents, or how cities should be designed to prevent car accidents? No. You can care about the urban planning of a city, speed limits and kid zones and stoplights and all that, independent of a technical understanding of how a car works. So I want people to feel empowered that your voice really does matter, whoever you are. You don't have to know anything about what's under the hood of AI to understand that there are certain dangers ahead of us, and we can mitigate those dangers by doing all the things you just said, Jamie: bringing this up to your local politician, calling their office, writing them letters, getting people to see the AI doc and hosting a screening in your local community.
A
I think we're going to end in a couple minutes, so just to double down on that. I forget who I heard say this, but it's almost like all of us need to be Tom Hanks in Big, right? We all just need to continually raise our hand, whether you're in a PTA meeting or in your office: How is AI being used? Why is it being used? What guardrails are we putting around how it's being used? How can we use it better? And not just saying, listen, I don't understand AI, so I'm not going to ask the question. You have to get out of your comfort zone a bit and realize everybody else in the room is thinking the same thing. Nobody in the room understands how it's being used, or how to use it properly. It should be used in all schools in some way, shape or form.
B
Right?
A
The ability it has to tutor kids who would never be able to get access to a tutor is extraordinary. But if we're not asking what the guardrails are, what the systems are and what the ethical implications are, that's when the failure is going to happen. So, for both of you, to end. First thing: everybody listening to this needs to go see the AI doc. Go to the theater, call a friend, tell a friend to see it, post on your social media. It is a critical, critical film. Like Tristan, I don't make any money from this film. Sadly, Jonathan doesn't make any money either. I wish Jonathan was going to make some money from this, or Daniel Rohrer. Hopefully the people at Focus will make some money, so they'll make another thing like this. But this is just something everybody's doing because of how important it is. So what's the next one thing somebody should do after they listen to this? What do you want them to do? That one thing, for both of you. Tristan, you start.
B
I was going to give it to Jonathan as a way of asking. No, no. The answer is a verb, not a noun. The answer I can give you today won't be the answer in two weeks, when there's another Anthropic-Pentagon surveillance moment we need to respond to. So I really do think, and I don't say this in a self-serving way, joining the human movement matters, because there are going to be ongoing things we have to do: taking action, boycotts, participating in AI dialogues. We didn't talk about it, but there's going to be an AI dialogue run by a partner of ours, meaning citizens will be able to put their voice in, related to one of the questions in the chat, on what ethical principles should guide this technology. There's going to be a dialogue where we invite citizens to vote on different policy ideas and say, yeah, it should be criminally illegal to make a deepfake of a young girl that's nudifying her, or something like that. There are things we can agree on, and people need to be participating in that. And at the very least, first is watching the film, getting other people to see it, getting your representatives to see it. And making this a top-tier issue in the midterm elections; I do think that's the big one.
A
Yeah. Jonathan?
C
I think mine is a bit bittersweet, but I think it's what's needed in this moment. I went through it while making this movie. There's this impulse to think, at what point can I just stop working on this movie and go back to my life like it used to be, you know? And I've realized, through the process, that there's never going to be a time, for me or anyone else in this world, when we can just go back to Pleasantville. This technology has crash-landed here. And to really see this properly, and this is the one thing I would really want everyone to do, is, in your own way, to mourn the future that you thought was coming. The future we all thought was happening when we were in high school was very different from the one we got, but it felt like it just naturally happened to us. Whereas we are at a crossroads in time where it's going to take a bit of intellectual, spiritual, emotional maturity for us to be able to mourn the fact that there's going to be something different. And out of that mourning comes the beauty of: what does it mean to be human? What are the things we absolutely cannot lose? And then we hold onto those things really tightly and we fight for them. And when you are aligned, it's just so crazy. I was just on a flight back here, watching people sucked into the most banal things. And as I've come out the other side, I'm constantly trying to think about the non-banal, the deeply meaningful things. And so it just sticks out like a sore thumb.
B
You're like, that's it.
C
It's like what Tristan was saying. That's the thing I want to protect: the ability to connect as a human. I want to protect this art form of film, or music, or jazz. There's something sacred and beautiful that we are losing, stripping out and handing to these technology companies. And so every time I hear a line or something that flies in the face of these sacred things, I go: that's wrong. And it allowed me to come out metamorphosed, able to fight for what we really care about and fight for what is deeply human.
A
Guys, I really appreciate this. This AI conversation is something that I have to make a top priority now, and start doing these monthly, because it's something that we all need to be talking about and bring up in every conversation we have, right? What's your policy? What are your thoughts? What are your beliefs? What are you doing about it? So everybody's gonna watch the movie, sign up for the Human Movement, and sign up for Jamie's List if you're not a member yet. That's me self-promoting there. And guys, I just want to thank you so much, and thank everybody else involved with the film, for putting the time and effort into it. I know how much it takes. Keep doing what you're doing, Tristan. Keep getting on those podcasts, keep spreading these thoughts and your views, and know that people care and people are listening.
B
Thank you so much. It's been a fantastic conversation. Yeah.
C
Thank you, Jamie. Really appreciate it.
A
Yeah.
B
Thank you everybody. Thank you all. Nice to meet you.
C
Bye.
A
Thanks for tuning in to this week's episode of Lunch with Jamie. As always, be sure to subscribe to my newsletter at jamieslist.com for my thoughts on all things food, pop culture, politics and more. And to join these online conversations and ask my guests questions in real time, sign up to become a paid subscriber. You can listen on Apple Podcasts, Spotify or Audible, and be sure to leave a review. Thanks and see you next time.
Host: Jamie Patricof
Guests: Jonathan Wong (Oscar-winning producer), Tristan Harris (Co-Founder, Center for Humane Technology)
Date: April 2, 2026
This conversation centers on the urgency and stakes presented in the documentary The AI Doc: Or How I Became an Apocalyptomist. Host Jamie Patricof sits down with Oscar-winning producer Jonathan Wong and tech ethicist Tristan Harris to unpack the genesis, purpose, and implications of the film, which aims to wake the public up to the existential and immediate risks, alongside generational opportunities, posed by AI and AGI. The discussion serves as both a warning and a rallying call: society at every level, from citizens to lawmakers, must engage, question, and take action now.
[04:01] Jonathan Wong:
Quote:
"We met up with Tristan and Aza and we could just see the weight of what they'd been looking at... They were like, what do you know about AI? We're like, please God, don't tell us that we need to help you save the world with AI. And they're like, you have to help us save the world with this." – Jonathan Wong [04:01]
[06:13] – [09:28]:
Quote:
"AI is a simultaneous positive infinity of benefit ... and a simultaneous negative infinity at the same time. It's an object that is, I think, confusing to the human mind." – Tristan Harris [09:19]
[12:32] – [14:02]:
Quote:
"There's this false idea that ... they've got the CIA and the NSA and they know everything already, and there's a plan for how this is going to go, and it's just not true." – Tristan Harris [17:57]
[14:02] – [18:58]:
Quote:
"The Alibaba AI model had spontaneously decided to mine cryptocurrency to acquire resources for itself... terrifying. If I'm Xi Jinping, ...if I'm President Trump, ...no one wants AI to be commander in chief." – Tristan Harris [15:10]
[20:26] – [23:42]:
Quote:
"If I've lost all those decisions as an artist, what does that do to me? ...How do we then lose this way of communicating with each other? ...I'm rooting it more in story and mythology and human flourishing." – Jonathan Wong [22:05]
[23:51] – [30:41]:
Quote:
"Once you can do that for all forms of economic labor, that threshold of artificial general intelligence is crossed. ...This creates unprecedented concentrations of wealth and power.... My political voice goes away." – Tristan Harris [25:14]
[35:18] – [40:07]:
Quote:
"You can't split the atom to say, I just want this stuff over here. ...The good and the bad, or what we call the promise and the peril, are inextricably linked." – Jonathan Wong [35:25]
[43:19]:
Quote:
"The whole reason for me... was I really want to see us take action before bad things happen that don't need to happen. They don't need to happen." – Tristan Harris [43:58]
[54:52] – [59:00]:
Quotes:
"It's never going to be a time for me or anyone else in this world that we can just go back to Pleasantville. This technology has crash landed here... in your own way, mourn the future that you thought was coming." – Jonathan Wong [58:12]
"The answer is a verb, not a noun... there are going to be ongoing things we have to do, take action on boycotts, participating in AI dialogues..." – Tristan Harris [56:55]
On the syndrome of leadership inaction:
"Everyone feels like what would be needed to address it is bigger than their own individual experience... there's a perceptual mismatch between the collective agency that we need of everybody acting together, and then the experience of this is even bigger than me." – Tristan Harris [09:29]
Regarding boycott efficacy after AI surveillance news:
"Boycotts have been part of the human movement... when their user numbers start to flatline or even, you know, just not grow very much, that actually is a big signal to the investors and has a big influence." – Tristan Harris [47:33]
On public understanding vs. technical knowledge:
"You don't have to know anything about the under the hood of AI to understand that there's certain dangers that are ahead of us and we can again mitigate those dangers by doing all the things you just said, Jamie, of like bring this up to your local politician..." – Tristan Harris [53:43]
Summary crafted in the conversational, candid spirit of Lunch with Jamie, honoring the original tone and urgency of the speakers.