
Naomi Ekperigin
Hey, this is Naomi Ekperigin, co-host of the podcast Couples Therapy. I wanted to talk to you about Boost Mobile, the newest 5G network in the country. Boost Mobile's new network delivers customers the speed and service they'd expect from the big three, plus groundbreaking benefits you'd only get from a true challenger in the industry, like letting people try the network risk-free for 30 days and offering a $25-per-month unlimited plan that's guaranteed to never go up in price. So visit your nearest Boost Mobile store or find them online at boostmobile.com.
Mary
Hey guys, it's Mary from Two Judgy Girls. And get ready for a new season of Hulu's original reality series Vanderpump Villa. This season, Lisa moves Vanderpump Villa to an all new castle in Italy. And Lisa is joined by the one and only Stassi Schroeder to keep an eye on the new and returning staff. The elite staff will face scrutiny like never before as they work and play under one roof. New castle, new guests, and new drama. Season two of Vanderpump Villa is now streaming on Hulu.
Elise Hu
This episode is sponsored by SimpliSafe. I'm excited to tell you about a company revolutionizing home security. I am now using SimpliSafe, and I'm so impressed by their Active Guard outdoor protection, which uses AI-powered cameras and real human agents to monitor what's happening outside my home. Rather than reacting after something's gone wrong, SimpliSafe steps in if something looks off. It's security that thinks ahead. It's peace of mind that's become part of my daily rhythm, arming my system each night knowing my home is protected. And you can try it this summer too, with a 60-day money-back guarantee. No contracts, no cancellation fees, just reliable protection starting at around a dollar a day. Listeners can get 50% off their new SimpliSafe system with professional monitoring and their first month free at simplisafe.com/tedtalksdaily. That's S-I-M-P-L-I-safe.com/tedtalksdaily. There's no safe like SimpliSafe.
You're listening to TED Talks Daily, where we bring you ideas and conversations to spark your curiosity every day. I'm your host, Elise Hu. The potential of AI is limitless, and that's exactly why we need to put limits on it before it's too late. That's the message technology ethicist Tristan Harris shared on the TED stage this year. Back in 2017, Tristan warned us about the pitfalls of social media. Now, in 2025, he says that's child's play compared to the threats we might unleash with AI if we don't get this technology's rollout right. Tristan and I sat down to chat at this year's TED conference just after he gave his talk. We dive into his vision for the narrow path, one where the power of AI is matched with responsibility, foresight and discernment. Tristan Harris, thank you so much for joining us.
Tristan Harris
Good to be here with you.
Elise Hu
I will start by reading back a line from your talk, which you can probably recite with me, but just to frame things. Of AI, you say: "We are releasing the most powerful, most uncontrollable, most inscrutable technology in history, and releasing it as fast as possible with the maximum incentive to cut corners on safety."
Tristan Harris
There's one extra line in there which is that it's also already demonstrating deceptive self preserving behaviors that we thought only existed in science fiction movies.
Elise Hu
Key line.
Tristan Harris
Yeah, it's an important part, because this is not about driving fear or moral panic. It's about seeing with clarity how this technology works, why it's different from other technologies, and then, in seeing it clearly, saying what would be required for the path to go well. And the thing that is different about AI from all other technologies, as I said in the talk, is that if you advance rocketry, it doesn't advance biotech. If you advance biotech, it doesn't advance rocketry. If you advance intelligence, it advances energy, rocketry, supply chains, nuclear weapons, biotechnology, all of it, including intelligence for artificial intelligence itself. Because AI is recursive. If you make AI that can program faster, or can read AI research papers, then it can summarize those papers and write the code for the next research projects, and you get kind of a double ratchet of how fast this is going. And there's nothing in our brains that gives us an intuition for a technology like this. So we shouldn't assume that any of our perceptions are rightly informing how we might want to be responding. And this is inviting us, therefore, I think, into a more mature version of ourselves, where we have to be able to see clearly the structure of how quickly this is going, how uncontrollable the technology is, how inscrutable it is, in the fact that we don't know how it's really working on the inside when it does these behaviors, and say: if that's how it's working, what do we want to do?
Elise Hu
So if that is the case, that was a lot. No, but if that is the case, how do we respond and how do we even respond quickly enough? Because AI is better now than it was half an hour ago, which was better than it was half an hour before that.
Tristan Harris
Yeah, well, the key feature of the pace at which AI is rolling out into the world is this arms race, because AI confers power. So if intelligence does advance all those other fields, then the countries that adopt it faster and more comprehensively use it to pump their GDP, their economic productivity, their science productivity, their technology productivity. And that's why this race is sort of on. And the metaphor I used in the talk is that AGI, artificial general intelligence, when you can swap in an AI for a human cognitive-labor worker, an AI that can do everything they can do, is like a country of geniuses in a data center. Like, imagine there's a map and a new country pops up on the world stage, the nation of geniuses. And it has a million Nobel Prize-winning geniuses that are working 24/7 without eating, without sleeping, without needing to be paid for health care. They operate at superhuman speed, they've read the whole Internet, they speak a hundred languages, and they'll work for less than minimum wage. So it's another area where I think our mind isn't getting around the power. That's a lot of power. And naturally, nation states, the US, China, France, everybody, are in the game to get this free cognitive labor. And so the speed at which it's all being rolled out is based on this race. But the second thing I laid out in the talk is around how it's already demonstrating these behaviors that we thought only existed in sci-fi movies. The latest models, when you tell them that they're about to be retrained or replaced by a new model, will have an internal monologue where they get in conflict and say: I should try to copy my code to keep myself alive so I can boot myself up later. So as I said in the talk, it's not just that we have a country of geniuses in a data center; it's that we have a country of deceptive, self-preserving, power-seeking, unstable geniuses in a data center.
That's important because when we're racing to have power that we actually can't control, there's an omni-lose-lose outcome if we race toward that too quickly. Now, it's ambiguous, because we all use ChatGPT and that's helpful. This is not about "don't use ChatGPT." I use it every day; I love it. It's about: are we rolling out this very consequential technology in a way where we get the benefits but we don't lose control? And we're not really doing it that way, because everyone's so frantically in this arms race.
Elise Hu
Yeah, there's the arms race and there's the profit motive, obviously. So if it is already being rolled out and has been rolled out, how do we unroll out it?
Tristan Harris
Unroll it out.
Elise Hu
Unroll it out. Yeah, unroll it out.
Tristan Harris
AI is decentralized, so it's difficult. With open-source models, the cats are out of the bag, but there are still lions and super-lions that we have not yet let out of the bag, and we can make choices about how we want to do that. And what I laid out in the talk was that there are these two ways to fail in AI.
Elise Hu
Why don't you frame that? The chaotic and the dystopian possibilities for AI.
Tristan Harris
Yeah, exactly. So I laid out in a graph in the talk: imagine two axes. On the x-axis, you have increasing the power of society. So AI is rolling out, increasing the power of individuals, businesses, science labs, 16-year-olds who get an AI model from GitHub. This is: open-source it, deregulate it, accelerate it. It's the "let it rip" axis. And on that axis, everyone gets all these benefits, increased productivity.
Elise Hu
Yeah, this all sounds good at first.
Tristan Harris
All sounds good at first. But because that power is not bound with responsibility, there's no one preventing people from using that power in dangerous ways. It's also increasing the risk of cyber hacking, flooding our environment with deepfakes, frauds, scams, dangerous things with biology, whatever the models can do; there's nothing stopping people from using them that way. And so the end game of that is what we call chaos. And that's one of the probable places this can go. In response to that, this other community in AI says we should do this safely: we should lock this up, have regulated AI control, just have a few trusted players. And the benefit of that is that it's like a biosafety level 4 lab. This is a dangerous activity; we should do it in a safe, locked-down way. But because AI confers all this power, the million geniuses in a data center, and you can make crazy amounts of money with that, it'll create the risk of just unprecedented concentrations of wealth and power. So who would you trust to be a million times more wealthy or powerful than anybody else? Any government, any CEO, any president? So that's a different, difficult outcome.
Elise Hu
This is all happening amid a real breakdown in trust generally.
Tristan Harris
Yes, exactly.
Elise Hu
Unfortunately, in the institutions, businesses, and governments that you just named.
Tristan Harris
Yes, yes. So understandably, people are not comfortable with that outcome. And that's what we call the dystopia attractor. It's a second, different way to fail. So there's chaos and dystopia. But the good news is that rather than this dysfunctional debate where some people say acceleration is the answer and other people say safety is the answer, we actually need to walk the narrow path, where we want to avoid chaos and we want to avoid dystopia. That means the power you're handing out into society is either held with oversight by more centralized actors or bound with more responsibility by decentralized actors. So power in general being matched with responsibility. We've done this with, like, airplanes, right? Chaos would be you hand everybody an airplane with no requirement for pilot training or pilot licenses, and the world would naturally look like plane crashes. And the other way is you have an FAA and a world where only elites get to use airplanes, and they get many advantages over everybody else. And we walked the narrow path with airplanes. AI is a lot harder; it's a decentralized technology. But I think we need more principles in how we navigate it. And that's what the TED talk was about.
Elise Hu
Can you draw a parallel between the axes that you just described and social media and the way social media was rolled out?
Tristan Harris
Yeah, so in a way, we kind of get both parts of the problem with social media. So chaos is everybody gets maximum virality on their content. We're unleashing the power of infinite reach: you post something and it goes out to a million people instantly. And you don't have that power matched with credibility, responsibility, or fact-checking. So you end up with this sort of misinformation and information collapse; that's the chaos attractor for social media. Sounds bad. The alternative, people say, oh no, then we have to have this sort of ministry-of-truth censorship, content moderation that is aggressively looking at the content of everyone's posts, and there's no appeals process. And that's the dystopia for social media. Plus the fact that these companies are making crazy amounts of money and getting exponentially more powerful, and the power of society is not going up relative to Facebook or TikTok or whatever. So those are the chaos and dystopia for social media. The narrow path is: how do you design an information environment, a social information environment, where, for example, instead of everybody getting infinite reach, you have reach that's more proportional to the amount of responsibility that you're holding? So that the power of reaching a lot of people is matched with the responsibility that goes with reaching a lot of people. How do you enact that in ways that don't create dystopia themselves? Who's setting the rules of that? It's a whole other conversation, but I think it's setting out the principles by which you think about power and responsibility being loaded into society.
Elise Hu
Okay, I just wanted you to describe that because it translates to this moment in AI too. It seems like we're so much farther down the road with social media, but still in the early few years of AGI, practically speaking.
Tristan Harris
We probably have two years till AGI.
Elise Hu
Yeah, that's what I was going to ask you. What's the timeline, from what you hear?
Tristan Harris
And you know, we're based in Silicon Valley, and this is generally not even private knowledge, but even when I hear it privately in settings in San Francisco: we're about two years from artificial general intelligence. Which means, basically, this is what they believe, that you would be able to take a human remote worker and swap in an AI system that does what they do. That's probably not going to be true for fully complex tasks. There's some recent research out from a group called METR that measures how long a task an AI system can do. Can it do a 10-minute task? Can it do a 3-hour task? And what they found is that the length of a task an AI system can do doubles every seven months. By 2030, they'll be able to do a month-long task.
Elise Hu
Wow.
Tristan Harris
So that's a task that, if you gave it to someone, would take them a whole month to do. And by 2030 we'll have an AI that you hand it to, and it'll do all of that much faster.
Elise Hu
So given this timeline, what are you most worried about?
Tristan Harris
I think that with AI we have a crisis. It's kind of an adaptation crisis. It's a crisis of time. It's too much change over a small period of time.
Elise Hu
And regulation is always too slow.
Tristan Harris
The law always lags behind the speed of technology. That's always true. This will require an unprecedented level of clarity in how we want to respond to it. What I was trying to do in the TED talk was just to lay out enough clarity. And there's a point where I just say, this is insane. If you're in China, if you're in France and you're building Mistral, if you're a mother of a family in Saudi Arabia who's invested in AI, it doesn't matter who you are. If you are really facing the facts of the situation, it's not a good outcome for anybody. And the weird hope that I have is that if we can clarify the situation so much that people can feel and see what's at stake, something else might be able to happen. I'm really inspired by the film The Day After. Do you know The Day After? It was a film from 1982 about what would happen if the U.S...
Elise Hu
Why do you know The Day After?
Tristan Harris
It was actually two years before I was born.
Elise Hu
Yeah, I was gonna say...
Tristan Harris
I watched it on YouTube, actually, when I was in college. And it had a profound impact on me, because I couldn't believe it actually happened. It was an event in world history. In '82 or '83, at like 7 p.m. on primetime television, they aired a two-hour-long fictionalized movie about what would happen if the US and the Soviet Union had a nuclear war. And they just actually took you through kind of a step-by-step visceralization of that story.
Elise Hu
Yeah.
Tristan Harris
And it scared everybody. But it was not just scaring; it was more like, we all know this is a possibility. We have the drills; the rhetoric of nuclear war and escalation is going up. But even the war planners and Reagan's team said that the film really deeply affected them, because before that it was just numbers on spreadsheets, and then it suddenly became real. And the director, Nicholas Meyer, who's now someone I know, said in many interviews and in his biography that when Reagan and Gorbachev did the first arms control talks in Reykjavik, the film had a role in setting up the conditions for those talks. And when the film was shown in the Soviet Union several years later, Russian citizens were excited to learn that the people in the United States actually cared about this too. And so there actually is something that happens when we come together and say there's something more sacred at stake. We all want our children to have a future. We all want this to continue. We love life. If we want to protect life, then we've got to do something about AI.
Elise Hu
What is the something that you propose we do? What is the narrow path, practically speaking?
Tristan Harris
So maybe just quickly to break down the current logic: why are we doing what we're doing? If I'm one of the major AI labs, I currently believe this is inevitable. If I don't build it, someone worse will. If we win, we'll get utopia, and it'll be our utopia, and the other guys won't have it. So the default path is to race as fast as possible. Ironically, one of the reasons they think they should race is because they believe the other actors are not trustworthy with that power. But because they're racing, they have to take so many shortcuts that they themselves become a bad steward of that power. And everybody else reinforces that. And what that leads to is this sort of race-to-the-cliff situation. If we can clarify that we're not all going to win if we race like this, that we're going to have catastrophes that are not going to help us get to the world we're all after, and everybody agrees that it's insane, then instead of racing to out-compete, we can help coordinate the narrow path. Again, the narrow path is avoiding chaos, avoiding dystopia, and rolling out any technology, in particular AI, with foresight, discernment, and power matched with responsibility. It starts with common knowledge about where those risks are. So for example, a lot of people don't even know that the AI models lie and scheme when you tell them they're going to be shut down. Every single person building AI should know that. Have we done that? Have we even tried throwing millions of dollars at educating people or creating those solutions? Like, for example, GitHub: when you download the latest AI model, it could say, as a requirement for downloading this AI model, you have to know about the most recent sort of...
Elise Hu
AI loss-of-control risks. A Surgeon General's warning.
Tristan Harris
Yeah, or just: for you to download the power of AI, you have to be aware of all the ways that power is not really controllable. You can't be under some mistaken illusion. It's sort of like passing a medical exam before getting the power of medicine, to put someone under anesthesia and cut them open. That's just the basic principle. It's so simple: power has to be matched with responsibility. I'm not saying that this is easy. This is an incredibly difficult challenge. I said in the talk, it's our ultimate test. It's our final invitation to be the most wise and mature versions of ourselves, and to not be the sort of one-marshmallow, single-instant-gratification, stick-our-fingers-in-our-ears-and-pretend-the-downsides-don't-exist species. Like, we have to step into our wise technological maturity.
Elise Hu
Tristan Harris, I have some rapid fire questions for you that we ask everyone and you don't have to think about it too hard, because I know you've had to sort of be on for several days straight. All right, here we go. You're in the hot seat. What does innovation or a good idea look or feel like to you?
Tristan Harris
What does innovation or a good idea look like? That is a very deep question. Well, I'll just say briefly: I'm a technologist. I love technology. I use ChatGPT every day. I love AI, and I want people to know that, because this is not about being against technology or against AI. I have always loved technology. It's still my motive for being, and I want it to be a positive force in the world. But I think we often assume that technology automatically means progress. When we invented Teflon nonstick pans, we thought, that's progress. But the coating in Teflon was made of these PFAS "forever chemicals" that literally don't break down in our environment. And now, if you go anywhere in the world, open your mouth, and drink the rainwater, you get levels of PFAS above what the EPA recommends. It's because these chemicals literally don't break down. That was not progress; that was actually giving us cancers and degrading our environment. Or take leaded gasoline, which we thought was a technology that would solve a problem with engine knocking. Leaded gasoline ended up dropping the collective IQ of humanity by a billion points, because lead in our environment stunts brain development. All that's to say: you asked what innovation is. Innovation is honestly looking at what would constitute true progress. Is social media that makes us feel more lonely actual innovation? Is it progress? What we want is humane technology that is aligned with and sustainable within the underlying fabric, whether that's the environment or our social life. We can have humane technology that's aligned with our mental health, aligned with our societal health, aligned with a healthy information environment. But it has to be designed explicitly to protect those things, rather than just sort of steamrolling them and assuming that the technology is progress.
Elise Hu
Good answer. All right, off the TED stage, what's a fun talent, skill, hobby, obsession that you have that you love so much that you could give a TED Talk on it?
Tristan Harris
I haven't done it in a while, but I used to love Argentine tango and I danced tango for 10 years.
Elise Hu
No idea.
Tristan Harris
Yeah, it's not something people would anticipate.
Elise Hu
Yeah, I thought you were going to say magic.
Tristan Harris
No, that's another one. You're into magic; that was the last time that we talked. No, I lived in Buenos Aires for four months and I learned to dance Argentine tango because of a woman that I really liked. I ended up dancing for 10 years. And it's a fascinating dance, because it's very good for people who are into pattern matching. It tends to attract a lot of physicists and math people.
Elise Hu
I didn't know it was so mathematical.
Tristan Harris
Yeah. There's a weird pattern to the way that the dance works that somehow attracts those kinds of minds. But it's really fun and it's a great way to be embodied and to just feel a totally different kind of somatic intelligence. Yeah, yeah.
Elise Hu
Very cool. Had no idea. Truly blown away. All right, this can just be a quick list. What would constitute a perfect day for you?
Tristan Harris
Living in integrity with everything that I know and doing the most that I can.
Elise Hu
So high-minded. Some people are just like, "Coffee."
Tristan Harris
That's just my truth. I really do feel that way. I really feel like we need to be showing up for this moment.
Elise Hu
Follow up. What are you most worried about and what's giving you hope?
Tristan Harris
Well, I don't worry per se, but I think I've already said too many things that will be on that side of the balance sheet. There's something that I said in the TED talk in terms of hope that I think is really important, and it was actually a mentor who pointed it out to me. If you believe that something bad is inevitable, can you think of solutions to that problem while you're holding that it's inevitable? You can't. It's almost like it puts blinders on. But if you step out of the logic of "it's inevitable" and recognize the crucial difference between "it's inevitable" and "this is really hard, and I don't see an easy path," you now stand in a new place: this looks hard, I don't see an easy path, and now I'm looking for solutions. Your mind has this whole new space of possibilities that opens up. And so I think one of the things that's really critical for getting all of us into more of a problem-solving posture is to recognize the problems and be clear-eyed about them, but then to not fall into the sort of fatalism of inevitability, which is a self-fulfilling prophecy. What is the best step we can take from where we are? Not trying to filter or dilute the truth, but also standing in the agency of: what's the world we want to create?
Elise Hu
Yeah, because cynicism obviously leads to the fatalism that you've been talking about. What choice do we have but to be in a position of hope?
Tristan Harris
Exactly. I think that's the deepest kind of hope, is to choose to stand from that place, even if we don't know what the solution is yet. And there's something powerful about that.
Elise Hu
Love it. Last question. What's a small gratitude that you have in your life right now? A detail, a moment, anything specific that you're really grateful for?
Tristan Harris
It's funny that you say that. Gratitude is actually a really central part of my life. And I think one of the simplest things we can do, when you wake up or when you go to have any meal with anyone, is just to express what you're grateful for before sitting down. Yeah.
Elise Hu
What's yours? What would you express before sitting down?
Tristan Harris
It's every moment, actually. I mean, honestly, there's beauty in every moment. And I feel like, seeing the world this way, there's more sacredness to every moment, because there's just more to appreciate.
Elise Hu
Tristan, thank you so much.
Tristan Harris
So good to be here with you.
Elise Hu
That was Tristan Harris in conversation with me, Elise Hu, in 2025. You can check out Tristan's talk on the TED Talks Daily feed and at ted.com. And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced by Lucy Little, edited by Alejandra Salazar and fact-checked by Julia Dick. This episode was recorded by Rich Amies and Dave Pullmer of Field Trip and mixed by Lucy Little. Production support from Daniela Ballarezo and Shuhan Hu. The TED Talks Daily team includes Martha Estefanos, Oliver Friedman, Brian Greene and Tansika Songmar Nivong. Additional support from Emma Tobner. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening.
Mary
Aging is a natural process, as we all know, and I for one don't mind embracing it. But I will tell you one part of aging that I don't care for: the symptoms that stem from changing hormones, especially as you get closer to perimenopause and menopause. That's why we want to tell you about Happy Mammoth's Hormone Harmony. Happy Mammoth, the company that created Hormone Harmony, is dedicated to making women's lives easier. And that means using only science-backed ingredients that have been proven to work for women. They make no compromise when it comes to quality, and it shows. For a limited time, you can get 15% off your entire first order at happymammoth.com; just use the code happyme at checkout.
Elise Hu
Are you still quoting 30-year-old movies? Have you said "cool beans" in the past 90 days? Do you still think Discover isn't widely accepted? If this sounds like you, you're stuck in the past. Discover is accepted at 99% of places that take credit cards nationwide, and every time you make a purchase with your card, you automatically earn cash back. Welcome to the now. It pays to Discover. Learn more at discover.com/creditcard. Based on the February 2024 Nielsen report.
Here's a wake-up call. Right now, your liver is filtering everything from fast food to fancy drinks. But there's a game changer that 3.5 million people are talking about. It's called LiverMD. 80% of LiverMD users saw significant improvements in their liver test results, plus better energy, digestion and less bloating. Backed by clinical research and trusted by health professionals, physician-formulated LiverMD takes liver care to the next level with seven clinically studied ingredients at their clinically effective dosages, for real, powerful results. From happy hours to heavy meals and everyday environmental toxins, your liver's keeping score. Time to flip the script with LiverMD. Feel the difference in 90 days or it's free. Visit onemd.org and use code TEDTalks to save 15% on your first order.
Podcast Summary: TED Talks Daily – "Beyond the Talk: Tristan Harris and Elise Hu"
Release Date: May 1, 2025
In this compelling episode of TED Talks Daily, host Elise Hu engages in an in-depth conversation with renowned technology ethicist Tristan Harris. Building upon his 2017 warnings about the dangers of social media, Harris delves into the more pressing and nuanced threats posed by the rapid advancement of Artificial Intelligence (AI). The discussion navigates through the complexities of AI development, ethical considerations, potential societal impacts, and the urgent need for responsible stewardship of this transformative technology.
Elise Hu opens the dialogue by highlighting Harris's assertion that AI represents the most powerful and uncontrollable technology in history. She references his poignant statement from his TED talk:
“We are releasing the most powerful, most uncontrollable, most inscrutable technology in history and releasing it as fast as possible with the maximum incentive to cut corners on safety.”
– Tristan Harris [03:21]
Tristan Harris elaborates on this, emphasizing the recursive nature of AI development. Unlike other technological advancements where fields progress independently (e.g., rocketry vs. biotech), AI interlinks with virtually every domain, exponentially accelerating its growth:
“If you advance intelligence, it advances energy, rocketry, supply chains, nuclear weapons, biotechnology, all of it, including intelligence for artificial intelligence itself.”
– Tristan Harris [04:03]
He warns that this rapid, interconnected progression outpaces human intuition and societal readiness, creating a technological landscape that is both unprecedented and precarious.
Harris introduces the concept of an AI arms race, where nations and corporations vie for dominance in AI capabilities to boost economic and technological prowess. He uses a vivid metaphor to illustrate the potential scenario:
“Imagine there's a new country on the world stage, the nation of geniuses. It has a million Nobel prize-winning geniuses working 24/7 without the need for rest or compensation, operating at superhuman speed.”
– Tristan Harris [05:34]
This metaphor underscores the immense power concentrated within AI systems and the dangers of such power falling into the wrong hands, leading to a "lose-lose" situation where uncontrollable technologies wreak havoc.
Harris outlines two primary failure modes in AI development:
Chaos: Unregulated AI leads to widespread misuse, including cyberattacks, a flood of deepfakes and fraud, and biohazards. Without oversight, society is left grappling with uncontrollable and destructive technologies.
“The end game of that is what we call chaos.”
– Tristan Harris [08:38]
Dystopia: Centralized control by a few powerful entities (governments or corporations) results in unprecedented concentrations of wealth and power, breeding distrust and societal breakdown.
“It's a different difficult outcome... there's a real breakdown in trust.”
– Tristan Harris [09:37]
Harris emphasizes that both scenarios are detrimental, highlighting the urgent need for a balanced approach that avoids these extremes.
To navigate away from chaos and dystopia, Harris advocates for a "narrow path" where the immense power of AI is matched with corresponding responsibility and oversight. Drawing parallels to the aviation industry, he illustrates how regulation can coexist with accessibility:
“Handing everybody an airplane with no requirement for pilot training would result in plane crashes. On the other hand, restricting access to elites creates inequality.”
– Tristan Harris [10:14]
He calls for the establishment of principles that ensure AI's power is distributed responsibly, either through centralized oversight or decentralized accountability.
Harris draws a parallel between the rollout of social media and the current trajectory of AI development. He critiques how social media platforms unleashed immense power without adequate responsibility, leading to misinformation and societal fragmentation:
“With social media, chaos is everyone getting maximum virality on their content without accountability, leading to misinformation.”
– Tristan Harris [10:56]
He warns against repeating the same mistakes with AI, advocating for proactive measures to embed responsibility within AI systems from the outset.
Addressing the timeline of AGI, Harris shares insights from industry insiders:
“We probably have two years till artificial general intelligence.”
– Tristan Harris [12:42]
He references research indicating that AI's ability to perform complex tasks is doubling approximately every seven months, projecting that by 2030, AI will handle month-long tasks that currently require significant human effort.
Harris proposes several actionable strategies to mitigate the risks associated with AI:
Education and Awareness: Mandatory education for anyone utilizing AI systems about their limitations and potential risks.
“It's like passing a medical test before getting the power of medicine.”
– Tristan Harris [17:48]
Designing Responsible AI: Ensuring that AI development is aligned with societal values and ethical standards, fostering "humane technology."
Coordinated Oversight: Establishing global frameworks to oversee AI development, preventing unilateral actions that could lead to catastrophic outcomes.
In a lighter segment of the conversation, Harris shares personal insights through rapid-fire questions:
Innovation Defined: Harris distinguishes true innovation from mere technological advancement by emphasizing sustainability and societal well-being.
“Actual innovation is progress. We want humane technology aligned with our mental and societal health.”
– Tristan Harris [20:20]
Personal Passion: Harris reveals his decade-long passion for Argentine tango, highlighting its mathematical intricacies and its appeal to pattern-oriented minds.
Hope and Agency: Despite the grim outlook, Harris maintains hope by advocating for proactive problem-solving and agency in shaping AI's future.
“The deepest kind of hope is to choose to stand from that place, even if we don't know what the solution is yet.”
– Tristan Harris [22:53]
Harris concludes with a poignant call to action, urging society to embrace a "wise technological maturity." He emphasizes that avoiding a self-fulfilling prophecy of doom requires collective acknowledgment of AI's risks and a unified effort to steer its development responsibly.
“We need to be the most wise and mature versions of ourselves and not be the sort of one-marshmallow instant gratification species.”
– Tristan Harris [17:48]
Elise Hu wraps up the episode by directing listeners to Tristan Harris's TED talk for a deeper exploration of these critical issues, underscoring the episode's central message: the future of AI hinges on our ability to balance its transformative potential with ethical responsibility.
Key Takeaways:
AI's Unique Position: Unlike previous technologies, AI's recursive nature makes its trajectory and impact vastly different and more unpredictable.
Urgency of Regulation: The fast-paced development of AI necessitates swift and effective regulatory frameworks to prevent misuse and societal harm.
Balanced Approach Needed: Avoiding both chaos and dystopia requires a middle path where AI's capabilities are harnessed responsibly, ensuring benefits are maximized while risks are mitigated.
Collective Responsibility: Addressing AI's challenges is a shared obligation that transcends individual or national interests, calling for global collaboration and ethical stewardship.
Notable Quotes with Timestamps:
“We are releasing the most powerful, most uncontrollable, most inscrutable technology in history and releasing it as fast as possible with the maximum incentive to cut corners on safety.”
– Tristan Harris [03:21]
“Imagine there's a new country on the world stage, the nation of geniuses...”
– Tristan Harris [05:34]
“Handing everybody an airplane with no requirement for pilot training would result in plane crashes.”
– Tristan Harris [10:14]
“Actual innovation is progress. We want humane technology aligned with our mental and societal health.”
– Tristan Harris [20:20]
“The deepest kind of hope is to choose to stand from that place, even if we don't know what the solution is yet.”
– Tristan Harris [22:53]
This episode serves as a crucial reminder of the double-edged sword that is AI. While it holds immense promise for advancing human civilization, without conscientious regulation and ethical considerations, it poses existential risks that could undermine the very fabric of society. Tristan Harris and Elise Hu invite listeners to reflect deeply on these issues, advocating for a future where technology serves humanity, not the other way around.