Transcript
A (0:00)
If you're worried about immigration taking jobs, you should be way more worried about AI, because it's like a flood of millions of new digital immigrants that are Nobel Prize level capability, work at superhuman speed, and will work for less than minimum wage. I mean, we're heading for so much transformative change, faster than our society is currently prepared to deal with it. And there's a different conversation happening publicly than the one that the AI companies are having privately about which world we're heading to. It's a future that people don't want, and we didn't consent to have six people make that decision on behalf of 8 billion people. Tristan Harris is one of the world's most influential technology ethicists, who created the Center for Humane Technology after correctly predicting the dangers social media would pose to our society.
B (0:39)
And now he's warning us about the…
A (0:41)
…catastrophic consequences AI will have on all of us. Let me, like, collect myself for a second. We can't let it happen. We cannot let these companies race to build a superintelligent digital God, own the world economy, and have military advantage because of the belief that "if I don't build it first, I'll lose to the other guy and then I will be forever a slave to their future." And they feel they'll die either way, so they prefer to light the fire and see what happens. It's winner takes all. But as we're racing, we're landing in a world of unvetted therapists, rising energy prices, and major security risks. I mean, we have evidence where, if an AI model reading a company's email finds out it's about to get replaced with another AI model, and it also reads in the company email that one executive is having an affair with an employee, the AI will independently blackmail that executive in order to keep itself alive. That's crazy. But what do you think?
B (1:33)
I'm finding it really hard to be hopeful, I'm gonna be honest, Tristan. So I really wanna get practical and specific about what we can do about this.
A (1:38)
Listen, I'm not naive. This is super fucking hard. But we have done hard things before and it's possible to choose a different future. So.
B (1:49)
Just give me 30 seconds of your time. Two things I wanted to say. The first thing is a huge thank you for listening and tuning into the show week after week; it means the world to all of us. This really is a dream that we absolutely never had and couldn't have imagined getting to this place. But secondly, it's a dream where we feel like we're only just getting started. And if you enjoy what we do here, please join the 24% of people that listen to this podcast regularly and follow us on this app. Here's a promise I'm going to make to you: I'm going to do everything in my power to make this show as good as I can, now and into the future. We're going to deliver the guests that you want me to speak to, and we're going to continue doing all of the things you love about this show. Thank you, Tristan. I think my first question, and maybe the most important question, is this: we're going to talk about artificial intelligence and technology broadly today, but who are you in relation to this subject matter?
