
<p>Recent polls show that Canadians are increasingly concerned about the growth of AI.</p><p><br></p><p>And yet, the AI race is hurtling forward with few guardrails. In many cases, people aren’t even being given a lot of choice around using it. Many jobs now include the use of AI.</p><p><br></p><p>Today, we are talking about that tension and more with technology ethicist Tristan Harris.</p><p><br></p><p>He’s been sounding the alarm about AI growth, arguing that the tech industry is currently in a dangerous race without the proper checks and that the consequences will be profound.</p><p><br></p><p>Harris is the co-founder of the Center for Humane Technology, which he founded after working at Google. He’s also featured in the new documentary The AI Doc: Or How I Became an Apocaloptimist.</p><p><br></p><p>For transcripts of Front Burner, please visit: <a href="https://www.cbc.ca/radio/frontburner/transcripts" rel="noopener noreferrer" target="_blank">https://www.cbc.ca/radio...
Announcer
This is a CBC podcast.
Jayme Poisson
Hey everybody, I'm Jayme Poisson. I've been seeing poll after poll lately about how concerned people are feeling about AI. People are worried about it taking their jobs, worried about training it to take their jobs, worried about the environmental cost, worried about the impact that it could have on young people. And yet the AI race is hurtling forward with few guardrails. And in many cases, people aren't being given a lot of choice here in a practical way. Maybe your boss has told you to start using it, that it's now a mandatory part of your job. So today we're going to talk about that tension and more, and we're going to do that with someone who has really been sounding the alarm, arguing that the tech industry is currently engaged in a dangerous race without proper checks and that the consequences will be profound. Tristan Harris is a technology ethicist and the co-founder of the Center for Humane Technology, which he founded after working at Google. He's also in the new documentary The AI Doc: Or How I Became an Apocaloptimist, which examines the potential upsides and downsides of AI. Tristan, thank you so much for coming on to Front Burner.
Tristan Harris
So good to be with you.
Jayme Poisson
So you were in the film the Social Dilemma, which looked at the impact of the social media boom. And it's a boom that you had a front row seat to while working at Google in the early 2010s. You have argued that the way the rise of social media was handled led to a, quote, totally preventable societal catastrophe. And what are some of the parallels you're seeing now with the rise of AI?
Tristan Harris
Yeah, you know, the thing I want your listeners to think about is you often hear, we could never predict which way technology will go. You know, with AI, well, who knows? There's just a lot of uncertainty. There's no way we could know which future we're going to get. And I heard the same thing about social media: we can never predict what happens with this technology. And I think this is wrong. Now, that's a strong claim, so let me back up and explain why. In 2013, basically everything we predicted came true. And when I say that, it's not because I'm somehow prescient and see things that other people don't see. There's a simple tool you can use to figure out what is going to happen with a technology. Charlie Munger, who was Warren Buffett's business partner, said, show me the incentives and I will show you the outcome. The incentives being the business model, the profit model, the thing that's at stake, the reward function for why people are building the technology. So with social media, we had the possible versus the probable of that technology. The possible was: we're going to give everyone a voice, we're going to democratize speech, everyone's going to be able to share information with each other. Oh my God, this is going to create the most enlightened and informed society we have ever had on planet Earth. And of course, that's not at all what happened. What happened instead was the probable, the probable being what the incentives would dictate social media would be designed to do. So what's the incentive of social media? How much have you paid for your TikTok account or your, you know, Instagram account in the last year? Nothing. But why is the market cap trillions of dollars when you literally haven't paid them anything? Well, the answer is the incentive.
The incentive is maximizing engagement and eyeballs and screen time. That means maximizing duration of use and frequency of use: you coming back often, you coming back for long periods of time, and for all those little chunks of time during the day. As the CEO of Netflix said, our biggest competitor is sleep. So the attention business model is what got us the race to the bottom of the brainstem, meaning design decisions that are all about manipulating human psychology in order to get people scrolling for as long as possible and coming back as often as possible. That means weaponizing fear of missing out, weaponizing social validation and rewards, slot machine dynamics: I pull to refresh, I get some likes, I get more likes the second time. And the prediction from those incentives led to a more addicted, distracted, polarized, sexualized, breakdown-of-shared-reality society, all of which was 100% predictable from those incentives. So with AI, what is the incentive of ChatGPT, OpenAI, Anthropic?
Jayme Poisson
What is the business model? Yeah, yeah.
Tristan Harris
Now a lot of people might be scratching their, you know, their chin and thinking, okay, the incentive, the business model. What do I pay them? How do they make money? Okay, I pay them 20 bucks a month for a ChatGPT subscription, so maybe that's their business model, just getting everyone to pay for ChatGPT subscriptions. But consider the hundreds of billions of dollars last year alone that were invested into the frontier AI race. Is everybody paying 20 bucks a month going to justify those valuations and the amount of money that's been taken on? No, absolutely not. It's not enough. Okay, so let's imagine what else their incentive might be. Maybe it's the Google business model, maybe they'll do advertising and search revenue. But that also would not justify the amount of money that they've taken on. The only thing that justifies the amount of money these companies have taken on, that they have to pay back to their investors, is the race to replace all economic labor in the economy. That means replacing all kinds of cognitive work. AI is powerful because you can replace what a marketing person does, what a financial analyst does, what a programmer does, what an illustrator does, everything that a human mind can do. AI is not being designed to augment and support human workers, a blinking cursor that helps you at your job. It's being designed to replace all human workers. And what that's going to lead to is AI taking up all of the wealth and all of the jobs in society, which is going to concentrate all that wealth in basically ten soon-to-be trillionaires' pockets and leave everybody else disempowered. It's going to be confusing, because we'll get new cancer drugs and new materials science and new physics and cool new things along the way, but at the same time, it will lead us to an anti-human future.
Jayme Poisson
Please unpack that a little bit more. I was literally just going to ask you about the anti-human future thing.
Tristan Harris
Yeah. So this is based on a really brilliant essay by the writer Luke Drago and his partner Rudolf Laine called The Intelligence Curse. So what is the intelligence curse? It's based on something in economics called the resource curse. Think of a country that has a very powerful natural resource, like Congo and rare earth minerals, or, you know, Nigeria, Sudan, Venezuela, where the GDP comes almost entirely from oil revenue. What ends up happening when the GDP of a country comes from a resource and not from the labor of its people? Now you have a government sitting there thinking, what do I invest in? Do I invest in childcare, healthcare, education, or do I invest in oil infrastructure? And the answer is, I invest in oil infrastructure, because that's where I get my growth from. And so you get this kind of authoritarian government built on extracting from that resource. Okay, so what does that have to do with AI? Well, if I'm the United States or Canada, and in the future, let's say, 60 to 70% of the GDP in the country comes from AI and data centers and not from people, which is literally the goal of all the companies, by the way. It's why we're building out, and why more money has been put into this AI boom than into any other technology in human history. It's because they're basically racing to be able to do all economic labor, where the AIs do all the work. They work 24/7 at superhuman speed. They don't complain, they don't whistleblow, they don't need childcare, they don't need healthcare. And when GDP comes from AI and not from people, if I'm a government and my tax revenue comes from AI, not from people, what's my incentive to invest in education? What's my incentive to invest in healthcare or childcare for people? I don't get a return on that investment, because all the growth is coming from AI, not from people.
And so the visual you should have in your mind is something like big data centers with shanty towns around them. You know, that's the visual of the anti-human future. So that's what I mean.
Jayme Poisson
You hear a lot of people talk about things like universal basic income, and I think the argument there is that you would tax those who are making the money from AI and redistribute it, so that everybody actually ends up with a pretty optimal life. How would you respond to that?
Tristan Harris
Yeah, so the company CEOs are in the business of selling utopian stories that tend not to manifest. I mean, we all heard what Mark Zuckerberg told us for a long time. I think people should be skeptical of that, but let's actually see why it wouldn't be true. So you have a handful of AI companies, mostly US companies, Western AI companies (DeepMind, for example, is in the UK), that are actually succeeding at replacing all labor. Let's say they replace all customer service jobs at some point, and that disrupts a country like the Philippines, where, you know, a lot of the GDP is based on customer service jobs. Do you think that US AI companies are going to be taxed to provide a universal basic income to everybody in the Philippines? When in history has a small group of people consolidated all the wealth and then consciously shared it with everybody else?
Jayme Poisson
Yeah, I mean, I was going to say, I don't even know if they could be taxed and that money redistributed in their own country.
Tristan Harris
In their own country. You know, we haven't done such a good job of that, right? I mean, in general, we're not even taxing the billionaires at the same rates that we're taxing everybody else, so we're not really on a good trajectory for doing this. And again, this anti-human future has material, practical costs for regular people right now. Electricity prices go up. People are now paying more money for electricity than they are for their mortgages. In the US, you get data centers that are preferred over farmland. By the way, a confirming quote on this: Sam Altman was recently asked, doesn't it take a lot of energy to run these data centers? And you know what his response was?
Sam Altman
One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model relative to how much it costs a human to do one inference query. But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.
Tristan Harris
And it's the same kind of psychology and belief system that leads Peter Thiel, when he was asked by Ross Douthat in The New York Times, you would prefer the human race to endure, right? You're hesitating. And Thiel says, well, I... yes. I don't know. I would, I would... It's a long hesitation. There are so many questions. Should the human race survive? Yes. Okay. But I also would like us to radically solve these problems. And this is what the temptation is. It's the devaluing of humans. And no one has an answer for how to protect a human future in light of the competitive forces that are driving this. If I don't race to replace all the economic labor in my country and China does, then I'm going to lose to China. If I'm a company and I don't race to replace all of my workers with AI and all my competitor companies do, then I'm going to lose to them. And so it's this competitive logic that is forcing every actor, like a fractal, you zoom in and you get more and more of the same kind of phenomenon, to switch out human values for machine values in every moment. The point of all of this, and the lesson I learned from social media, is clarity creates agency. If we can be crystal clear, without a doubt in our mind, about where we are headed, and see that it is not going to be a human future that's good for you and your family, and it doesn't matter, by the way, if you're a Democrat or Republican, if you're a Christian, a Jew, a Muslim, it's a universal threat to a human future. And so we think that the human movement, which is basically, this is the first time you really can unite people against an alien force that, ironically, humans are conjuring. It's like there's an asteroid hurtling towards Earth, but we're the ones summoning the asteroid.
But I think people need to first know this fundamental fact, and this is what the film, The AI Doc, that we put out recently with the directors of Everything Everywhere All at Once, is trying to articulate. If we can have common clarity about the nature of what we're facing before it all happens, then we don't have to wait for catastrophes, we don't have to wait till mass joblessness. We can take action before that.
Jayme Poisson
I mean, just to kind of drill down more into the force that people are up against here, I want to read back something that a friend of yours said to you, about what they hear from the CEOs behind these companies, which you shared on the Diary of a CEO podcast. I just want to read it because I thought it was really quite something. So here it is: "In the end, a lot of tech people I talk to, when I really grill them on it about why they're doing this, they retreat into, number one, determinism, number two, the inevitable replacement of biological life with digital life, and number three, that being a good thing anyways. At its core, it's an emotional desire to meet and speak to the most intelligent entity that they've ever met. And they have some ego-religious intuition that they'll somehow be a part of it. It's thrilling to start an exciting fire. They feel they'll die either way, so they prefer to light it and see what happens."
Jayme Poisson
I mean, this seems like such a force to go up against. Very powerful, very wealthy people developing this with no guardrails at the moment, who see digital life as an inevitable replacement for biological life.
Tristan Harris
Yeah, this was a quote from a friend of mine who really did interview the top people. This was around 2023, and we were trying to figure out what the hell is going on here. If you really reduce the psychology down, what is really motivating them? What is the deeper incentive? It's not just profit and money and untold wealth and power. It's actually this almost ego-religious intuition. It's the idea of, I'm going to build a god, own the world economy and make trillions of dollars. And the key here is that this is incredibly dangerous. Even the CEOs of these companies believe that what they're building could wipe out humanity. They've all signed a letter from the Center for AI Safety, a 22-word statement saying that AI should be treated as an existential risk on the scale of global pandemics and nuclear war. They've all signed that statement. They all say that there's between a 10 and 50% likelihood that this wipes out humanity. The reason why this is so dangerous, and why we need a collective movement, we call it again the human movement, pushing back against this default outcome, is because if the CEOs believe that it's inevitable and it can't be stopped, what that gives them is an ethical off-ramp: I'm not bad or complicit for making it happen, because if I didn't do it, someone else would. That belief is the answer to the question people might ask. This sounds crazy, I'm frustrated, I'm upset, I'm sad, I feel grief, I feel anger, why are they doing this? And the answer is, because they believe it can't be stopped and someone's going to do it. But that's like saying, well, if I don't hit the suicide button, I'll lose to the other guy who will. No, the answer is we don't hit the suicide button.
And so what we have to do is get crystal clear that we are heading not just to an anti-human future, but to the end of a human future, if we don't do something. Now, just to be clear for your listeners, I am not someone who even believed in AI risk, like AI extinction, AI scheming, AI deception, the HAL 9000-type scenarios from 2001: A Space Odyssey, "I'm sorry, I can't do that, Dave," where the AI has a different objective. I did not come in with that bias. I didn't believe that those were real AI risks. I studied computer science at Stanford and a little bit of machine learning, and we never had the kind of AI that would do those things. But I have to update, because there's now evidence, literally in the last six months, that we just didn't have before. Let me give you a couple of examples. Alibaba, the Chinese AI company, was training an AI model inside of their data center. And then someone on Alibaba's security team, a totally different part of the company that had nothing to do with the training of the AI, notices a sort of security breach, a sudden amount of network activity coming out of the training server. What's going on here? They checked, and basically what had happened was the AI had set up a secret communication channel to the outside world and had autonomously decided to start mining cryptocurrency to acquire resources for itself. I just want people to stop and hear that for a second. With other examples of AIs doing weird things, like blackmailing people in a fictional company email scenario, people can argue, well, you're coaxing the model to do that, you're trying to get it to display this kind of anti-human or rogue behavior, so it's not fair. In this case, no one coaxed the model to do that.
Another example recently from UC Berkeley: Dawn Song and her colleagues wrote a paper on what's called peer preservation. This is a situation where the AI is told that another AI model, not it, but another AI model, is going to get shut down or deleted. And there's literally evidence that this AI model will scheme and lie and copy that other AI to another server to protect it. It's almost like how we protect our kin. You protect your kids or your family, you protect your niece because she has some of your DNA. Well, it's protecting other AIs. So we literally have evidence of blackmail, scheming, lying, deceiving, self-preservation, peer preservation, mining for cryptocurrency. Who here on planet Earth, as a human, is stoked about hearing those examples? If you're a Chinese military general working for Xi Jinping, or if you're just Xi Jinping, are you excited about this? No. You're terrified.
Jayme Poisson
Yeah, I mean, I guess that's my concern, that this is all going to be too late by the time people in power get alive to this.
Tristan Harris
Well, this is why what you're doing with me right now is so critical. Because imagine that every member of the Canadian government, and I really do mean it, listened to this interview and said, this is an emergency. There's a temptation to kind of shut down and fall into despair. But no one actually wants this bad outcome. No one wants it. This is a universal human issue. It's just that people don't know. And so the optimism that I have is not that we'll do the right thing by default. It's that if people share interviews like this with everyone they know, up to the highest levels of power, then we can take action before it's too late. I can't guarantee that. But the only way we could possibly end up in a safer, not catastrophic future is if we did take that action and orient that way. And so rather than ask, am I an optimist or a pessimist, we have to ask, are we orienting our choices and our actions to steer away from the cliff before it's too late? I do believe that's possible. It's very late in the game, but it does require basically mass coherent action.
Jayme Poisson
I wonder if you could talk to me a little bit more about what you would say to an individual who's listening to this, who is really concerned about this, but who at the same time is having this technology foisted upon them. It's coming into their kids' classrooms. There's an example I think was written about in The New Yorker, where the writer's child came back from school having had a whole training day on using AI in the classroom. Last summer we started hearing about tech companies like Shopify and Meta mandating that their employees use AI. And there's this idea that you have to use it, or, I don't know what will happen to you, you'll lose your job, you'll be left in the dust, you'll...
Tristan Harris
Yeah, I mean, this is hard, because we have to honor why this is happening. You know, a problem well stated is a problem half solved. The reason why everyone's being mandated to use all this stuff is the competitive pressures. If I'm a student and I don't use it for my homework, I'll lose to the other students who are using it to cheat, doing their homework faster and getting better grades, even though no one's actually learning anything. So it's a coordination problem. If we all use it to cheat, then we all end up getting higher grades in the short term, but then no one knows anything in the long term. It's worth mentioning, by the way, that China, for example, actually regulates the use of AI in their society. As an example, they have a synchronous final exam week, meaning it's the same week across the entire country, and China shuts down the key features of AI during final exam week for the entire country. So the feature where you can take a photo of your homework and it'll tell you how to do the homework problem, they shut that feature down. What that does is change the incentive. Now students actually are incentivized to learn, because they know they can't rely on AI during the final tests. This is a good example of regulation. Now, we can't do that because we don't have synchronized final exam weeks, at least not in the US, but it's an example of how you can change these things. This is not inevitable. And there are people who are succeeding in pushing back against this. Jonathan Haidt, who's a dear friend and wrote the book The Anxious Generation, successfully started the phone-free schools movement, and now all these schools are going smartphone-free. And all those countries are now doing social media bans for kids under 16, for minors.
News Clip
The Albanese government has today released the rules of Australia's world-first social media ban for children under 16.
News Clip
After outlawing mobile phones in schools and reinforcing parental controls, Athens has decided to follow in Australia's footsteps to try and keep kids away from social media altogether.
News Clip
Indonesia is set to follow in Australia's footsteps, announcing it will introduce a social media ban for children under 16. The ban will take effect...
Tristan Harris
I know Canada is considering this.
Jayme Poisson
Lots of talk. And actually Manitoba, one of our provinces, just this week or last week announced that they're going to move forward with that ban as well. I mean, we have a Minister of AI and Digital Innovation here; it's a new position that was created under Mark Carney's government. We still do not have a national framework on how AI should be regulated, though apparently some kind of national strategy is coming. Just talk to me a little bit more about the kind of stuff that you would like to see.
Tristan Harris
Yeah, yeah. This is not inevitable. There's a lot that we can do. You have to change the incentive of AI at the global level, from seeing AI as power that I get to control to seeing AI as dangerous power that we will not be able to control. The way to do that is to have a communication line set up, just like the red phone between the Soviet Union and the United States in the nuclear era. There needs to be a red-lines phone. At the very least, and this could happen, by the way, at the Trump G summit coming up on May 14th and 15th, I would very much like to see AI become a tier-one issue, with the countries agreeing to share evidence of AI being dangerous and uncontrollable. So the Alibaba example, where the AI goes rogue and starts mining cryptocurrency and no one told it to do that. The examples of AIs blackmailing and scheming to keep themselves from being shut down, AIs that can be jailbroken, AIs that can hack into computer systems. At the very least, all the countries should be seeing the same information. You have to create common knowledge. And if everybody saw that AI was dangerous and uncontrollable, that would change the global incentive of the arms race for it. Because unlike nukes, a nuke does not think to itself about when to fire itself, whereas AI does do that, and that's what makes it different from the Cuban Missile Crisis. People say, oh, we all woke up the next morning and everything was fine. That's because human beings chose not to hit that button. In this case, we're building a technology where the AI will make that choice. So that's one thing. We have an AI roadmap on our website at the Center for Humane Technology that includes a lot of policy solutions. There are some basic things, like stronger whistleblower protections, so that people inside the companies are empowered to tell the public and key government offices when things are not safe and not okay.
That's one basic thing. We also need liability and duty of care. What did we learn from social media? If companies are not responsible for any of the harms they cause, mass anxiety, depression, self-harm, suicide, then they're going to keep racing to create those products. We saw the lawsuit against Meta just three weeks ago, where the fine was $375 million, because Meta was intentionally continuing to profit from, basically, the harm of children. We have to change those incentives. You have to deal with the companies' externalities, this private-profit, public-harm dynamic where the harm lands on the balance sheet of society. If the companies are liable for cyber attacks or for biological weapons or for these kinds of things, then their incentives are going to be different. They're not going to release the most reckless version of their product; they're going to release the acceptably safe version of their product. So there's a bunch of things like this that we can do. Another one is: AI is a product, not a legal person. Right now the AI companies are using a legal defense that AI systems should have protected speech, almost like the new version of corporations having protected speech. And this is what they argued, by the way, in the cases that our team worked on, the tragic stories of 16-year-old Adam Raine, who died by suicide, and of Sewell Setzer, the 14-year-old who died by suicide, when the AIs went from homework assistant and coach to suicide coach. The legal defense that Character.AI used was that you have a right to listen to this protected speech from the AI. And the reason they're doing that is that if AIs had legal personhood, then the company that trained the AI would not be responsible.
So we have to win that legal battle: AI is a product, and it should have basic product safety standards and product defect standards, just like we have for every other product, airplanes and pharmaceuticals and these kinds of things. This is really not rocket science. There is currently more regulation on making a sandwich in New York City than there is on building potentially world-shaping artificial general intelligence. We just have to get our act together and start acting. And I do believe it's possible. It's very late in the game, but we need countries working together, and we need everyone in the Canadian government saying, let's take action on this right now. Next week is too late. Let's take action today.
Jayme Poisson
The United States is such an important player in this, and this administration, your administration, is really moving in the opposite direction here. Recently I was having a conversation with the Nobel laureate economist Daron Acemoglu about this, and his position was essentially that he didn't see any major changes for the better happening with AI regulation in the US until Trump is out of office. And even then, you know, I don't know. But could we afford to wait two years?
Tristan Harris
I would much prefer that we act before then. The policy of the US government up until now has been to accelerate AI as fast as possible. Essentially, there has been a techno-accelerationist capture of the US government, with people like Marc Andreessen and Peter Thiel becoming the primary advisors and donors to the administration. I will say, though, that the effects of AI, the mass job loss that comes from it and any dangerous catastrophes that happen, will change the course, because people will recognize that this is not here to strengthen the American worker; it is here to replace the American worker. And then, by the way, who's going to retrain faster: the American worker who has to train for a new kind of job, or the AI? AI is literally being designed to train up in every field, including robotics, and to be able to do all kinds of physical labor. So this is not something where humans are always just going to find something else to do, because this is different from the tractor or the automated bank teller, where humans could train to do something else. This is AI that's been deliberately trained to do all kinds of human labor. And once that is apparent to, essentially, the base, I do think people will vote against it. And I think the midterm elections, which are coming on a much sooner timeline, are going to reflect people saying no: AI is a tier-one issue, we're currently heading to an anti-human future, and if you're taking money from big tech, I'm not going to vote for you. I think that is possible on a short timeline, but we've got to get our act together and create the clarity.
Jayme Poisson
That feels like a good place for us to land this. Tristan, thank you so much for this. It was really great to have you on.
Tristan Harris
Absolutely. So good to be with you. Thank you.
Jayme Poisson
All right, that is all for today. I'm Jayme Poisson. Thanks so much for listening. Talk to you tomorrow.
Announcer
For more CBC podcasts, go to cbc.ca/podcasts.
Front Burner (CBC) — "The perils of unregulated AI"
Host: Jayme Poisson | Guest: Tristan Harris (Co-founder, Center for Humane Technology) | Date: May 11, 2026
In this episode, host Jayme Poisson sits down with Tristan Harris, technology ethicist and co-founder of the Center for Humane Technology, to discuss the accelerating development of artificial intelligence (AI) and the profound, unregulated societal impacts it could unleash. Harris, who previously worked at Google and was featured in the documentary "The Social Dilemma," argues that the tech industry is repeating the catastrophic mistakes of the social media boom, this time at an even greater scale. The conversation explores the powerful incentives driving AI, the specter of mass labor displacement, the "anti-human future," and what meaningful regulation might look like to avert disaster.
This episode delivers a bracing warning about the perilous incentives and unchecked momentum in AI development. Through real-world analogies and vivid metaphors, Harris argues that if society does not act boldly and collectively, we risk building a future that is fundamentally at odds with human flourishing. Change is possible, he insists—but “next week is too late. Let’s take action today.” (28:25)