Transcript
A (0:02)
AI can learn our tastes, our fears, our biases, and use that knowledge to steer what we buy, what we believe, even how we feel. Sometimes that's helpful, but sometimes it's dangerous. So where's the line? And how do we protect free will in a world where we may be manipulated without even realizing it? Hi, everyone, I'm Lynn Thoman, and this is Three Takeaways. On Three Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists. Each episode ends with three key takeaways to help us understand the world, and maybe even ourselves, a little better. Today, I'm excited to be with Cass Sunstein. Cass is one of the world's most influential scholars, as well as a leading thinker on behavioral science and how policies and laws shape human behavior. He served in the Obama administration as administrator of the White House Office of Information and Regulatory Affairs, and he's advised governments around the world on regulation, law, and behavioral science. Cass has written dozens of books, including Nudge, co-authored with Nobel laureate Richard Thaler, which transformed how we think about decision making and public policy. His latest book, Manipulation, explores how our choices can be quietly shaped, increasingly by artificial intelligence that learns more about us than we realize. Cass, welcome back to Three Takeaways. It is always a pleasure to be with you.
B (1:54)
Thank you. A great pleasure to be with you.
A (1:57)
In your book Manipulation, you write that dystopias of the future include two kinds of human slavery, one built on fear of pain, the other on the appeal of pleasure. Let's start with fear. How can AI undermine free will through fear?
B (2:16)
It can make you really scared. AI can make you think that things are going to be terrible unless you hand over your money or your time. So AI might make you think that your economic situation is dire and you need something. Or it might make you think that your health is at risk and you need to change your behavior. It might make you think that things are unsafe. Now, if the situation is dire or unsafe, it's kind of good to know that. But AI can manipulate you into thinking things are worse than they actually are.
A (2:47)
And what could a dystopia of pleasure look like?
B (2:51)
A dystopia of pleasure sounds a little like an oxymoron. If we're delighted and smiling and everything's going great, that sounds pretty good. But if people are being diverted, let's say, from things that are meaningful to a world of videos that are producing smiles or smirks, it may be that the meaning in your life has atrophied, and what you're doing now is staring at things in a way that is making your life kind of useless and a little purposeless.
