Transcript
Hannah Fry (0:00)
I'm Hannah Fry, and as we rely more and more on artificial intelligence in every facet of our lives and businesses, I'm on a mission to find out how we can build the Internet that AI needs. Learn more later in the podcast. Bloomberg Audio Studios. Podcasts. Radio.
Barry Ritholtz (0:41)
Algorithms are everywhere. They determine the price you pay for your Uber, what gets fed to you on TikTok and Instagram, and even the prices you pay in the supermarket. Is all of this algorithmic impact helping or harming people? To answer that question, let's bring in Cass Sunstein. He is the author of a new book, Algorithmic Harm: Protecting People in the Age of Artificial Intelligence, co-written with Oren Bar-Gill. Cass is also a professor at Harvard Law School and is perhaps best known for his books on Star Wars and co-authoring Nudge with Nobel laureate Dick Thaler. So, Cass, let's just jump right into this and start by defining: what is algorithmic harm?
Cass Sunstein (1:35)
Okay, so let's use Star Wars. Let's say the Jedi Knights use algorithms, and they give people things that fit with their tastes and interests and information. If people are interested in books on behavioral economics, that's what they get, at a price that suits them. If they're interested in a book on Star Wars, that's what they get, at a price that suits them. The Sith, by contrast, use algorithms to take advantage of the fact that some consumers lack information and some consumers suffer from behavioral biases. So we're going to focus on consumers first. If people don't know much, let's say, about healthcare products, an algorithm might know that they're likely not to know much and might say, we have a fantastic baldness cure for you. Here it goes. And people will be duped and exploited. Exploitation of an absence of information: that's algorithmic harm. If people are super optimistic and they think that some new product is going to last forever when it tends to break on first use, then the algorithm can know those are unrealistically optimistic people and exploit their behavioral bias.
Barry Ritholtz (2:46)
So I referenced a few obvious areas where algorithms are at work. Uber pricing is one. The books you see on Amazon are algorithmically driven. Clearly, a lot of social media, for better or worse, is algorithmically driven. And even things like the sort of music you like on Pandora. What are some of the less obvious examples of how algorithms are affecting consumers and regular people every day?
Cass Sunstein (3:22)
Okay, so let's start with the straightforward ones, and then we'll get a little subtle. So straightforwardly, it might be that people are being asked to pay a price that suits their economic situation. If you have a lot of money, the algorithm knows that, and maybe the price will be twice as much as it would be if you were less wealthy. That, I think, is basically okay. It leads to greater efficiency in the system. Rich people will pay more for the same product than poor people, and the algorithm is aware of that. So that's not that subtle, but it's important. Also not that subtle is targeting people based on what's known about their particular tastes and preferences. Let's put wealth to one side. It's known that certain people are super interested in dogs, other people are interested in cats, and there we go. All of that is happening very straightforwardly. If consumers are sophisticated and knowledgeable, that can be a great thing that makes markets work better. If they aren't, it can be a terrible thing that gets consumers manipulated and hurt. Here's something a little more subtle. If an algorithm knows, for example, that you like Olivia Rodrigo, and I hope you do because she's really good, then there are going to be a lot of Olivia Rodrigo songs that are going to be put into your system. Now, no one's really like Olivia Rodrigo, but let's suppose there are others who are vaguely like her. You're going to hear a lot of that. That might not seem like algorithmic harm; it might seem like a triumph of freedom in markets. But it might mean that people's tastes will calcify, and we're going to get very balkanized culturally with respect to what people see and hear. So there are going to be Olivia Rodrigo people, and Led Zeppelin people, and Frank Sinatra people.
And there was another singer called Bach, I guess. I don't know much about him, but there's Bach, and there would be Bach people. That's culturally damaging, and it's also damaging for the development of individual tastes and preferences.
