
Commercial Announcer
Lowe's knows how to get you ready for holiday hosting with up to 35% off select home decor and up to 35% off select major appliances. Plus, members get free delivery, haul-away, basic installation parts, and a two-year Lowe's protection plan when you spend $2,500 or more on select LG major appliances. Valid through 10/1. Member offer excludes Massachusetts, Maryland, Wisconsin, New Jersey, and Florida. Installed by independent contractors. Exclusions apply. See Lowes.com for more details.
Running a business comes with a lot of what-ifs, but luckily there's a simple answer: Shopify. It's the commerce platform behind millions of businesses, including Thrive Cosmetics and Momofuku, and it'll help you with everything you need, from website design and marketing to boosting sales and expanding operations. Shopify can get the job done and make your dream a reality. Turn those what-ifs into... Sign up for your $1-per-month trial at shopify.com/specialoffer.
Commercial Announcer
This episode is brought to you by Indeed. When your computer breaks, you don't wait for it to magically start working again. You fix the problem. So why wait to hire the people your company desperately needs? Use Indeed's Sponsored Jobs to hire top talent fast, and even better, you only pay for results. There's no need to wait. Speed up your hiring with a $75 Sponsored Job credit at Indeed.com/podcast. Terms and conditions apply.
Commercial Announcer
Welcome to the New Books Network.
Gregory McNiff
Welcome to the New Books Network. I'm your host, Gregory McNiff, and I'm excited to be joined by Cass Sunstein, the author of Imperfect Oracle: What AI Can and Cannot Do. The book was published by the American Philosophical Society Press in the US in October of 2025. Cass Sunstein is the Robert Walmsley University Professor at Harvard University. He has served in multiple positions in the US government, and in 2024 he was awarded the Distinguished Public Service Medal, the Department of Homeland Security's highest civilian honor. In 2018, he received the Holberg Prize from the government of Norway. In 2020, the World Health Organization appointed him as chair of its Technical Advisory Group on Behavioral Insights and Sciences for Health. His many books include On Liberalism, Manipulation, Conformity, the bestsellers Nudge and Noise, and The World According to Star Wars. Today he's going to talk about his most recent book, Imperfect Oracle. I selected Imperfect Oracle because it offers a thoughtful and nuanced approach to thinking about incorporating AI into our lives. From addressing human biases to predicting outcomes, Imperfect Oracle provides a framework for how to use, and how not to use, AI to make better judgments. Hello, Cass. Thank you for joining me today to discuss your book.
Cass Sunstein
Thanks so much. It's a thrill to be here.
Gregory McNiff
Cass, why did you write Imperfect Oracle? And who is the target reader?
Cass Sunstein
I wrote it because I thought there's so much loud noise about AI, people saying it's going to make everything work perfectly, we're going to be able to predict everything, and we can just go play baseball, or watch robots play baseball, and it's all going to be unbelievably great; or people saying this is an existential threat, AI is going to destroy our species, if not through disabling our capacities, then at least through rendering our lives meaningless and useless. These kind of big, high-flown things seem to me to be speculative in the extreme. What's maybe more productive is to think: how can we use AI to make our lives better? What, concretely, does that mean? How can it be better than we are, and how can it fail? Either because it's less good than we are, or because we share something with AI, which is that we can't do certain things; that's kind of built into the structure of God's creation.
Gregory McNiff
In the first part of the book, Cass, you talk about the differences between how AI approaches judgments and how humans approach judgments. And specifically, you talk about certain, I would say, human biases and our reliance on heuristics. Could you maybe expand on that, and on how AI would compensate for it?
Cass Sunstein
Yeah. So the last 40 years of, let's say, research on humans have been the most productive in human history with respect to the question you ask. That is, human beings know our species better than ever before. And one thing we know is that we tend to use mental shortcuts or rules of thumb, sometimes called, not the most beautiful word, heuristics, which work really well but can lead to mistakes. So one heuristic people might use if they play tennis is: hit the ball deep to the other player's backhand. If you don't play tennis, then a heuristic you might use, and I hope you do use, is to assume anyone with whom you're interacting who's a person is a person of good faith and decency, who deserves respect and kindness. That's a heuristic, and a really good one. But sometimes, when we're making probability judgments, we use heuristics that lead to biases. So we might think: I can't think of any case in which someone in my neighborhood was subject to crime, so there's just not going to be crime. This is actually a little more personal than I would like, because I live in a place in which there is no crime, I would have thought; I would have predicted the probability was zero. But I heard just the day before yesterday about crime in the neighborhood. So asking whether something comes readily to mind is a heuristic. It's called the availability heuristic. It helped win the Nobel Prize for Daniel Kahneman, and it leads to a bias called availability bias. There are other biases that people show in certain domains. People show optimistic bias; that is, people tend to think they're better than the average person at various things, and less likely to be involved in an accident. People tend to think that a project that will actually take a month will take two weeks. That's called the planning fallacy; it's a bias. And people tend to show present bias: today and tomorrow really matter, the long term not so much. For economic behavior, for health-related behavior, for safety, for numerous things, present bias can cause a boatload of trouble. So our amazing mind uses rules of thumb that can create systematic errors, and our amazing mind can be subject to biases that ruin our lives.
Gregory McNiff
Excellent. One individual you introduce in the book is Friedrich Hayek, which I found interesting, because when we talk about the history of AI, people usually cite Marvin Minsky or Geoffrey Hinton. But you point to, I believe, Hayek's belief that at some point we're limited in our predictions because there's just too much data and it's too dynamic. Could you maybe expand on that? On how AI is limited in how it might predict certain scenarios, even if it has access to reams of information beyond human beings' capacity to process?
Cass Sunstein
So the good side is that certain cognitive biases that people show, algorithms certainly will not show, and AI is less likely, or frequently unlikely, to show. The less good news, though I think it's really cool news in the end, is that there are limits on prediction. And thank you for introducing Hayek. So Hayek was the greatest critic of socialism ever. And his fundamental criticism of socialism wasn't, you know, those commies, they're going to produce tyranny, though I would say myself that the commies are going to produce tyranny. That wasn't his fundamental objection; it was kind of fancier than that. He said that even if the commies are well motivated and don't want to do anything terrible, and even if they're unbelievably smart, they can't figure out how to run an economy, because they just don't know enough. So if you're trying to figure out the price of a stock, let's say, or the price of shoes or the price of coffee, you can't do it, because you don't know enough; there's just too much dispersed information. That was his great contribution to knowledge, the one that got him the Nobel Prize, and it's kind of amazingly elaborated and it has enduring importance. He also wrote later in his life, and thank you for pointing to it, a very obscure essay where he said, that's kind of how the world works over time. A prediction about something frequently is just going to depend on too many variables that are occurring on the spot. And you can't anticipate, today, how it's going to be in five years, three years, sometimes even two weeks, because the number of variables is so high and you don't have access to them all. Now, as for AI: I'm hopeful that my little book makes a link that he died too young to make. He was 91, so he wasn't that young, but he didn't make it to the modern era. His argument about the limits of prediction, and about what he called, or his followers call, the socialist calculation problem, applies also to AI. And if I may, I'll tell you a little story that maybe makes it vivid. I asked a large language model if it could predict the outcome of my coin flip. I said, if I flip a coin, could you predict it? It said no. And I said, why couldn't you? And it said, it's random. Which is not the right answer. Sorry, large language model: it's not random, it's deterministic. So I said, it's not random, is it? You know, I'm a lawyer, I'm not a specialist on this kind of thing, but I thought I knew enough to say that. And the large language model responded: you're right, it's not random. I was using that as a shorthand. It is deterministic; it's just that I don't have access to enough variables. And then I asked, what variables don't you have access to? And it listed all the reasons that you can't predict how a coin flip is going to come out. I find that a profound example, because a lot of things are like that. Some financial questions aren't susceptible to an answer because there are too many variables. Whether two people are going to fall for each other: as a happily married person, I can say that the fact that my wife and I worked out, at least on her end, her liking me, that was not predictable. And there are many things that happen in markets, like the fact that Taylor Swift is iconic. She's really good, but that would have been impossible to predict. Even if you heard her at a young age, you could not have predicted that.
Gregory McNiff
Wonderful. And I should say, as an aside, I think you mentioned you met your wife through a reply-all email to an individual, and I'll leave that for another story, or another book. You touched on algorithms in your answer regarding Hayek, and you go into more detail about the importance of algorithms and what they can and can't address. Specifically, you make a distinction between algorithms and machine-learning, AI-powered conversational systems. Could you maybe expand on that? Are we talking about one algorithm, or a system that can think? Is it more than just a rules-based approach?
Cass Sunstein
Okay, so along one dimension, algorithms, I think, deserve an A-plus, and large language models, machine-learning systems that are generative AI, deserve a kind of incomplete. The reason the algorithms get an A-plus in certain domains is that they're trying to correlate certain inputs with certain outcomes. So let's suppose the question is whether someone who's been arrested is going to flee the jurisdiction. The algorithm will know, let's say, 14 features of the person, and then it will predict whether they're going to flee the jurisdiction. On that task, machine-learning algorithms do really, really well. They outperform people. They don't show bias. They also don't show noise, understood as variability across the same circumstance; algorithms spit out the same answer every time. We have data suggesting that in that context they do really well. If you're asking a large language model, it's not correlating inputs with outputs. It's doing pattern recognition based on reams of data. So it's asking, and this is going to be kind of an approximation, what word probabilistically follows what word. Now, they're fancier than that, but that's an entry point into how generative AI works, and whatever qualifications you make to that story, it's not a correlation of real-world objective data with real-world objective outcomes. That's not what ChatGPT does; it's not what Grok does. So the risk is that if you don't turn the temperature way down, which is a technical term for something you can do, you'll get noise in generative AI: you'll get different answers, and that certainly happens. And there's also a risk that certain kinds of biases will be encoded. Not cognitive biases coming from heuristics or rules of thumb, but biases that kind of mimic human biases, because of the training data. So generative AI works, and it's getting better. It's really good at a whole host of things, but it hallucinates. Machine-learning algorithms don't hallucinate. They're not perfect, but they tend to be better than people at correlating variables with outcomes.
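To make the temperature point concrete, here is a minimal sketch, in Python with an invented three-word vocabulary and made-up scores (not any particular model's API), of how sampling temperature governs that noise: near zero, the same scores yield the same answer every time; at higher settings, they can yield different answers.

```python
import numpy as np

def sample_next_token(logits, temperature, rng):
    """Pick a next-token index from raw scores at a given temperature.

    As temperature approaches 0 this reduces to argmax (deterministic);
    higher temperatures flatten the distribution (more variability).
    """
    if temperature <= 1e-6:
        return int(np.argmax(logits))            # greedy, repeatable decoding
    scaled = logits / temperature                # temperature-scaled scores
    probs = np.exp(scaled - scaled.max())        # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Toy example: hypothetical scores for three candidate next words.
vocab = ["heads", "tails", "edge"]
logits = np.array([2.0, 1.8, -3.0])
rng = np.random.default_rng(0)

for t in (0.0, 1.0):
    picks = [vocab[sample_next_token(logits, t, rng)] for _ in range(8)]
    print(f"temperature={t}: {picks}")
# temperature=0.0 repeats "heads"; temperature=1.0 mixes "heads" and "tails".
```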
Gregory McNiff
Excellent. You touched on training data. How important is it? Is it just garbage in, garbage out, or is it one component of the larger output?
Cass Sunstein
Okay, so my son, who's 16, says about my dog, a Labrador retriever named Snow: you're everything. He calls Snow his everything dog. So training data is Snow; that is, it's everything. For a large language model, you could get it to say things as absurd as Tom Brady was not the greatest quarterback of all time, or you could get it to say that Larry Bird was, you know, not even an All-Star, to pick just an old basketball player whom I admired. So you can get it all screwed up through training data. Now, of course, it's the case that the large language models are full of massive amounts of training data that is typically not skewed in any particular way. So if you ask it, and I have asked it, to rank the 10 best athletes in certain sports, it does super well, and that's because it has good training data. So training data is everything. The reason algorithms do super well, and now we're talking about correlating real-world inputs with real-world outputs, is that they've got superb training data, and a lot of it.
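A small illustration of the garbage-in, garbage-out point, under invented assumptions: the same model class is fit once on labels that follow a true rule and once on systematically skewed labels, then scored against the true rule. Only the training data differs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical setup: one feature, and the true rule is "label 1 iff x > 0".
x_train = rng.normal(size=(2000, 1))
y_clean = (x_train[:, 0] > 0).astype(int)

# Skewed training data: labels recorded with a systematic shift, as if
# annotators only flagged the obvious cases (x > 1).
y_skewed = (x_train[:, 0] > 1.0).astype(int)

clean_model = LogisticRegression().fit(x_train, y_clean)
skewed_model = LogisticRegression().fit(x_train, y_skewed)

# Evaluate both against the true rule on fresh data.
x_test = rng.normal(size=(2000, 1))
y_test = (x_test[:, 0] > 0).astype(int)
print("trained on clean labels: ", clean_model.score(x_test, y_test))
print("trained on skewed labels:", skewed_model.score(x_test, y_test))
# Same model class, same features; only the training data changed. The
# skewed-data model systematically misses cases in the 0 < x < 1 band.
```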
Gregory McNiff
Excellent. You spent some time talking about how AI can help us address discrimination in society, and particularly in a legal setting, and you came to some surprising insights there. Could you talk a little bit about how you would suggest we use AI when it comes to the legal process?
Cass Sunstein
Yes. So let's suppose we have a human actor who is a discriminator. You know, that's just how that person is. The person says, I want men in these jobs, or I want white people in these jobs. An algorithm, unless it's asked to embed race or gender in its decisions, won't be a discriminator. And it's easy to find out whether an algorithm has been asked to embed race or gender, whether those are part of the inputs in the design. At least typically that's the case. For a person, it can be hard. If there's someone who's an intentional discriminator, that person, unless they're aiming to lose a lawsuit, is not going to say, you know, here we go: only men here, or only white people here. So it's often easier to get clarity from an algorithm than from a person. Also, something that people are really challenged by in the legal system is discrimination that is unconscious and not explicit. So it might be that someone has an unconscious bias in favor of whatever, a bias the person would regret, and the person discriminates but doesn't know it, and would be embarrassed and ashamed to find it out. An algorithm doesn't have unconscious bias. Also, we have challenges figuring out whether there's a discriminatory effect, which is sometimes illegal, where a person maybe hires a lot more white people than Black people and doesn't intend it at all. There are no discriminatory feelings; that's just how it works out, maybe because of some criterion that's used, like whether the person went to college, and maybe that ends up having a discriminatory impact. This can be very challenging to figure out. But for an algorithm, it might be really easy to figure out whether there's a discriminatory impact. You can tell the algorithm: don't have any discriminatory impact. Now, maybe that's not a good idea. Maybe you have some criterion, such as, you know, being able to run fast for police officers, and maybe men do better at that than women, and you want to keep that. But with algorithms, typically it's all just much more manageable. And if you want algorithms to produce, you know, equal numbers of men and women in certain jobs (maybe that itself would violate the law, by the way, but stipulating that it wouldn't, and you do want to do it), an algorithm can deliver that for you. So it opens up a world of opportunity for overcoming human discrimination and for producing the results which designers, on reflection, think are best.
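As a rough sketch of why a disparate-impact check is easy once decisions come from an algorithm: with the model's outputs and a protected attribute recorded for auditing, the check is a few lines. The data here are invented; the four-fifths screen is the EEOC's conventional rule of thumb for flagging possible disparate impact.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented audit data: the algorithm's hire/no-hire outputs, plus a
# protected attribute recorded only for auditing purposes.
group = rng.choice(["A", "B"], size=1000)
selected = rng.random(1000) < np.where(group == "A", 0.30, 0.18)

rates = {g: selected[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())
print("selection rates:", rates)
print(f"selection-rate ratio: {ratio:.2f}")
# The four-fifths rule flags a ratio below 0.8 as evidence of possible
# disparate impact; with these invented rates (~0.18 vs ~0.30) it fires.
```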
Commercial Announcer
When did making plans get this complicated? It's time to streamline with WhatsApp, the secure messaging app that brings the whole group together. Use polls to settle dinner plans, send event invites, and pin messages so no one forgets Mom's 60th, and never miss a meme or milestone, all protected with end-to-end encryption. It's time for WhatsApp. Message privately with everyone. Learn more at WhatsApp.com.

This episode is brought to you by State Farm. Checking off the boxes on your to-do list is a great feeling. And when it comes to checking off coverage, a State Farm agent can help you choose an option that's right for you. Whether you prefer talking in person, on the phone, or using the award-winning app, it's nice knowing you have help finding coverage that best fits your needs. Like a good neighbor, State Farm is there.
Gregory McNiff
I want to sort of follow up on that, because I think you cite studies in which AI outperforms judges, and by outperform I mean it can either reduce crime or reduce the incarceration rate overall. But I think you also cite a study in which the top 10% of judges were actually doing better than AI. How do you interpret that? Should AI be a tool? Should it supplant judges? Should we grade the judges, and have it take over for the bottom 50%? It was such a counterintuitive takeaway; I was just curious how you interpret it.
Cass Sunstein
Thank you for that. That's one of the coolest bits of material, I think, in my pages. It's certainly what I got particularly excited about. So for deciding whether people who are arrested are going to flee, as you say, the algorithm outperforms the judges. It just does better. And that's great, because it suggests we could rely on the algorithm and incarcerate many, many fewer people and have the same flight and the same crime. Or we could have a massive decrease in crime and not incarcerate more people, incarcerate the same number of people. So we just get a better fix. And there's similar data for doctors: if we relied on algorithms rather than doctors, we could have much better health outcomes and test less. Okay, but as you say, and this is the new finding, the top 10% of judges outperform the algorithm. Now, one kind of deflating possibility, which I think is not true, is that it's just noise in the system. It's like how some predictors of how 2026 is going to be, based on astrology, will turn out to be right; we can't rule that out. But I think what's actually happening is that the judges who outperform the algorithm have private knowledge that the algorithm doesn't have. Now, it might be that by virtue of experience and amazingness, they can just do better than a statistical predictor can do. Or it might be that because they actually see the defendant, they pick up something about the defendant's nature that an algorithm, which isn't going to see the defendant, can't. So you might be able to, if you're an experienced judge or just a really intuitive person, see a defendant and say, you know, this person's been a bad actor, but they're going to be okay. Or: this person hasn't been so terrible, but I don't trust them a bit. So probably it's something like that.
Gregory McNiff
That's fascinating. And it will be interesting to see how we develop and utilize AI in the legal profession. Another area you touch on is choice: effectively using AI to select a car or a dog, as you mentioned, or a retirement plan. And you actually raise a really interesting issue. I mean, candidly, humans do dumb things sometimes. Should AI account for the fact that people make stupid decisions and, you know, counterweight that? Should it just accept that they're going to? How do you think about that?
Cass Sunstein
Okay, this is such a great question, and such an issue for the coming years. So suppose people are buying a car or an appliance. You know, you might go to the store and make a decision that's not terrible, but it's not ideal. Maybe because the salesperson is clever and can get you to do that, or maybe because you're not attending to certain features of the product, or maybe because you lack information, or maybe because you have optimistic bias or present bias about some characteristic of the purchase, and that screws you up. And this can be, you know, massively important. For most people who are listening, I hope the choices generally are okay or better. But there are people who are being tricked, or are just falling into traps where maybe there's no trickster behind it, but they're making terrible choices. And I could tell some stories about people I know who've ended up really badly off because of their own, let's say, incomplete information or behavioral biases. Okay, if you have AI, you can go on, let's say, a choice engine, which could be just a machine-learning algorithm, and you could say: I want to get a whatever, and I want it to have these characteristics; list the three choices and their benefits and costs. And this can be done now; large language models will do it in a heartbeat. I think it's TBD how good they are. Clearly they're good. Are they as good as they should be? I'd rather have an algorithm that's an input-output thing than a large language model that's doing a certain form of pattern recognition. But this would be fantastic. And as to your point, one thing it would do is overcome an absence of information, and it would overcome cognitive bias. Now, it might be minimally intrusive. So let's call it the least interventionist choice engine: it just gives you options and doesn't try to counteract anything; it just tells you things. Then you could have one that's designed to try to protect people against their mistakes. So it might say: here are the three top choices. And it might ask: do you want to see any others? Knowing that you might, but that you kind of shouldn't, and wanting you to do a little click. Or, to be a little more aggressive, it could say: here are the three top choices; I can show you three others, but I really don't recommend them. A little like a doctor. And, and this is kind of a science-fiction world, though maybe tomorrow or tonight it won't be, it could know about the person who's asking. It could know that that person is, you know, reckless or impulsive, or is very concerned about short-term economic consequences and inattentive to long-term economic consequences. So it might either use language to try to protect the person against their own mistakes, or it might use the architecture of the response. So it might, you know, make you click three times to get to the impulsive choice.
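A minimal sketch of the two designs Sunstein contrasts, with invented product data and an invented scoring rule: the least interventionist engine just ranks options by lifetime cost and shows them all, while the more protective one adds friction before options whose appeal rests on present bias, meaning a low sticker price but a high lifetime cost.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    upfront_cost: float   # price paid today (what present bias over-weights)
    yearly_cost: float    # e.g., energy or maintenance
    years: int            # expected lifetime

    def total_cost(self) -> float:
        return self.upfront_cost + self.yearly_cost * self.years

# Hypothetical appliances; all numbers invented for illustration.
options = [
    Option("BudgetFridge", 500, 220, 10),
    Option("MidFridge", 800, 120, 10),
    Option("EfficientFridge", 1100, 60, 10),
]

# Least interventionist engine: rank by lifetime cost and show everything.
for opt in sorted(options, key=Option.total_cost):
    print(f"{opt.name}: ${opt.total_cost():,.0f} over {opt.years} years")

# More protective engine: flag options that look cheap today but cost more
# over their lifetime, and put a confirmation step in front of them.
best = min(options, key=Option.total_cost)
for opt in options:
    if opt.upfront_cost < best.upfront_cost and opt.total_cost() > best.total_cost():
        print(f"{opt.name} is cheaper today but costlier over time -- confirm twice to pick it.")
```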
Gregory McNiff
Oh, that's fascinating. At some point in this discussion you suggest, potentially, a regulatory framework or consumer protection. Do you think that's necessary? And as a follow-up: the Supreme Court has obviously ruled on corporate personhood, and corporations have rights. Do you expect AI systems or platforms to have rights at some point down the road?
Cass Sunstein
These are also fantastic questions. So on the regulatory side, absolutely. And I want to build kind of incrementally on what we now have. So there are restrictions on fraud and deception. I think we need a right not to be manipulated. So that's the building rather than just the using. And we need to concretize what it means to be manipulated. So hiding certain provisions of agreements so that people can't readily see them, or making automaticity favor, let's say, something that's going to get people to lose their money or their time: we don't want to allow that. So we do need a regulatory structure for what's coming, even though I'm pretty upbeat about what's coming. Whether AI has free speech: the book does discuss this; this is my science-fiction chapter of the book. So if you have a toaster which emits, let's say, not just the noise that happens when the toast comes up, but also says "toast ready" or "burnt," the toaster doesn't have free speech rights, even though it is communicating. Television as such doesn't have free speech rights. If you smash your TV just out of a fit of pique, wanting to read rather than watch TV, you haven't violated its rights. But the speaker may have rights: whoever is behind the devising of the AI. So just as a producer of a television show has rights even if the TV doesn't, so the producer of AI has rights. Now, for regulating AI, there's a lot of fussing in the United States and Europe, and I think we actually have a knife we can use rather than fussing. And the knife is existing restrictions on speech. So fraud isn't permitted; libel isn't permitted. I asked a large language model to libel someone. It said, I won't do that. Then I asked it to produce an advertisement on how aspirin prevents cancer. That's fraud. And it did produce that. This was playful and not destructive, I hope, my little engagement. But if AI is used to produce fraud or criminal solicitation or bribery, then we're starting to have a fabric of restrictions on speech. So, yes, rights on the part of those who operate them, but rights that are not absolute.
Gregory McNiff
Interesting. I want to move to what I thought was the most insightful part of a very insightful book, which is this idea that too much dependence on AI could lead to a lack of self-discovery and an echo chamber; that is, we're neglecting our ability to think for ourselves. We've all seen the adoption of social media and the iPhone, and candidly, I think it's fair to say the overall impact there is mixed at best. And you just see ChatGPT and the way it's exploding. Cass, how concerned should we be that five or ten years down the road we're a bunch of zombies, outsourcing our intellectual process and our self-discovery, whether it's reading poems or listening to music, you know, real, lively things? How concerned should we be about this dynamic?
Cass Sunstein
I think very concerned. And you raised two different points, and I think they're both important. One is just learning. So if you rely on AI for things, then you won't learn yourself, maybe, how to write an essay, or produce a poem, or figure out something for yourself, which is really important for developing your own capacities and having, you know, a large life rather than a tiny life. Then there's also the echo chamber problem. So with AI, if I like Olivia Rodrigo, I can just learn so much about her music, learn about all her songs and find them, and spend the whole day listening to Olivia Rodrigo. And that's not the worst day; I tend to like it. But she's not the only singer. And, you know, this could be about culture, it could be about literature, it could be about politics, where people can live in information cocoons of their own design. And I think I read within the last couple of days that, I think it was Sam Altman, said he has a new innovation where AI will just know you and provide things for you, and it'll be you, you, you, you, you. And it's one of our biggest leaps forward. But who "you" is might not be determined by your tastes and engagement on a Tuesday. And you might want to become a bigger you, not just to become more you.
Gregory McNiff
Excellent. Last question. Cass, in the latter part of the book, you discuss reluctance to embrace AI, and you refer to this as algorithm aversion. How would you suggest we make people more comfortable with AI in their lives?
Cass Sunstein
I mean, some of your questions rightly suggest that algorithm aversion makes sense. So for some things, to be skeptical and, you know, independent is really good. But algorithm aversion sometimes is based on an excessive negative reaction to an error from an algorithm, one which way outruns your negative reaction to a human mistake. So we're much more forgiving of people than of algorithms much of the time. And just to know that can, I think, be a corrective. So if it turns out, for algorithm-driven cars, let's say, and these are fancy algorithms, that the safety benefits are high, then mourn if there's an accident involving an AI-driven car, but don't say, okay, no more use of AI in cars. Mourning, yes. Prohibition, not necessarily.
Gregory McNiff
Excellent. That concludes our interview. The book is Imperfect Oracle: What AI Can and Cannot Do, by Cass Sunstein. Cass, thank you so much for your time and for writing such a thought-provoking and enjoyable book.
Cass Sunstein
Thank you. This was my first discussion of the book, and it was terrific. I really thank you.
Gregory McNiff
Thank you.
Podcast: New Books Network
Host: Gregory McNiff
Guest: Cass R. Sunstein, Author of Imperfect Oracle: What AI Can and Cannot Do
Episode Date: September 29, 2025
This episode of the New Books Network features Cass R. Sunstein discussing his new book, Imperfect Oracle: What AI Can and Cannot Do. The conversation explores Sunstein’s nuanced perspective on artificial intelligence, steering clear of both utopian hype and doomsday prophecies. Instead, Sunstein lays out a framework for understanding AI’s strengths and limitations, particularly regarding human judgment, prediction, discrimination, legal applications, and the future risks of over-reliance.
“These kind of big, high-flown things seem to me to be speculative in the extreme. What's maybe more productive is to think: how can we use AI to make our lives better?”
(Cass Sunstein, 02:57)
“Our amazing mind uses rules of thumb that can create systematic errors and our amazing mind can be subject to biases that ruin our lives.”
(Cass Sunstein, 06:53)
“A lot of things are like that. Some financial questions aren't susceptible to an answer because there are too many variables.”
(Cass Sunstein, 10:26)
“Algorithms spit out the same answer every time... Generative AI works, and it's getting better. It's really good at a whole host of things, but it hallucinates. Machine-learning algorithms don't hallucinate.”
(Cass Sunstein, 13:33)
“Training data is Snow, that is, it's everything.”
(Cass Sunstein, 15:00)
“An algorithm... won't be a discriminator. And it's easy to find out whether an algorithm has been asked to embed race or gender... For a person, it can be hard.”
(Cass Sunstein, 16:36)
“...the judges who outperform the algorithm have private knowledge that the algorithm doesn’t have.”
(Cass Sunstein, 21:57)
“One thing [a choice engine] would do is overcome an absence of information, and it would overcome cognitive bias.”
(Cass Sunstein, 24:55)
“We need a right not to be manipulated. So that’s the building rather than just the using.”
(Cass Sunstein, 28:10)
“You might want to become a bigger you, not just to become more you.”
(Cass Sunstein, 32:38)
“Algorithm aversion sometimes is based on an excessive negative reaction to an error from an algorithm, one which way outruns your negative reaction to a human mistake.”
(Cass Sunstein, 33:18)
“If you rely on AI for things, then you won’t learn yourself... which is really important for developing your own capacities and having, you know, a large life rather than a tiny life.”
(Cass Sunstein, 31:32)
Cass Sunstein’s Imperfect Oracle presents a balanced, insightful exploration of the boundaries and possibilities of AI. The conversation touches on the philosophical roots of prediction, the realities of human and algorithmic error, the regulatory landscape, and the risks of digital dependence. Sunstein’s central message: temper both hype and fear with realism, embrace AI’s strengths while regulating and understanding its weaknesses, and remain vigilant about the impact on our human capacities and society.