
Podcast Advertiser
Springtime is my catalyst to switch out the major players in my closet and take stock of what I have and haven't been wearing over the last year. It's a great time to get a bit more intentional about what you're wearing day to day, and if I'm getting rid of anything, I want to make sure that I'm replacing it with quality pieces, and I've been turning to Quince for that so often recently. Their clothes are made really well and priced even better, so it makes shopping for and wearing their pieces simple. Quince uses premium materials like organic cotton and ultra-soft denim, and their lightweight linen pants, dresses and tops start at just $30. They've also got incredible accessories. I just picked up a cognac Italian leather sling bag, which is a huge upgrade from the crossbody that I have been using. The leather itself is really beautiful, and I also love the gold hardware it comes with. I think it's just such a sleek bag. Everything at Quince is priced 50 to 80% less than similar brands because they work directly with ethical factories and cut out the middlemen. So you're paying for the quality and craftsmanship of the products, but not a brand markup. Refresh your everyday with luxury you'll actually use. Head to quince.com/intelligence for free shipping on your order and 365-day returns. Now available in Canada too. That's Q-U-I-N-C-E dot com slash intelligence for free shipping and 365-day returns. Quince.com/intelligence.

Insurance isn't one size fits all, and shopping for it shouldn't feel like squeezing into something that just doesn't fit. That's why drivers have enjoyed Progressive's Name Your Price tool for years. With the Name Your Price tool, you tell them what you want to pay and they show you options that fit your budget. Enough hunting for discounts, trying to calculate rates and tinkering with coverages. Maybe you're picking out your very first policy, or maybe you're just looking for something that works better for you and your family.
Either way, they make it simple to see your options. No guesswork, no surprises. Ready to see how easy and fun shopping for car insurance can be? Visit progressive.com and give the Name Your Price tool a try. Take the stress out of shopping and find coverage that fits your life, on your terms. Progressive Casualty Insurance Company and affiliates. Price and coverage match limited by state law.
Mia Sorrenti
Welcome to Intelligence Squared, where great minds meet. I'm producer Mia Sorrenti. AI models now advise on everything from war to crop output to marriages. Algorithms determine whether we can get a loan, a job, an apartment, or an organ transplant. Carissa Véliz, associate professor at the Institute for Ethics in AI at the University of Oxford, argues that today's computer scientists now play the same role as the oracles of the ancient world and the astrologers of the Middle Ages. When we cede ground to these predictions, we lose control of our own lives. On today's episode, Véliz sits down with Tom Chatfield, technology philosopher and author, to discuss her new book on prophecy, prediction, power and the fight for the future, and how predictive systems are transforming power, responsibility and human agency today. Let's join our host, Tom Chatfield, now with more.
Tom Chatfield
Carissa, it's a great pleasure to talk to you, and to start with a very kind of simple question: why did you decide to write a book about prophecy?
Carissa Véliz
It's great to be here. Thank you so much, Tom. I found prophecy very interesting partly because I'm a philosopher and I'm interested in ancient Greece, and there is this thread that runs through the history of humanity of how much we rely on prediction and how important it is. And I realized that when you do a search for the ethics of prediction, pretty much nothing comes up. It's kind of astonishing that we've been using prediction for so long and yet haven't thought more carefully about what a prediction is, who should make them, what the rules around them are, how we should think about them, and whether there are cases in which we maybe shouldn't predict something. But also realizing that AI, in particular machine learning, is really nothing but a prediction machine: what it does is take the data it has and project it onto data it doesn't have. And yet, despite how much we are using AI to make very important decisions, even in contexts of justice, we're still not thinking about the implications when it comes to prediction.
Tom Chatfield
Fantastic. And there's a wonderful line near the start of the book which really struck me, where you say predictions are not facts. Facts belong to the present and the past. An assertion about the future can be many things, but never a fact. I found that really striking because I think it's sort of undeniably true, but also, for me, it really confronts me as a reader with the fact that predictions are perhaps not what we think they are, that there's something there that's a lot more complicated, that the way we're using them and thinking about them is often very misleading. But I wonder if you could say a little bit more about the non-factual nature of predictions. What are they, if they're not facts?
Carissa Véliz
It's fascinating, because they do sound like facts. If I say it's going to rain tomorrow, it's a very similar grammatical construction to saying it's raining. And we tend naturally to think that it's a description, but about the future. But when you analyze them as assertions, you realize that predictions are closer to speech acts. The philosopher J.L. Austin wrote this book called How to Do Things with Words, and he argues that many sentences might sound like descriptions, but actually they're doing something else; they're doing something in the world. So for example, when a naval officer christens a ship, they're not really describing the world as it is, they're christening a ship. When a civil servant marries two people, they're not describing the world, they're marrying two people. And in the same way, when I say something about the future, in particular when it's something about social reality, so for example, if I say tomorrow we will be using AI agents for everything, it might sound like a description of the world, but actually what I'm saying is: go out there and fulfill my vision of the world. Act in a certain way. So implicitly it's telling you what to do. Even though it sounds like a descriptive assertion, it's really quite normative.
Tom Chatfield
So I think we might as well get straight onto AI, and then we can go back to ancient Greece. And I guess one of the facts about the world at the moment, to observe the present rather than predict the future, is that we're surrounded by predictions. And some of these predictions are very dramatic. One thing I notice is that we're hearing very bold, confident predictions that contradict one another. Some people are saying AI is going to change the world, it's going to potentially transform the economy, it's going to solve problems, or it's going to bring about the end of civilization as we know it. Other people are saying it's overblown, this is hype, this is overpromising, none of this is going to happen. So we have this kind of realm where a lot of very bold, potentially contradictory predictions are swirling around us. And one thing that interests me, I suppose, based on your writing and thinking, is how you think we should handle this. When people go out there and encounter these impassioned, bold predictions, these, as you say, perhaps attempts to talk a future into being or to sell a particular kind of story, what should we do in response?
Carissa Véliz
The first thing is, I think we should report differently on predictions. So often I read news articles that report on somebody's prediction as if it were a fact. It's like, oh, this guy said that this is going to happen, and these are the implications of that. It's like, wait a minute, let's ask: who is this person? Why are they making this prediction? Does it come from data? What kind of data? Who collected the data? Why did they collect it? What kind of data might be very important that is not being collected, maybe because we can't collect that data, or maybe because we don't want to, or maybe because nobody has collected it before? And who stands to gain from this prediction if it comes true? And another question is, is that a future that I want to see? Is that a future that is in my best interest? And if not, what can I do to bring about a different future? Because the future is unwritten, and if it's unwritten, it means that we can intercede, that we can influence it. So this person probably wants it to go this way. And then you can think about what their financial interests are, what their political interests are, and where do I want to go, and how do I get there? How do we work together to build the future that we want to inhabit?
Tom Chatfield
So in a sense, we should kind of read their predictions as evidence about their present incentives and interests. One of the things I love about the book, and there's a lot I love about it as a philosophy and AI geek, is the fact that, as the title says, it's from Ancient Oracles to AI. And we're back with the Oracle at Delphi. We're back with the kind of Roman civilization. There's lots of wonderful, rich material there. And I wonder if you could tell us a little bit about what prediction was and signified in the ancient world and what we can learn from this history, why you've put it so kind of richly into your narrative.
Carissa Véliz
It's very helpful to look at prediction through history, because we don't believe in many of the methods that they used to use. Even though psychics are still very popular, probably nobody would go to the Oracle of Delphi in particular for any kind of answers; I mean, to Delphi itself. And that makes it a lot easier to realize what the power plays behind all the divination were, because we don't believe in it anymore, and that makes it easier to learn some lessons that might still be relevant today. One of the marks of both ancient Greece and Rome is this obsession with divination. It's very understandable. It's kind of nerve-wracking not knowing what comes ahead. I think human beings are smart enough to imagine how lots of things could go wrong, and it brings us a sense of safety to have the illusion that we know what's coming, so that we can be better prepared. But that makes us very vulnerable, vulnerable in particular to charlatans. Because if you ask people to tell you what to do, there will always be someone who's willing to tell you what to do, but who has their own agenda in that telling. One of the things I point out is that when I thought about the Oracle of Delphi, very naively, before researching it, I imagined this kind of spiritual sanctuary of sorts. But it was a business. It was a business and it was a party. And these were merchants of prediction. And sometimes there were ways to influence the priestess if you didn't like the prediction they had made. So it was a much more human activity than a naive imagining might suggest.
Tom Chatfield
Talking about prediction as a human activity, you touch, I guess, on a neurological account of humans, what the neuroscientist Anil Seth has popularized as the idea of consciousness as a kind of controlled hallucination. It's a sort of top-down process in which the world is modeled and then that model is corrected by sensory data, and so on. And I guess this, for me, points to the really interesting fact that as a species, one of our superpowers, if you like, is our ability to learn richly from the past and imagine different futures, to play stuff out, to model stuff, to learn. You know, prediction is not facts, but if we are in our minds collectively weaving facts into potential patterns and models and stories, then that undoubtedly is incredibly empowering. It's something that ought to be a great good. Much of our success is precisely about our ability to make plans, to harness these tendencies for good rather than for ill. We can go into this in a bit more depth, but I'm really interested, I suppose, in the positive flip side that's woven through the book: how we can use our cognitive capacities to face the future hopefully, what it means not to fall into the kind of traps that you identify.
Carissa Véliz
Yes. So I am not arguing that we shouldn't use prediction. In a sense, we can't avoid it, and we shouldn't try to avoid it. We should just be a lot more learned about what a prediction is and how to use it properly, and about how we harness its power without falling into pitfalls, in particular pitfalls that can be very negative for democracy. And I love the way that you describe that, because you mentioned imagining how the world could be and imagining different possibilities. And that is a related activity to predicting, but it's actually quite different, because it's not about trying to figure out what the future is, as if there were a written script and you're just trying to discover it. It's about having the creativity, imagination, bravery and curiosity to understand that the future could be many ways, and that we have a role to play in which future comes about. And that is a very different attitude. So a lot of what I'm arguing for is a change of framework. You can do something that looks very similar, but in fact they are two very different mental activities. So for example, if you're an investor, you can try to game the system, try to figure out which way the world is going, and try to invest in what will bring you more money. That's one way to do it. But another way to do it is: no, I'm going to focus on what is a great product right now, one that I can test myself and in which I can recognize high quality. And it might be that from the outside both look like the same thing, and it might be that you're lucky and you get a lot of money from both of them, but actually you did something very, very different. And one of the lessons to learn from the ancient world is that they were much more cognizant of the political and power-related implications of predictions.
For example, in ancient Rome, at different times, it was illegal to predict the death of the emperor, for the very simple reason that those predictions tended to end up with a murdered emperor on cue. And it just shows you how it's not about not predicting, but about being aware of the secondary effects that predictions can have.
Tom Chatfield
I think one thing I felt, with the classical themes and your linking of them to the present, is that in some ways you're interested in what we might call epistemic virtues, certain positive attributes or behaviors we might have in how we handle knowledge and uncertainty, to do with curiosity, to do with integrity, to do with hope, to do with empathy, and so on. This reminded me of your very fine previous book, Privacy is Power, which, as you know far better than I do, is about privacy in the modern world. But also, as a reader, what struck me about that book was that it was, I felt, quite an impassioned book, that it advocated, I think, for the fact that it's legitimate and appropriate to have an intense emotional reaction to these things. And this idea, if I'm on the right track, of sort of enabling virtues strikes me as very interesting, because in some ways that is the opposite of a quantified, consequentialist approach to the future. It suggests we prepare ourselves by being certain kinds of people rather than by adding up numbers. But I don't want to put words into your mouth, so I'm interested as to whether you think there's some accuracy in that. And if so, are there any particular virtues and attributes you think it would be good for us to cultivate?
Carissa Véliz
That's absolutely right. I think you put it much better than I did. I wish I had written that down to use in future interviews. But that's exactly right. I think that it's a kind of mindset. And when you stop trying to predict, you become naturally a lot less of a utilitarian and a consequentialist, because you realize, well, who knows what the consequences of this will be? I can try to guess them, but I'll probably be wrong. And the more I make predictions about the distant future, the more likely it is that I will be wrong. One of the examples that I visit in the book is that of effective altruists, and how they have made predictions in the past decade or so that have been pretty incredibly wrong. And yet now they are focusing on an even longer-term future of thousands of years, which seems pretty naive. This skepticism about a quantified approach to ethics has also come from the experience of writing Privacy is Power. When I've been talking about privacy, too often I get two questions that drive me up the wall. The first one is: what is the future of privacy? As if, because somebody knows something about the present of privacy, they know anything about the future. And as if the person who's asking had no role to play. And the second question is: is it worth it? Is it worth fighting for privacy? Are we going to lose this battle? And there's this sense that we're making it too easy for people with financial interests to disincentivize action. Because if people only act when they have more than a 50% chance of being successful, then all you have to do to stop them from acting is make it harder for them to succeed. And that is a perfect recipe for cultivating complacency and for not taking care of the world around us, whether it's the natural world, or the world of people we care about, or democracy itself.
And I think that a more principled approach to life and to ethics is a lot more successful, not only in terms of the effects that it can have, but also in the person that you can become and how you feel about both success and failure. When you do something out of principle and you fail, you have a lot of consolation in having done what's right. And this idea that you should only act if you have high chances of success is a kind of recipe for a much bigger kind of failure.
Tom Chatfield
So I think it's very interesting that you mention effective altruism, the kind of very influential, especially in tech circles, philosophical school that advocates trying to calculate how we can do the most good with the resources at our disposal. And I suppose, in concrete terms, I wonder if we can get a bit more specific here, because you've spoken, among others, to the great Peter Singer during the course of this book, who is not an effective altruist but is, I think, a great advocate for giving effectively. His famous thought experiment: you see a child in front of you drowning in a pond. It will only cost you the ruination of your clothing to rescue the child. Of course you should do that. Therefore, you should also be taking your money, your resources, and if you can help a lot of people for a small sum, you should be doing that. And I'm interested in what you think the root problem is with this kind of analysis. Because if someone says to me, okay, if you take 10% of your income and put it into anti-malarial drugs, or into this research, or into this clean water supply, or into this particular surgery that stops people from going blind, you're going to be helping this number of people, you're going to be doing this amount of good. I wonder what the problem with that kind of reasoning is, or whether the problem is with extending that kind of reasoning too far.
Carissa Véliz
It's a bit of both. It's partly about extending that kind of reasoning too far and assuming that we will know what the world will look like in 25, 100 or 1,000 years; it's pretty unlikely that you will know anything about what the world will look like. But it's also partly about not being epistemically humble enough to realize that you might not be understanding the whole picture. For example, one of the recommendations that effective altruists gave for a long, long time was to donate money to buy nets so that people could protect themselves from mosquitoes and therefore avoid malaria. Of course, this is a very noble objective, and I think nobody who is reasonable would object to it in principle. But it turns out that nets can be used in many ways, and often they were used to fish, and often that led to overfishing, and it led to all kinds of problems that were not that easy to foresee. And so the problem is not having the good intention to do something good. The problem is assuming that you can predict the consequences, and that you should decide what to do purely on the basis of those predictions. So, for example, one of the blind spots, or one of the criticisms that effective altruism has received, is that it hasn't focused enough on justice. If you want to change things, one way to do it is through strategic litigation: you go into a lawsuit to try to change the law when the law is unfair. But often those lawsuits are incredibly hard to win, and if you just look at it from the point of view of statistics, why would you do it? It's not an effective way to use your money. But if people didn't do it, we would be stuck with laws from centuries ago and we wouldn't have that kind of progress. So I think the kind of mentality that is willing to try to defy the odds is a lot healthier for society.
Tom Chatfield
And does the same logic apply to, for example, an argument where someone says there's a very slim chance of an absolutely appalling event? Obviously, the philosopher Derek Parfit wrote extensively about this kind of scenario, and it has become something of a totem for people in the existential risk industry. But what's the problem, if there is one, with people saying, well, there's a slim chance that very powerful AI will bring about an absolutely disastrous future for humanity, therefore we should be putting resources into mitigating and planning for this slim chance of a very bad event? Is that a good way of thinking about the future?
Carissa Véliz
I don't think it is. And that's exactly what effective altruists are arguing for. Their argument is that in the future there will be trillions of human beings, whereas today there are only billions, so if you compare the wellbeing of trillions, that outweighs the wellbeing of billions, and we should focus on that, because even if we just manage to decrease the probability that AI will destroy the world by a billionth of a percent, that will still be worth more than helping people today. But there are so many assumptions in that prediction that there is a question of whether it makes sense at all. One assumption is that we will develop AGI, which is an incredibly big assumption. And one symptom of just how questionable it is is how much estimations of that probability vary. And the same with existential risk, the risk that AI will either destroy humanity or destroy enough of a proportion of humanity to count as a kind of catastrophe. People put the number at different places, and the disagreement is vast. Somebody like Geoffrey Hinton, I've heard him say something like there's a 20% or, I think sometimes, even a 30% chance of existential catastrophe. Whereas superforecasters, who are people who typically tend to be better than average at forecasting world events, put it at less than 1%. And that kind of suggests that we're not saying anything about the world; we're expressing how much we don't know about it. And to put all your efforts into a prediction that has so many questionable assumptions seems unwise.
Tom Chatfield
So let's start moving towards some positives then. In several places you quote the philosopher of science Karl Popper, who famously argued that prediction is a dangerous game precisely because we can always find confirmation of favored theories by cherry-picking data and evidence. His response was that what we needed to do was try to find faults in our theories, to create predictions that, so to speak, maximally allow us to test an idea. And this seems a very different model of prediction, because it does entail prediction, but we're no longer making a prediction in the same sense. I'm not saying, you know, my prediction is that my new drug is wonderful, and I'm going to go looking for evidence that my new drug is wonderful, I'm going to make lots of money, it's going to be great, everybody buy my new drug. Instead, I'm doing a randomized controlled trial, where I say, okay, I've got a new drug and I really want to know if it works. So what I'm going to do is use a controlled experiment to test it blind in some people and not others, and I'm really interested to see whether I can show that it works, against the possibility that things are merely happening at random. I've expressed that quite badly. But I wonder, then, in scientific thinking and in people like Popper, is there a different attitude towards prediction that taps into the kind of positives you're interested in?
Carissa Véliz
Yes. However, the kind of mentality and method that Popper is describing is not the best if you're only looking for profit, which is often the case when it comes to some products. Even in the case of pharmaceuticals, you have companies doing randomized controlled trials, and if they don't come out the way they expect, they don't get published. And even with randomized controlled trials, there are some doubts about whether we have the threshold right. Essentially, a randomized controlled trial asks: how likely is it that this result comes from chance alone, rather than tracking something related to truth? And what that threshold is is somewhat arbitrary. We just pick a number, and there's a question of whether that number is the right one, because it may be that about 20% of results are going to be from pure chance alone, and that seems quite high. It's also a different way of thinking about prediction, because it's a question that is answered with curiosity; the search for the answer comes with curiosity. The objective of that prediction is not to try to figure out the future as if it were a script that you have to discover. The function of the prediction is to try to discover whether you're wrong about something. That's a very different kind of activity. One of the most important insights that Popper offered, in my view, is that one of the limits of science is that it cannot predict itself. When you are predicting science, you are doing something else, but you're not being scientific, because if we already knew that knowledge, then we would already have developed it. And the fact that we cannot predict the advancements of science means that we cannot predict history, because history is greatly influenced by the advancements of science.
Popper is very important because not only was he a brilliant philosopher of science, but he also lived through the Second World War. And so he's very mindful of how predictions get used in the sphere of politics to justify the unjustifiable and to try to convince people with a pseudoscientific discourse of things that have much more to do with authoritarianism than science.
Tom Chatfield
Yes. And that, I suppose, brings us back to the idea of power. I think one of Popper's most famous lines is about tolerance and intolerance. But I guess the Popperian vision of a society is a free society in which people can attack and oppose and disagree openly, precisely because only honest, reasoned disagreement is going to be adequate to react to unpredictable circumstances. We can't lock ourselves into a particular kind of dogmatism or ideology. And you have a lovely line where you say predictions are commands disguised as descriptions: the more we allow companies and governments to use them, the more our future is being decided by these people. I wonder if you could say a bit more about that, because it seems to speak to the present moment with a rather alarming acuteness.
Carissa Véliz
Yeah. If you read the newspaper, a good proportion of what you read there is not about the present; it's speculation about the future. And if you want to become famous as an academic, write a paper about what the future is going to be like and put a number on it, just any number. Even if you almost fabricate it, it will take you a long way towards becoming famous. And when we believe these predictions uncritically, what we're doing is essentially giving up our autonomy and our rights and freedom and ability to come up with a future that we want to live in, and following someone else's vision of the future. Another philosopher who is very relevant here is Hannah Arendt, because she also lived through the Second World War and also observed how prophecies get used. And that is especially important now, because we are being subjected to so many predictions: every time you ask for a loan, an apartment, a job, every time you pass through any kind of public system, essentially every time you go to a hospital in an emergency setting, every time you go online, pretty much. And Hannah Arendt has this incredible line about how it makes no sense to argue with a potential murderer about whether their future victim is dead or alive. The only appropriate response is to rescue the person whose death is predicted. And today we have some tech executives, like Larry Ellison and others, predicting the death of democracy, predicting a surveillance state in which we will be on our best behavior because we're being watched all the time. And the only appropriate response is not to argue with that, but to rescue democracy and to say: that's not the future that I want. And there's a lot to do to make sure that that doesn't happen and that the future that I want is the one that becomes true.
Tom Chatfield
So, to end by talking about the role of technology in that present moment: I think you tease out a really interesting irony of large language models by pointing out that although they are very disruptive technologies, in a sense they're essentially conservative, because the algorithms are selecting that which has worked before. And I guess that felt to me like a really usefully disruptive observation when it comes to the prophecy that these machines are going to change the world, discover new things and so on. When it comes to large language models, people will have had the experience of talking to them. They are prediction machines; they are based on next-token prediction. And yet, despite the apparent simplicity of this, they are incredibly eloquent, convincing, powerful, insightful, and unreliable as well, perhaps. What do you feel about things like large language models, about how they do what they do, about what's going on in there? Not what you think is going to happen next, but what do you think they represent now as a force in the world?
Carissa Véliz
They represent the force of bullshit. And I say this in the philosophical sense of Harry Frankfurt. Harry Frankfurt argued that bullshit is much more dangerous than lies in the context of democracy, because the truth-teller and the liar are essentially playing the same game; they're just on opposite sides of the court. Both have to know what's true and what's a lie, even to lie. The bullshitter doesn't care. They just say whatever works, and they don't attend to the rules of the game. And that is essentially what a large language model is. They're built to be plausible, to be satisfying to a human being, to get us hooked. They are not built to be truth-tracking, or to discover new facts about the world, or to be empirically grounded in any way. And that kind of mirrors the situation that we have in politics right now, where it's become quite common and popular and effective to just say bullshit that has no relation to the truth or to any kind of data; it's just whatever is going to work to move your audience in whichever way you want to move them. And what I would hope, and I'm not making a prediction, it's more what I would like to see in the world, is for people to innovate, to see how these systems are very limited, very dodgy in many ways. They are bound to entrench sexism and racism and other bad tendencies, because they work with databases of the past, and because all of these isms, racism and sexism and the others, are a kind of prediction in themselves. And so predictions have these vicious cycles that further entrench them. And what I would like to see is people innovating, because we deserve better AI, better products, products that can sustain and support democracy instead of eroding it. So, for example, why do we allow these systems to use the first-person pronoun? That's so misleading. There's no one there, there's no one on the other side of the screen.
And yet it hijacks our normal emotional responses, because, as you say, they are very convincing. They sound as if there were a creature there. One of my favorite cartoons about this is one in which a person tells the computer, tell me you're alive. And the computer goes, I'm alive. And the person goes, oh my God. It's a bit like that kind of illusion: we've made them very plausible, and then we're very surprised that they're very plausible.
Tom Chatfield
So to finish up: at the end of the book, you have 10 pieces of advice, really, which I think are practical, actionable things that people can go and do tomorrow. My favorite one, which was a hard pick, was this idea of increasing serendipity, of exposing yourself to serendipitous connections and opportunities precisely so that you don't get locked in a diminishing pattern. I wonder what advice you might pick from there to give listeners who want to walk away from this podcast and do things slightly differently, or more thoughtfully, tomorrow, if they want to inhabit the future a little more hopefully.
Carissa Véliz
I think I would go back to some of the virtues that we were talking about. It has to do with having curiosity about the world, having an open mind, and, I think, also being playful and looking for comedy: instead of taking these prophets that seriously, laughing a little bit about what they say when they say something that doesn't make a lot of sense. It is also about reading widely, talking with people who are extremely different from you, maybe talking to strangers, doing something out of character, trying to design something from scratch. Does a chair need to look like that? Or could it be very different from what we usually think of as a chair? And in general, just this metaphorical attitude of taking strides along the beach, because you never know what the tide might bring.
Tom Chatfield
Fantastic. I think it's a wonderful note to end on. Thank you very much for a fascinating conversation.
Carissa Véliz
Thank you so much.
Mia Sorrenti
Thanks for listening to Intelligence Squared. This episode was produced by me, Mia Sorrenti, and it was edited by Mark Roberts. For ad-free episodes and full-length recordings, you can become a member at intelligencesquared.com/membership, and if you'd like to join us at future live events, you can find our full program and buy tickets over at intelligencesquared.com/attend. You've been listening to Intelligence Squared. Thanks for joining us.
Episode: How Is Predictive AI Shaping Our World?
Guests: Carissa Véliz (AI Philosopher, Oxford), Tom Chatfield (Host, Technology Philosopher)
Date: May 7, 2026
This episode explores how predictive AI systems are transforming notions of power, responsibility, and human agency. Carissa Véliz, author of Prophecy: Prediction, Power and the Fight for the Future, examines the parallels between today's computer scientists and the oracles of antiquity, questioning how ceding ground to predictive machines can erode our role in shaping our own destinies. Joined by technology philosopher Tom Chatfield, Véliz discusses the ethical, historical, philosophical, and practical aspects of living in a prediction-driven world.
Predictions as Non-Facts:
“Predictions are not facts. Facts belong to the present and the past. An assertion about the future can be many things, but never a fact.”
(Véliz, 04:21)
Prediction as Speech Act:
“It might sound like a description about the world, but actually what I'm saying is go out there and fulfill my vision of the world. Act in a certain way.”
(Véliz, 05:41)
Epistemic Humility:
“We should just be a lot more learned about what prediction is and how to use it properly, and how do we harness its power without falling into pitfalls, and in particular pitfalls that can be very negative for democracy.”
(Véliz, 12:39)
Effect on Agency:
"When we believe these predictions uncritically, what we're doing is essentially giving up our autonomy and our rights and freedom..."
(Véliz, 30:26)
Bullshit and LLMs:
“They [LLMs] represent the force of bullshit. And I say this in the philosophical sense of Harry Frankfurt...they are not built to be truth tracking or to discover new facts...”
(Véliz, 33:29)
Democracy and Agency:
“The only appropriate response is not to argue with that, but to rescue democracy and to ask, say, that's not the future that I want, and there's a lot to do to make sure that that doesn't happen and that the future that I want is the one that becomes true.”
(Véliz, 31:36)
The Dangers of Utilitarian Inaction:
Véliz’s reflection that an overemphasis on success probabilities leads to widespread societal complacency (17:10–18:49).
On Charlatans and History:
The commercial nature of the Oracle at Delphi as an early example of how predictions can be wielded for profit and manipulation (09:22–11:11).
Popper and Unpredictability:
The episode’s linking of Karl Popper’s scientific skepticism with the unpredictability of societal and technological advancement (25:04–29:16).
The Lure of Plausibility:
Véliz’s insistence that LLMs are built to sound convincing, not to be truthful, and that design choices (like first-person pronouns) intentionally blur the line between person and machine (33:29–36:00).
This thoughtful episode is a compelling call to reclaim our imaginative agency, challenge the predictions that shape our world, and cultivate the virtues needed to build the future we want—rather than simply the one we are told awaits us.