A
This is Philosophy Bites with me, David
B
Edmonds and me, Nigel Warburton.
A
Philosophy Bites is available at www.philosophybites.com. In the brave new world of AI, predictions are increasingly shaping our lives. The Oxford philosopher Carissa Veliz, whose book Prophecy has just been published, argues that we should reflect on how these predictions work and how they affect us.
B
Carissa Veliz, welcome to Philosophy Bites.
C
Thank you for having me, Nigel.
B
The topic we're going to focus on today is prophecy. I'm not quite sure what that's got to do with philosophy. Could you say what the link is?
C
There are many links. One, to philosophy of language. What is it that we do when we make a prediction? And two, what's the ethics of prediction? How should we use them? When is it okay to use them and when is it not?
B
So let's start with the first one. When somebody makes a prediction about the future, they prophesy what's going to happen. What's going on philosophically there?
C
I think predictions as sentences are very misleading because they sound like descriptions about the world. I think it's intuitive to think that when somebody makes a prediction, they are describing the way the world will look in the future. But actually, when you assess how predictions are used in the public sphere, you realize that very often, and in particular when it comes to social predictions, predictions are closer to verdicts or commands. So for example, when a tech executive says, tomorrow we will be using AI for everything and everywhere, he's not trying to figure out how the world is going to look tomorrow. He's trying to get you to act in a particular way. He's telling you, go out there and buy my AI and fulfill my vision of the world, the one that happens also to line his pockets.
B
So we're confusing sometimes prophecy with a sales pitch.
C
Yes, but also with orders. And when we believe a prediction unreflectively, it's closer to obeying an order.
B
Couldn't it be the same thing though? The tech executive might genuinely believe that the future lies where they think it lies, and they're telling you something they believe to be true about the future, and it also happens to line their pockets.
C
It could be that they believe it. But sometimes predictions have this effect of becoming self-fulfilling prophecies, even when the person who utters the sentence doesn't intend it that way. So for example, I might believe that my student is really bad and I might tell them, I think you're going to do terribly in this exam. And it might be that that sentence itself influences the future, so that they do have a very bad exam. The interesting thing about self-fulfilling prophecies is that they are something like the perfect crime, like a murder weapon that disappears upon striking, because they don't create error signals. We never get the data on how that student would have fared had I encouraged them rather than given them a very bad prediction.
B
So prophecies can be true, but in the case of self-fulfilling ones, they can actually cause the event rather than be an observation from outside that chain of cause and effect. But they look like observations.
C
Exactly. The link between the philosophy of language and the ethics is that when it comes to predictions about anything having to do with the social world, the prediction itself tends to affect that which it's predicting. And so even if you get an accurate prediction, often it's at the cost of creating the reality you're purporting to predict. One interesting example is that there is a particular day in Japan that is considered to be a bad luck day. And this is cultural. And that day more people die even from natural causes than any other day. But it only happens in Japan. Our psychology affects us. It affects how we eat, how much stress we have, how we interact with the world. And by changing expectations with a prediction, you can change reality. This phenomenon can be seen also in the financial world. A financial agency making a prediction that a company or even a country will do badly can be enough for investors to flee and make it go badly.
B
Are you saying that all prophecy has this tendency to be self fulfilling? Because that would be quite a useful thing to know. I could start making all kinds of prophecies that serve me very well and achieve lots of things I want to achieve.
C
Not all prophecies. If I make a prediction about the weather, it's not going to influence whether it rains at all. But yes, I think all social prophecies have this tendency to be a magnet. Now, that doesn't mean that it's enough. So you get situations, for example the Fyre Festival, in which you get this founder making outlandish, very optimistic predictions about his own business, because he wants to make it the case, but he doesn't have anything to back it up. He doesn't actually do what you need to do to put on a festival. And of course he crashes into a wall spectacularly. So there is this force in social predictions, which I call a magnet, because it's a kind of force of attraction. It's real, but it's not always enough.
B
So the prediction might bump up against the wall of reality at a certain point, but it also can chip away at it in subtle ways.
C
Yes. So people like Elizabeth Holmes at Theranos made these very optimistic predictions about building a machine that would test blood using only a drop instead of a whole syringe. But she didn't advance the science. And in a way, the prediction did influence reality: it got her a lot of attention, it got investors interested, it got people to give money and to try it out. But because the other side, the reality, wasn't there, reality never caught up with the prediction. She ended up crashing into a wall.
B
Now, you mentioned this is a phenomenon within the philosophy of language. Is there some kind of framework within which you can describe this phenomenon that draws on philosophy?
C
Yes. So J.L. Austin had this idea that we do things with words. He wrote a book called How to Do Things with Words. And the main idea is that some sentences don't describe the world, but do something else. So when a government official marries a couple, they say, I pronounce you husband and wife. What they are doing is marrying two people. They're not describing the world. Or when a naval officer christens a ship, they're doing something with words. And in the same way, when someone makes a social prediction, they are issuing a command that sounds like a description of the world, but in fact it's prescriptive, because it implicitly tells you what you should do.
B
What are the ethical implications in our world today? Because there are lots of predictions which affect our lives in different ways.
C
It has a lot of ethical and political implications. One thing that drives me up the wall at the moment is how journalists tend to report on the predictions of important people like Elon Musk or Bill Gates as if the prediction were a fact, and people respond accordingly. And that gives these people even more power. So one of the ethical implications is to make you stop and think: okay, who is this person? Where is this prediction coming from? Is it coming from data? What kind of data? Who collected the data? Why? Who is going to benefit from this prediction if it comes true? Is that the future I want to see? And if not, can I do something to intervene and intercede? And it's particularly important when we're talking about predictions about single human beings. So when a prediction about a loan or a job or an apartment or insurance is made by a machine, by AI, which for the most part means machine learning, which is just a prediction machine: it takes the data it has and projects it onto data it doesn't have. And when we make a prediction about an individual, we're having a huge ethical effect on that person's life. There are questions to be asked that are not being asked at the moment as to whether we should have some guardrails in place to make sure that we don't commit injustices.
B
So there are two things there. First of all, you were talking about the exaggerated influence of some very powerful people in the tech world. And the second thing that you were talking about was the way in which the algorithmic world affects the lives of ordinary people in terms of bank loans and possibilities of getting a job, being shortlisted for a job, or whatever it is. Let's take the first one, the exaggerated importance given to somebody like Elon Musk. David Hume was really interesting in his essay on miracles about our willingness to believe in unlikely things. And Elon Musk is famous for his talk about why we need to move to Mars and how we're going to get people to live on Mars, and seems to have taken a lot of people along with him. Perhaps you could say something about that.
C
Yes, that's one example among many. But you might have the effect of people giving up on Earth and thinking, well, we've messed this up, but we're going to do it better on Mars. It might be completely unrealistic and just make things worse, because people are not taking care of Earth as well as they should and could. And if people were taking better care of Earth, we would probably fare a lot better than under Elon Musk's prediction.
B
For me, channeling huge resources into polluting industries to power rockets to go that far, which could only take a tiny fragment of humanity there, rather than trying to deal with the climate catastrophe we've created on Earth, seems a perverse way to go. But you're suggesting there's something about his position as a tech big guy and the richest person on Earth that apparently gives him better insight into the wider world than he deserves.
C
Absolutely. And we've seen this for a long time. People who had access to the Oracle of Delphi, or to astrologers, or to Rasputin, were given more credence. The Mars example might be a niche one, but how many tech executives have we heard saying that we shouldn't worry too much about climate change or anything else, because AI is going to solve everything? And we're putting all of these resources into what is essentially a promise that hasn't panned out for decades.
B
So could you rephrase those kinds of prophecies as gambles rather than prophecies?
C
Yes, it's very important that we do, because at the moment I don't think we're understanding how much of a gamble this is. We are taking it as if it were a fact. I am so surprised at how many high-level debates I have on stage with people where I criticize something about AI and, in all seriousness, they say, oh, but AI is going to fix that. It's going to get better. It's not going to have that problem in two years. All these kinds of predictions are essentially wishful thinking. When are we going to learn that this is a gamble? And when are we going to appropriately assess how risky this gamble is for democracy, for ecology, for all kinds of implications, including justice? So let's go back to the issue about justice. Let's say you apply for a loan. I'm the bank. If I have very clear criteria for what you need to get that loan, you can challenge me if I'm wrong. So if I say, Nigel, you need X amount of pounds for me to give you that loan, either you have them or you don't. If you do have them and I made a mistake, you can show me that you have them and I can change my decision. But if I reject a loan application that you make on the basis of a prediction, there's no way you can contest it. Because it's not a fact, and therefore it cannot be false. It's unverifiable and unfalsifiable. And we are using predictions in all these contexts, the justice system, loans, jobs, in which we tend to think that it should be about merit, and we're essentially undermining due process.
B
Isn't this just a question of the weighting of probabilities? Although you can't be certain that I won't have enough money in the future to take out this mortgage or whatever. It's based on probability and there could be quite a high chance that that is true and there may be victims of that system, but for a company, it makes sense. This is what insurance is based on. You assess the likelihood of outcomes. It's not that every outcome has an equal likelihood.
C
Yes, it makes sense for a company, but it doesn't make sense for society. And society is made up of citizens and companies. And in the end, if we undermine the principles of justice, companies will suffer for it as well in the future, because we are taking on a lot of risk. Take the case of insurance. It makes sense for every insurance company to be very risk averse. But that would make many individuals essentially uninsurable. And if you have many individuals who are uninsurable, they will shoulder a lot of risk, which in the end is going to lead to something like the financial crisis of 2008, and then everybody suffers with them. So there are reasons why we should be more worried about principles of fairness, and about having minimal standards of safety for people, than only about profit. We were discussing before whether there is a point at which there is a statistical probability of whether somebody will pay back a loan or not. But that accuracy gets gravely affected when you don't give the person the loan, because the data that gets produced from that decision will seem to validate your decision that that person would not have paid back the loan, but you didn't give them the opportunity. And so you further push them into economic disadvantage. So you get these cycles, these vicious cycles, that confirm your data, but at the price of you influencing it to go that way. We are making predictions about 20 years, 50 years, sometimes 1,000 years in the future. And one of the concerns is that the future is utterly unpredictable. And not only that, but the most important events are the ones that are most unpredictable. And that kind of mentality and calculation will justify absolutely anything. So one very good example is effective altruism.
It's the same kind of mentality we were just talking about with Musk projecting into the future and thinking about Mars: effective altruists are thinking about what the world is going to look like in a thousand years, and they're projecting a population of trillions of people. And because the interests of trillions of people outweigh the benefits to the billions of people who live today, we should prioritize those trillions of people, according to effective altruists. And what that means is that it can justify literally anything. Once you include infinity in the calculation, or something close to infinity, then you lose all sense of proportion, because in the infinity of time and space, everything is just a blip. Whether it's mass murder or whether it's saving a life, it doesn't really matter in the infinity of it. And effective altruists have been very influential with tech people, who have so much power in how they are designing technology and the influence it's having on us.
B
Is there a plausible alternative then to succumbing to the predictions of powerful people or powerful systems? What's the alternative? Is it just say we don't know what's happening next? We'll do our best.
C
It's about tethering our mind more to the present than the future, and making decisions on the basis of facts and what we know and what people deserve based on what they've done already and who they are. If you go back to ancient Greece, it was a society that was truly obsessed by divination. And in that context, a lot of myth arose and philosophy was born as a reaction to that kind of mentality. And just like jewelweed tends to grow next to poison ivy and is an antidote to it, I hope to see more philosophy as a reaction to so much essentially bullshit about the future and so many charlatans and false prophets. And philosophy is about being critical and asking, what is a good life and what do we need to do to build that? Instead of trying to discover a script, our job is to write it. I think there are good ways of thinking about the future, and good ways of thinking about the future involve imagining what it would be like to have a better future and figuring out how to get there, but it wouldn't be cashed out as a prophecy.
B
Carissa Veliz, thank you very much.
C
Thank you so much, Nigel.
A
For more Philosophy Bites, go to www.philosophybites.com. You can also find details there of Philosophy Bites books and how to support us.
Date: May 11, 2026
Host(s): David Edmonds & Nigel Warburton
Guest: Carissa Veliz, Oxford philosopher, author of Prophecy
In this episode, Carissa Veliz explores the philosophical dimensions of prophecy and prediction, especially in the context of AI and algorithmic decision making. The discussion covers how predictions function linguistically and ethically, the dangers of self-fulfilling prophecies, the undue influence of tech leaders, justice in algorithmic decisions, and the pitfalls of speculative future-oriented thinking in philosophy and society.
Power of Self-Fulfilling Prophecies (01:49–03:12)
Limits and Force of Social Predictions (04:13–05:57)
Power and Accountability in Prophecy (06:50–09:37)
Justice, Due Process, and Predictive Algorithms (10:45–12:38)
Societal Risks and Vicious Cycles (12:38–13:52)
This episode critiques our societal tendency to mistake powerful voices and algorithmic outputs for oracular truth, urges skepticism of predictions (especially those that serve vested interests), and argues for a more grounded, philosophically engaged approach to both the future and to justice. Veliz advocates for reflective, present-focused decision making, and for philosophy as an antidote to prophecy’s seductions.