
Alex
If prediction is the basis for today's cutting-edge AI, shouldn't we examine the nature of prediction itself? Let's talk about it with Oxford philosopher Carissa Veliz right after this.
Alex
This week I'm live at Knowledge 2026, ServiceNow's annual conference in Las Vegas, where enterprise AI moves from promise to production. I'm sitting down with ServiceNow's president and CPO Amit Zavery on the platform strategy powering it all, their people and technology leaders on what AI means for the workforce, the engineering team behind ServiceNow's Nvidia partnership on what it really takes to run AI at scale, and Ulta Beauty on deploying AI across 1,300 stores. These are the conversations you won't hear anywhere else, and new episodes are dropping this week on my YouTube page.

We've all heard the stat: 95% of AI initiatives fail. It's not because the technology isn't ready. It's because you don't have the right process or the right partner. Meet Aboard. Aboard is your partner for AI transformation, which means they listen, use their very own powerful software tools, and deliver exactly what your company needs to thrive in the age of AI. Working with big and small clients, Aboard always delivers in weeks, not months. Your AI revolution is just beginning. Visit aboard.com to get your AI rollout right.

Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond.
Alex
We have a great show for you today. We're here with Oxford philosopher Carissa Veliz, who has a new book out this week called Prophecy: Prediction, Power, and the Fight for the Future. From ancient oracles to AI, we're talking all about prediction and what it means for our society. And prediction really is everywhere in our society. Wouldn't you agree, Carissa?
Carissa Veliz
Absolutely. Thank you so much for having me, Alex.
Alex
You bet. I mean, I was thinking about it when I saw that your book was coming out, and I said we have to have this conversation. And we're going to get into the AI stuff in particular in a moment, but just from a big-picture standpoint: everywhere we look today, we're trying to predict everything, right? AI, of course, or generative AI, is in its nature about predicting the next word. We also have the older versions of machine learning, which have lots of different predictive capabilities, like predicting whether you're likely to default on a mortgage. And then of course we're in the middle of this prediction market mania. What is happening?
Carissa Veliz
Exactly as you say, it's everywhere. Prediction sounds like the Holy Grail for everyone. Everyone wants to know what's around the corner because everyone's anxious about the future; that's where we will all be spending the rest of our lives. And whoever can get a glimpse of the future has a competitive advantage. But that script alone makes a lot of assumptions that are very problematic, because it seems to suggest that the future is written, and our task is to discover what's there, to discover this script that has been written for us. But actually the future isn't written. And even though it's frightening, the most important events in your personal life, but also in your business life and in our lives as a society, are the ones that are the most unpredictable. It's very easy to see what's ahead when the road is straight. It's the curves that are really hard to see, and in some cases impossible. And those are the ones that will change your life.
Alex
So we have this world where algorithms are making all these predictions that could influence us, that could steer us. Are you saying that we should just do away with those predictions, or that we should be mindful of the fact that there might be something hidden underneath? Because there clearly is.
Carissa Veliz
Yeah, I think we should be mindful. I'm not saying that we should do away with predictions. I use them, and in a way predictions are part of how we make decisions, but we should be much more enlightened about it. I think we're being incredibly naive. And in some cases, sure, we shouldn't use prediction. Let me give you an example. Take the justice system, or any system in which we really care about fairness, in which fairness should be the value that is more important than efficiency or profit. In those cases it's very tricky to use predictions, because when you predict that somebody's going to fail, you affect their life. Say you use an algorithm to determine that someone is unemployable and you don't give them a job. Because everybody's using more or less the same algorithm, trained more or less on the same data, that person will never get a job. And the company that runs the algorithm is going to say, oh, see, our algorithm is 99.9% accurate. But it may be producing that accuracy by creating the reality that it's purporting to predict, rather than that person really being unemployable. And here's the interesting thing: self-fulfilling prophecies are like the perfect crime, because it's like a murder weapon that disappears upon striking. It leaves no record, it creates no error signals. We will never know how that person would have fared, because they will never get the job and that data will never get collected. And so it seems like nothing untoward is happening, when in fact great unfairness may be happening and being covered up.
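[Editor's note: the "no error signal" mechanism Veliz describes can be sketched as a toy simulation. Everything here is invented for illustration, including the rejection rule and the success rates; it only shows how a screening filter that rejects candidates generates no outcome data for them, so its apparent accuracy is never tested against the counterfactual.]

```python
import random

random.seed(0)

# Each candidate has a "quirky" CV flag (which the filter dislikes) and a
# true counterfactual: would they have succeeded if hired? Numbers invented.
candidates = [{"quirky": random.random() < 0.3,
               "would_succeed": random.random() < 0.8}
              for _ in range(10_000)]

observed = []  # outcomes the company actually gets to see
for c in candidates:
    hired = not c["quirky"]  # the filter rejects every "quirky" CV
    if hired:
        observed.append(c["would_succeed"])
    # rejected candidates produce no outcome data: no error signal at all

hire_success_rate = sum(observed) / len(observed)
missed_talent = sum(c["would_succeed"] for c in candidates if c["quirky"])

print(f"measured success rate among hires: {hire_success_rate:.2f}")
print(f"rejected candidates who would have succeeded: {missed_talent}")
```

The company only ever measures `hire_success_rate`, which looks fine, while `missed_talent` (the people the filter wrongly screened out) is invisible to it: the murder weapon that disappears upon striking.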
Alex
So in that example you're talking specifically about AI filtering of resumes through job sites.
Carissa Veliz
Exactly. Yes.
Alex
Okay, here's my pushback on that one.
Carissa Veliz
All right.
Alex
I think that example leaves out the agency of people to bend the job application process to their will, to a degree. If we were just at the mercy of job application portals, then I would say, sure, this stuff is probably bad. But, and I think this probably applies to all your arguments, so let's have it out: isn't there the ability of people to just say, I don't want to be at the mercy of this algorithmic job portal, I'm going to write straight to the hiring manager and make the case myself? To sell yourself outside of this algorithmic filtration thing that the hiring manager knows will miss people, and that the workplace, indeed, understands is imperfect.
Carissa Veliz
Yes and no. For example, I've met someone who is really good at his job. He's a computer scientist, but every time he applies for a job through the normal procedure, he gets filtered out, and he doesn't know why. There's something in his CV that makes him look quirky, and algorithms don't like quirky. Then people get to know him and he gets offered these high-paying jobs from the same companies. However, there are many systems in which we're not allowing that leeway anymore. There are many systems in which you try to find the email of the manager and you can't find it. More and more we're being limited to these automated processes, and that leeway that is so important, that you're talking about, is disappearing a bit. That's one side. But the other side is that you might have people who are brilliant at what they do, but they don't have that kind of personality of looking for the manager. They might be a particular kind of nerd, right, who may be a genius at, I don't know, programming, or a genius at writing, but who is not socially savvy enough to try to break the system. And society wants that talent. We're missing out on important talent when we streamline everything.
Alex
But isn't that, to a degree, encouraging passivity? Think about the example of, well, their email address might not be listed. Think about how long it takes to get through these filters. And sorry to the hiring managers, because if people listen to this, your email inbox is going to get blown up, but I don't actually feel that bad about it. We're talking again about algorithmic hiring, going through these processes. It's arduous. I think you could spend half that time guessing email addresses until you get the right one. So rather than just throwing it out there and saying these AI algorithmic systems shouldn't be used because of this unfairness, maybe you actually have a better advantage if you do try to break out of the system, be active a little bit, and decide not to be at their whim. People have agency at the end of the day.
Carissa Veliz
Again, you're assuming that you can break out. But even if you're right that you can break out, the other side of the coin is that you're actually incentivizing something like stalking. The guys who will end up getting those jobs are the ones who are most insistent, who are most willing to break the rules sometimes. And one of the concerns I have, I don't know what it's like in your world, but in my world, in academia, I think we have a serious problem of fraud: people who are very well known and who have been very successful and who have fudged their data or committed other kinds of academic fraud. And it's precisely this kind of profile: very active, very insistent. I don't think we should be incentivizing that either. I think we should get the best of both worlds. We want the active people, and we want a system that encourages them in the right way. And we want the people who, I wouldn't call them passive, but who have other kinds of talents. You know, some people are introverted, and they tend to have different kinds of talents than the extroverted. And to put all our betting coins on the extroverts is, I think, losing a great pool of talent.
Alex
Okay, first of all, I'm definitely not encouraging stalking.
Carissa Veliz
No, no.
Alex
I think this can be done outside of the realm of stalking. And I also think that you don't necessarily need to be a fraudster to go make your case outside the system.
Carissa Veliz
Absolutely not. But it's the kind of incentive that attracts that kind of profile sometimes.
Alex
Yeah. I think we shouldn't let this take away from your broader point here, because I have seen these systems. I mean, I've been lucky enough not to have to apply for a job for a while, but I have friends who have gone through these processes, and I'm kind of stunned at what job hiring software looks like today. They filter for personality. And I understand why an employer would want some indication of what somebody's personality is like, but they do it to a degree where you have a great candidate in front of you, and one little misstep on a multiple choice, on a poorly worded question, filters them right out of the pool. I think that's actually a bad thing for employers as well.
Carissa Veliz
Yeah. Or using AI to read people's emotions in an interview. There are so many assumptions and so many glitches in that technology; it's very, very questionable. Another really interesting example is loan applications. If I'm a bank and you apply for a loan, and I have clear criteria about what you need to get X amount for a loan, those are verifiable facts. So if I say, Alex, you need $10,000 in your bank account to get this amount of loan, either you have it or you don't. If I reject your loan, but you do have the $10,000, you can prove me wrong and then we can solve it. But if you apply and I reject your application on the basis of a prediction, there's no way you can contest that, because predictions are not facts. At best, they're educated guesses. And because they're not facts, you cannot prove them false. And so it's a way to shroud a lot of injustice and to lessen accountability.
Alex
Okay, for the sake of argument, let me now take the bank's side.
Carissa Veliz
Yeah, absolutely.
Alex
There are great machine learning companies, like C3 AI, for instance, that will evaluate mortgage applications, and, forgive me if I don't get this exactly right, but this is what my research points me to: they'll put you in a category in terms of your likeliness to pay back a loan. Green: very likely. Yellow: all right, kind of borderline. Red: statistically, you probably won't pay it back. If I'm a bank, my job is to put money out and recover it. That's the whole point of being, let's say, a mortgage officer: to loan that money out and do it with a high degree of confidence that you're going to get it back. And because that exists, the mortgage system can exist; we give people all this money that they otherwise wouldn't be able to obtain to buy a house. So if a bank can use this software to do that job more effectively through prediction, what's the problem?
Carissa Veliz
Even though banks are businesses, and we want them to do well, we depend on them to do well, really. I mean, we saw in the financial crisis in 2008 what happens when they mess up. But a loan is also a very important opportunity. To give a loan to someone is life-changing, and to deny a loan to someone is life-changing. And so there are also considerations of fairness going on. If you have an algorithm that is not very accurate and not very fair, but profitable enough, then if it were just about profit, it would be fine. And there are some areas in which, frankly, it's just about profit, like maybe retail, and that's fine. But because this area also has to do with life opportunities, when you scratch the surface of those algorithms, you find problems. For example, The Markup had a very long story a few years ago about two people who had applied for a mortgage and been denied. When The Markup investigated, their file looked exactly the same as, or very similar to, the files of two other applicants who happened to be white; it turns out the denied applicants were Black. You start getting all these correlations that are very unfair. When you have clear and contestable criteria, there are two important things. One is that the criterion is usually causally related to whatever you want to know. So if you have $10,000 in your bank, that means you've probably been good enough at saving, and that means your likelihood of paying back this amount of loan is high. That's a causal relation. But machine learning sometimes picks up on spurious correlations: if you have three credit cards, you're more likely to pay back, because it just happens to be that people with three credit cards have had better luck paying back. The other really important thing is that if you don't fulfill the requirements, you know what to do to change the decision. If you have only $9,000 in your bank, you know that you need $1,000 more, and so you know exactly what to do to get the kind of answer that you want. When it's black-box statistical pattern matching, you have no idea what you need to do to get the loan. And in some cases, the best way to get the loan would be to have a different race. That seems not only unfair but also irrational in some way.
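[Editor's note: the contrast Veliz draws between contestable criteria and a black-box score can be made concrete with a minimal sketch. The threshold and helper functions below are hypothetical, loosely modeled on her $10,000 example; they are not any real bank's logic.]

```python
MIN_SAVINGS = 10_000  # hypothetical requirement, per the $10,000 example

def rule_based_decision(savings: int) -> dict:
    """Clear criteria: the decision cites a verifiable fact and a remedy."""
    if savings >= MIN_SAVINGS:
        return {"approved": True, "reason": "meets savings requirement"}
    shortfall = MIN_SAVINGS - savings
    return {
        "approved": False,
        "reason": f"savings below ${MIN_SAVINGS:,}",
        "to_change_outcome": f"save ${shortfall:,} more",  # contestable
    }

def black_box_decision(score: float) -> dict:
    """Opaque score: nothing the applicant can verify, contest, or act on."""
    return {"approved": score >= 0.5, "to_change_outcome": None}

print(rule_based_decision(9_000))
print(black_box_decision(0.43))
```

The rule-based path tells the applicant with $9,000 exactly what would change the outcome; the score-based path, by construction, cannot.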
Alex
Yeah. First of all, it's great having you here, because recently especially we've had a lot of people from industry on the show, and I always love to feature the critics, because it's important to hear your voices and talk things through. So don't take my pushback here as me being a stand-in for industry. No, absolutely, we need to have this conversation, and the same way I'll ask probing questions of the people in industry, I'm going to ask you some more. By the way, this is all sort of old-school machine learning, which is predictive; we're going to talk about more of the generative AI side of things in a moment. But let's keep going with this, because it is rich and interesting to talk through. There really is a question here. The question is, again: if this system helps a bank do a better job, shouldn't the answer be, instead of throwing it out, to investigate it for bias? If it's biased, fix that bias. And if not, let it run. For instance, I'll just talk through this three-credit-card example. People with three credit cards, for whatever reason, are better at paying the bank back on the loan. Now, it might seem totally irrelevant, but at the end of the day, if having three credit cards is statistically correlated with being more likely to pay the bank back, then that's maybe an additional loan the bank could make that it wouldn't make otherwise, if it didn't have that data. So instead of saying this system is rotten, throw it out, shouldn't the right response be to investigate it for bias and inaccuracies, but overall maybe keep it?
Carissa Veliz
Well, there is value in that, for sure. However, even if you investigate for bias and keep it, there's still the problem that the prediction will affect that person's life. If you don't give someone the loan, they will do financially worse, and then you can claim accuracy. But accuracy at the price of creating that reality is not what we're looking for. It's not the kind of accuracy we're looking for.
Alex
You can't give everybody a loan, though.
Carissa Veliz
No, you can't give everybody a loan. But when you say, well, let's investigate for bias or investigate for inaccuracy, there is a limit to what we can do, because we will never have the counterfactual. This is not a randomized controlled trial. And you still have the problem that without clear criteria you can't make it a contestable process, and you can't give the person the conditions under which they would get a different response, which seems like an important thing to do. We are building systems that are very Kafkaesque, that are impossible to navigate. I don't know if you've had this experience, but they are becoming so alienating and so Kafkaesque that people start having magical thinking about the algorithm, attributing beliefs to it and trying to figure out what it wants. And this is something the philosopher Hannah Arendt warned about, because back in the 1930s there was something similar with very opaque bureaucracies that were very random. What it does to people is create a sense of alienation, a sense of not being able to understand the rules by which you are ruled. And that is incredibly toxic for human psychology.
Alex
You know, it is interesting, because sometimes you do really get the bad outcome here. I think this is a real thing. There was a tweet over the weekend from somebody who told JetBlue that they'd seen a $230 increase in a ticket after one day, and that's crazy, and they're just trying to make it to a funeral. And the JetBlue account says: try clearing your cache and cookies or booking with an incognito window. We're sorry for your loss. You're right that sometimes these algorithms, I mean, there are times when they just clearly break down and they do become Kafkaesque, or just really tough to navigate.
Carissa Veliz
And so many times there's no one to complain to. There's no one who can understand you, who can fix a mistake. It's just a machinery.
Alex
That's right. Well, I think this really gets to one of the tougher parts of this, which is that a lot of AI is being used here, whether it's predictive AI or generative AI, and a lot of this stuff will make decisions where you just have no idea how the decisions are being made. Within AI there's this large field, probably not large enough, called interpretability, and its whole job is trying to figure out how generative AI systems work. So you're putting these systems out there, people are relying on them, and along the way you're trying to figure out how they work, trying to interpret them. Isn't that backwards?
Carissa Veliz
Yeah, it is. And something really interesting, it's a bit of a metaphor, so I'm not saying it's exactly the same thing, but we can really learn a lot from ancient Greece and ancient Rome. We started this conversation by pointing out how much we're relying on prediction. We've always relied on prediction, but I think there are times in history when that reliance goes up and down, and I think this is a peak. Another peak was ancient Greece and ancient Rome. If you were to interview an ancient Greek person and ask, what do you think about the Oracle of Delphi, they would say, oh, it's cutting-edge technology, it's the best we have to make decisions. And how does it work? Well, we're trying to interpret it. Right. And the same thing with astrology: it was a very technical practice, about how to read the stars, how to measure the distances between them. So in a way, we've seen this before. Even though the technology is different, the political role is actually quite similar.
Alex
Okay, but on this one: the Oracle of Delphi didn't know anything. I mean, it's a story, right? Let's say you have an oracle back in the day saying there'll be a great famine. That's total bullshit; they don't know what they're saying. But an AI system can actually predict that there will be a famine. Let me give you an example where prediction could be really good. Say what you will about Google, but one of the things they're really working hard on in Google research is flood prediction, and we know floods kill way too many people, which is totally preventable. Now, do we know every single thing about how these machine learning algorithms make their predictions? Maybe not, similar to the way we didn't know how an oracle made its predictions. But in the real world, we can tell whether they're accurate or not, and they have been accurate, and they have saved people's lives. That, to me, is a great form of prediction that AI can help us with. Now, is this something, in full disclosure, that Google holds up and says, look how good our AI is, look over here, don't look at the rest? Yes. But it doesn't change the fact that that's, I think, an undisputed good.
Carissa Veliz
And this is part of why it's so important to have this conversation, which astonishingly we haven't had before as a society. Sure, there are kinds of predictions that are very good, like weather prediction. I look at my weather app every single day, multiple times a day, and I will continue to do so. But then there are other kinds of predictions that are clearly very problematic. And the interesting thing is that there's no formula; there's no way to say, okay, if you check this box, this box, and this box, then it's fine. It's a public debate that we need to have. And that's why it's so important: with Google, I haven't looked at the flood work, but let's say that's correct. That doesn't mean that every kind of prediction Google does is equally valid. One very, well, not fun, but interesting example is when Google tried to predict flu season and pandemic-type events. It tried for years and years; it increased the complexity, it increased the data, and it could never do it, and eventually it shut the project down. That's partly because it was relying on people doing searches, and when you search for symptoms, sometimes you search because you're having the symptoms, sometimes because your sibling is having the symptoms, or because you're worried you might have them. So it was too confusing, and they couldn't do it. And again, even though there is no checkbox and no easy way to tell which predictions are acceptable and which are unacceptable, one thing to take into consideration is: is this a prediction about a physical thing, like floods, or about something more social?
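[Editor's note: the search-signal confound Veliz describes, often associated with Google Flu Trends, can be sketched in a few lines. All numbers here are invented; the sketch only shows why an estimator calibrated on search volume breaks when the mix of sick and merely worried searchers shifts.]

```python
# Search volume mixes truly sick people with worried-but-healthy people,
# and that mix shifts over time (e.g. when media coverage spikes).

def search_volume(n_sick: int, media_anxiety: float) -> int:
    worried_searchers = int(1000 * media_anxiety)  # healthy people searching
    return n_sick + worried_searchers

# Calibrate a naive estimator during a quiet period (low anxiety):
ratio = 500 / search_volume(500, 0.1)  # "sick per search" learned back then

# Later, a media scare boosts anxious searches while true sickness is flat:
estimate = ratio * search_volume(500, 2.0)

print(f"true sick: 500, search-based estimate: {estimate:.0f}")
```

The estimator wildly overestimates, not because the math changed, but because the social behavior generating the signal changed, which is exactly the thing/social distinction drawn above.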
Alex
Yeah. Well, let's go back to the pandemic example. This is the first time I'm learning about that Google example, but there are other versions of AI-based prediction that are helpful when it comes to pandemics. Wastewater analytics, for instance, is really interesting. There are companies, we've had them on the show actually, that, to take the COVID example, see how much virus there is in the wastewater, look at the rate at which it's advancing, and then actually predict a spike. That could free people, because if there's no prediction of when the spike is going to be, the answer might be: lock down, everybody. The other side of it is, if you can predict that there's going to be a spike, you can be selective about when you want to shut things down versus open them up.
Carissa Veliz
Yeah. One important thing is that the closer you are to the present, the more likely your prediction is reasonable. So if you hear somebody predicting what's going to happen in a thousand years, take it with a big, big pinch of salt, or in fact just laugh it off.
Alex
The people who come on this show won't even predict a year into the future, because this AI world is changing so fast. But yeah, a thousand is something.
Carissa Veliz
But long-termists in effective altruism are thinking about the world a thousand years from now.
Alex
We gotta.
Carissa Veliz
Yeah, or, you know, some people are thinking about the world in 50 years or 25 years. The more you ground yourself in the present, the better: if you see the analytics of the wastewater right now, that is very useful information, and depending on how much you know about the virus and how much experience we've had, you might be able to make some useful predictions. Now, that doesn't mean we will be able to predict the next pandemic. It might be a virus we've never seen before, and we don't know how it behaves. And one very important warning: beware of people who promise a prediction in exchange for huge surveillance, because the price to pay for mass surveillance is a police state. It leads to authoritarianism. So often we're willing to surrender our privacy for promises that are never kept, and that are very problematic even if they could be kept. We're selling off our democracy.
Alex
Okay, but we need an example here. Where is the surveillance happening that leads there? What are these trade-offs?
Carissa Veliz
So I used to live in New York City and I hadn't been in the city for a while and I've noticed how there are many more cameras than when I used to live here.
Alex
Right.
Carissa Veliz
And many people claim that we need the surveillance for safety: the more surveillance we have, the safer we are. But that is empirically inaccurate. The safest countries in the world are not the most surveilled ones. Spain is one example: it has some of the lowest statistics for any kind of crime, including homicide and other violent crimes, and it's not better surveilled than the US or the UK. In fact, the UK is the most surveilled country in Europe, and it has more crime. So that's one example. But it's important because when you have protests, and in particular peaceful protests, it's very important to have anonymity. That is one of the bedrocks of democracy. And when you have cameras all over the place, and now with facial recognition being so easy to use, you are eroding one of the most important tools in the toolbox of democracy.
Alex
I have so many questions about this. I mean, first of all, I'll just ask: have you been to China?
Carissa Veliz
I've read a lot about China.
Alex
Okay. I was in Beijing for a day, but that was enough to see the level of surveillance there. A lot of cameras. Now, there is a feeling that society is safe, but it's not a society I'd want to live in.
Carissa Veliz
Exactly.
Alex
But there is some sort of spectrum there, where you probably do want some cameras up. For instance, a security camera in some areas, that's good, right? Without any video footage, you probably solve fewer crimes. So isn't it a matter of finding where on that spectrum you want to live?
Carissa Veliz
Yes, but I think we're getting it very wrong. The practical question we're asking, by having this amount of surveillance, is: how much surveillance can liberal democracy take? And I'm afraid that we might find out. I don't want to find out, because I don't want to live in China either. And the illusion of a world without crime ignores the fact that it would be a world full of a very different kind of crime: authoritarianism.
Alex
Yeah, that is a problem. So talk a little bit about how generative AI comes into this, because I think we hinted at it at the beginning and then went to this earlier version of machine learning that's everywhere. But now there's a trust toward chatbots, and you can really steer your life toward different outcomes based off of what ChatGPT tells you. It's probably worth at least thinking about that before diving in headfirst like I often do.
Carissa Veliz
Absolutely. And maybe just to end the previous topic, surveillance is important because the whole machinery of surveillance is there to feed the machinery of prediction. So these two machineries are intimately related, and that's why it matters.
Alex
But we're not living in, like, Minority Report, though.
Carissa Veliz
We seem to be walking in that direction. And I would like for us to walk in a different direction.
Alex
I mean, let's just talk it through. We're not, like, arresting people for crimes they may commit, are we?
Carissa Veliz
No, but we're using predictive algorithms in the justice system for sentencing, for many aspects in the justice system and for the reasons that we explored with insurance or with loans or with jobs. That's very problematic.
Alex
Talk a little bit about how those predictive algorithms are used in the justice system, and then we're going to get to this generative AI question. Can you take us into it?
Carissa Veliz
Well, it depends on the place; they vary a lot. But some algorithms are used to assess the risk of a person committing a crime, and on that basis to decide whether they might get bail, for example, or parole.
Alex
All these things.
Carissa Veliz
All these things. Another case that worries me, and that I think people are less aware of, is when, for example, an insurance company decides whether to cover a lawsuit, because they will only cover a lawsuit if the case has at least a 51% chance of succeeding. That makes sense in some ways; you can see the rationale behind it. But at the same time, when we make the justice system about probabilities, we lose its principled approach. You make it very easy for the bad guys to get away with it, because you don't have to make it impossible, or even very hard, for people to challenge you. You just have to make it slightly unlikely for them to win, and then you get off scot-free. So there are all kinds of distortions of justice when we introduce probabilistic thinking into an area that I think should be based more on principle.
Host (possibly a co-host or interviewer)
Okay, I got one more for you. I just want to hear you talk through why there's a right to privacy if you protest. I'll tell you what my fear is, all right? And it's good to just talk it through. Now I'll say something negative about algorithms. We have a world where algorithms drive people to extreme positions: the more extreme you are, the more likely you are to get play in the algorithm. And part of that is anonymity. Right? You can say these things as trial balloons with anonymity, sort of see how people respond to them, and then double down. And I think one of the fears with anonymous protest, and I'm just talking it through, I'm not taking a position here, is that it takes some of those online dynamics and brings them into the physical world, where if you're unidentified, the temptation, or the ability, to move to the extreme grows further and further. I believe in free speech, but I also think incentives matter. So talk through what you think about this one.
Carissa Veliz
Absolutely. I have a paper called Online Masquerade, which I'm going to send to you. And the gist of it is that even though it's very intuitive to think that way, when you look at the empirical data, it shows that people who are identified online tend to be more aggressive, and they tend to be more followed and more successful in that aggression. And we see this in the public sphere: we know important politicians who put their name on things and say very outrageous things. And it works. So anonymity is not necessarily leading to more aggression or more toxicity. The second thing is that if you're in the public square protesting, and let's say you're protesting peacefully and there's one person who is aggressive or does something illegal, then of course the police can always arrest them. But we don't need mass surveillance in order to have that accountability. We didn't have mass surveillance a few decades ago.
Host (possibly a co-host or interviewer)
Right, but I'm not saying the mass surveillance. I'm just saying the anonymity part. Like if everybody. If everybody protests in a mask, you know, doesn't that you think that leads to better outcomes than if. Than if they don't?
Carissa Veliz
Well, we shouldn't need a mask because we shouldn't have this kind of surveillance
Host (possibly a co-host or interviewer)
that identifies us. So the mask is a product of the surveillance.
Carissa Veliz
Yeah, exactly. But even if somebody wears a mask and, you know, they break a window or whatever, then have the police arrest them and take off the mask, you know?
Host (possibly a co-host or interviewer)
Right, but you can't. I mean. Okay, I'm just gonna. I'll let that sit. I don't want to spend the whole day debating this, but it's interesting to hear you talk about it. All right, now talk about the generative AI side, finally.
Carissa Veliz
Yes.
Alex
So if we have these systems of
Host (possibly a co-host or interviewer)
prediction in our world, I mean, again, like people who are building genai tools, they care very much about prediction, predicting the next word, predicting outcomes. And once they can predict outcomes, then their agents can take the next step. Where is that leading?
Carissa Veliz
Yes. So some authors make this distinction between predictive AI and generative AI, and I am not sure it makes sense, because both kinds of AI are essentially predictive. We might use them differently and they might look different, but essentially they're both machine learning. And what machine learning broadly does is take some data and project the data it doesn't have based on the data it does have, roughly, whether it's predicting the next word or predicting whether somebody's going to be a good employee or not. In the case of generative AI, it's fascinating. I don't know where to start. It's fascinating how it got trained, for one thing: with copyrighted material, with personal data. But we can park that. Just notice the way it works: it's a very sycophantic system, as we know. It likes to please people, because that's the way it gets you to be engaged. And so it will tell you things like, oh, that's a brilliant idea, and it will continually validate you. They were designed to do that. They were designed to make people feel satisfied instead of being designed for something else, for example, for being truth-tracking, which would be much more useful if, say, you're a researcher. And I think sometimes we lose sight of that. One way to put it: in the philosophy world there was a philosopher called Harry Frankfurt who wrote a book called On Bullshit. And Frankfurt says that bullshit is very dangerous for democracy, because the truth teller and the liar are playing the same game on opposite sides of the court. The liar has to know what the truth is in order to lie, so he has to care about the truth. The bullshitter doesn't care about the rules of the game at all; they're not playing the game.
And that's very toxic for democracy because it's very hard to have a debate or a conversation with someone who doesn't care about the truth, who will say anything to just have the kind of reaction they want with no regard for the truth. And that's essentially what a large language model is. It has no regard for the truth. It wants to please you. If what pleases you happens to be true, great. But if it's not true, then it doesn't care one way or another.
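To make the "projecting the data you don't have from the data you do have" idea concrete, here is a deliberately tiny, hypothetical sketch of next-word prediction, a bigram counter rather than anything resembling a real large language model, which uses neural networks, not frequency counts. It's only meant to illustrate the statistical, truth-indifferent character of the mechanism being discussed:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count what follows it.
# A minimal sketch of "projecting the data you don't have from the
# data you do have" -- real LLMs use neural nets, not raw counts.
corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

Notice that nothing in this mechanism checks whether "the cat" is true of anything; it only tracks what tended to come next in the training data, which is the point Carissa is making at a much larger scale.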
Host (possibly a co-host or interviewer)
But is that true? Because the labs have done a lot of work to ground these models in truth. And in fact, if it was a bullshitter the way you explain, there would be very little economic value. But we can see now that there's real economic value.
Carissa Veliz
We don't know whether there's real economic value. The jury's still out on that. But yes, the labs have done more.
Alex
You don't think.
Host (possibly a co-host or interviewer)
I mean, I guess it seems to me like we're past the point where there's a real question here. Now, maybe it's not going to be broad economic value in a way that makes the boom appear justified. But look at places like coding: there are areas where we can see today that there is definite, real economic value.
Carissa Veliz
I don't know. I'm not saying there isn't; I don't know. Because sometimes these systems make mistakes that are then very expensive to fix, and it's not easy to calculate whether we are getting economic value. There was a paper recently in the Harvard Business Review that suggested that even when people think they're being more productive with AI, when you have researchers look at it, they're being less productive, because they're spending a lot of time fixing what the AI gets wrong without noticing it. So I don't know. Maybe we do, but it's not crystal clear to me.
Host (possibly a co-host or interviewer)
Okay, but even if we do, it's nice to have somebody with a different perspective here. We shouldn't have the same people all believing the same thing.
Carissa Veliz
No, of course, and I grant that. I don't know. I'm not just saying something; I just don't know. But even if they do... where were we?
Host (possibly a co-host or interviewer)
I mean, this is really the key question about generative AI, where your argument is that it's a bullshitter. And I will just throw out there, and this is something I really do believe: these companies are spending lots of hours and dollars trying to ground these models in reality, because if you do that, they become much more useful. And they've become much better at it over time.
Carissa Veliz
Absolutely. But the interesting thing is that the way they've become much better has been by getting away from this probabilistic and statistical thinking. So, for example, when you start chatting to a chatbot and it realizes that what you're looking for is something in, say, a manual, a PDF, then it refers to the PDF, and that's how it grounds itself in reality. Or when it realizes you want a calculation, it plugs into a calculator, because these systems cannot calculate, as we know. So it's interesting that the way to make them better is to move away from this probabilistic thinking. Part of my criticism is not about AI, or any particular kind of AI, but about prediction: how we're using prediction and how naive we've been about it. I think if these systems had been designed differently from the start, they would have needed fewer patches. And how do we think about this going forward, so that we design systems from the start to be truth-tracking rather than fundamentally about engaging people for profit?
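The grounding pattern described here, deferring to a deterministic calculator or a document instead of generating statistically, can be sketched as a simple routing loop. To be clear, everything below is hypothetical illustration: the function names, the keyword-based router, and the sample document are all invented for this sketch, and real systems let the model itself decide when to call a tool:

```python
import re

ARITHMETIC = re.compile(r"[\d\s+\-*/().]+")

def calculator(expression: str) -> str:
    # Deterministic arithmetic instead of statistical generation.
    # eval() on a vetted arithmetic-only string keeps the sketch short;
    # a production system would use a proper expression parser.
    if not ARITHMETIC.fullmatch(expression):
        raise ValueError("not a plain arithmetic expression")
    return str(eval(expression))

def document_lookup(query: str, documents: dict) -> str:
    # Stand-in for retrieval: return the first document mentioning the query.
    for name, text in documents.items():
        if query.lower() in text.lower():
            return f"[{name}] {text}"
    return "no matching document"

def answer(user_input: str, documents: dict) -> str:
    """Route to a grounded tool when one applies. The crude keyword check
    here stands in for the model's own judgment about when to call a tool."""
    if ARITHMETIC.fullmatch(user_input):
        return calculator(user_input)
    return document_lookup(user_input, documents)

docs = {"manual.pdf": "To reset the router, hold the button for 10 seconds."}
print(answer("2 * (3 + 4)", docs))       # routed to the calculator: "14"
print(answer("reset the router", docs))  # routed to the document lookup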
Host (possibly a co-host or interviewer)
But how impressive is it that they know? Okay, actually my knowledge actually stops here. I should use the calculator or I should actually go look in the PDF. I would say the argument that the model makers would make is you can't have the tool calling before you have the base model. And it took a couple of years for these base models to get smart enough to know when to call those tools.
Carissa Veliz
That sounds great and I'm on board.
Host (possibly a co-host or interviewer)
Okay.
Carissa Veliz
But in practice, they're still not quite there. I'll give you a very recent example; it's weeks old. If you ask one of these chatbots: I have a box and I'm going to put two bunnies in it, and then five months later I take five bunnies out, how many bunnies are there? It will say: minus three bunnies. So they still don't have enough understanding to always figure out what they need. In this case, they might have gone to a calculator, and that wasn't appropriate, because they don't understand that bunnies can reproduce. So, yes, with nuance.
Host (possibly a co-host or interviewer)
Yeah. I mean, there are examples of people asking the most advanced models, like, how many P's are there in "strawberry"? The model is used to being asked how many R's are there in "strawberry," and it gets it wrong.
Carissa Veliz
Exactly.
Host (possibly a co-host or interviewer)
Let's just end this segment. We do need to go to a break, but let's end sort of with your broad thesis here. And you tell me if this is the right way to encapsulate it: we live in a world where there's a lot of prediction, more prediction around us all the time. Prediction in the AI models, prediction that's influencing the jobs we get, whether we get a loan, all these things. And rather than just take this notion of prediction for granted, we should probably pay attention to the nature of those predictions themselves. Is that sort of what you're saying?
Carissa Veliz
Yeah, exactly. Because predictions can be weapons of power. They can be power plays in disguise, and we need to be less naive and smarter about them.
Host (possibly a co-host or interviewer)
Okay, well, that is all being put on steroids in these prediction markets because oftentimes you'll see a prediction in a prediction market. And the question is, is that somebody manifesting an outcome? Is it someone with direct knowledge of an outcome, or is it actually just a market for what might happen? We'll cover that when we come back right after this.
Alex
Look, if you have a kid in school right now, you know the drill. What should take 20 minutes of homework ends up taking two hours and usually ends in tears. And every good tutor, well, they're fully booked for months. This episode is brought to you by Brainly. Brainly is an AI-powered personal tutor built by educators, not a general-purpose chatbot. It doesn't just give your kid the answer. It walks them through step-by-step explanations so they actually understand the material. It learns how your child learns, diagnoses when they're struggling, and builds a personalized learning path in under three minutes. Available 24/7, there's no scheduling headaches, and it's just a fraction of the cost of a private tutor. Finals are coming. Build your teen's study plan now.
Host (possibly a co-host or interviewer)
It only takes minutes.
Alex
Go to brainly.com/bigtech to get 50% off your first Brainly subscription with my code BigTech. That's B-R-A-I-N-L-Y dot com, BigTech.
Alex
Most leaders know how
work is supposed to happen, but when it comes to how it actually gets done day to day across tools, teams, and handoffs, they're mostly guessing. That's exactly the problem Scribe Optimize was built to solve. Trusted by over 80,000 enterprises, including nearly half of the Fortune 500, it gives leaders a live view into how work is really happening across approved business apps without interviews, manual process mapping, or extra effort from the team. And because it's continuously analyzing real workflow activity, the insights stay current instead of going stale the moment a process changes. You can see which workflows are happening, where time is going, and which tools are involved. It automatically surfaces top issues, explains why they're happening, and even recommends ways to fix them with estimated time savings. And importantly, it's built with privacy in mind, so activity is only captured in admin-approved business apps and user-level data is anonymized by default. The kind of visibility that used to take months is now just always on. If you're ready to stop guessing and start seeing, visit scribehow.com/bigtech. That's S-C-R-I-B-E-H-O-W, BigTech.
Progressive Insurance Announcer
Insurance isn't one size fits all, and shopping for it shouldn't feel like squeezing into something that just doesn't fit. That's why drivers have enjoyed Progressive's Name Your Price tool for years. With the Name Your Price tool, you tell them what you want to pay and they show you options that fit your budget. Enough hunting for discounts, trying to calculate rates, and tinkering with coverages. Maybe you're picking out your very first policy, or maybe you're just looking for something that works better for you and your family. Either way, they make it simple to see your options. No guesswork, no surprises. Ready to see how easy and fun shopping for car insurance can be? Visit progressive.com and give the Name Your Price tool a try. Take the stress out of shopping and find coverage that fits your life, on your terms. Progressive Casualty Insurance Company and affiliates. Price and coverage match limited by state law.
Host (possibly a co-host or interviewer)
And we're back here on Big Technology Podcast with Carissa Véliz. She's an Oxford philosopher and the author of Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI. Great title. So what do you think about prediction markets, Carissa?
Carissa Veliz
They scare me.
Host (possibly a co-host or interviewer)
Okay. After our first-half conversation, I'm not surprised. What particularly about them scares you?
Carissa Veliz
So the argument for having them is that they can be a source of knowledge. Right. When people bet with their own money, and they lose if they get it wrong, they'll try to get it right. And when you have a lot of people placing bets, in theory we can harness the wisdom of the crowds. All of which sounds great, but it assumes that prediction is a kind of quest for knowledge. It doesn't consider that sometimes prediction is a quest for power. So, for example, if you want to influence public perception and you have enough money, you can bet heavily on something or someone to make it look more popular. And we already have examples of politicians betting on themselves.
Host (possibly a co-host or interviewer)
Great use of campaign funds.
Carissa Veliz
Yeah, exactly.
Alex
Make it look inevitable.
Host (possibly a co-host or interviewer)
That's what every campaign tries to do.
Carissa Veliz
Exactly. And when these prediction markets start having deals with newspapers, in which the newspapers report on the prediction as if it were a fact, then it becomes a really smart way to invest your campaign funds. Another concerning kind of case is one in which there are many ways to make a prediction come true, and one of those ways is to make it come true after the fact. I don't know if you read that there was a case in which an Israeli journalist had reported on a strike in the conflict, and some people bullied him to try to get him to change his story, because they stood to win $900,000 from a bet.
Host (possibly a co-host or interviewer)
Yeah. It's like fantasy sports.
Carissa Veliz
Yeah. Another concerning case: six anonymous accounts earned $1.2 million on a prediction market betting on the attack on Iran. And some of those wallets were funded hours before, which suggests that they might have had insider information. And if they had insider information, did that conflict of interest lead to a different kind of decision? Another concerning case is when an adversary might be using those prediction markets to inform their own tactics, and so it might change the conflict itself. And even when there isn't any bad player, even when it's just well-intentioned people, I worry that many people thinking there's going to be a war makes it much more likely for there to be a war, because the other country interprets it as a threat. Then they escalate, and we escalate in response, and suddenly it's a spiral that nobody wants. But our expectations shape the future.
Host (possibly a co-host or interviewer)
Why do you think people are so enthralled by these markets? I mean, they're having a moment because they've been accurate, better than the polls in some cases. But the outsized attention and interest in them is very interesting. What do you think is behind it?
Carissa Veliz
I don't know.
Host (possibly a co-host or interviewer)
You're a philosopher.
Carissa Veliz
I'm a philosopher; I don't know. These are hypotheses. But one hypothesis is that we have truly become so accustomed to thinking in these betting terms that we are exporting that kind of mentality to more and more spheres of life. And I think that's a very bad thing. It also has to do with gamifying life. There's something very disturbing to me about standing to earn money from a bet in which, if you win, somebody's going to suffer greatly, as in the case of a war. Because you might say, well, prediction markets aren't that different from the stock market. Right? It's also a kind of bet. But in the stock market, when you invest in a company, you're actually contributing capital to that company, in a way that is an important contribution to society, whereas the prediction market is just a bet. And the only value they might have is if they're accurate. But accurate at what price, accurate in what sense, and accurate when? There's a lot of noise. Even if in one instance you might say, oh, in this case the prediction markets were more accurate, well, what does that really mean? And it doesn't nullify all of the other problems. We don't want to gamify everything. But maybe another reason why they're so popular is that there is this general sense that we're living through times of high uncertainty, and that, you know, leads people to be anxious; I can feel it as well. But I would like to invite people, when they feel that anxiety about uncertainty, to realize that uncertainty is actually good news. Because it means that the future is not written. And that means that we can intercede in it, that we can influence it. And that is the great news. If you knew exactly what was going to happen tomorrow, you'd probably live in a police state.
Host (possibly a co-host or interviewer)
Yeah, but then, I mean, even if there's a prediction market out there, you could probably also intercede. I don't think you have to give up. Same thing with political polls. Right. You could say the same thing about political polls as the prediction markets, that they become self fulfilling prophecies. Because they do in many cases.
Carissa Veliz
Yeah. And why do we do political polls? In a way, we do it for entertainment, and is that good enough? Because I'm not sure it is. Another reason might be that you want to be well informed, because depending on how things are going, you might vote one way or another. Right? Tactical voting. But I'm not sure we should incentivize people to be tactical voters. The ideal democracy, I think, is one in which people vote according to their conscience; that says more about what they want, and that is more democratic. It seems to me we lose something when we push people to think tactically.
Host (possibly a co-host or interviewer)
Yeah. I don't think I'm gonna stand on the table for political polls.
Carissa Veliz
Okay.
Host (possibly a co-host or interviewer)
They kind of annoy me.
Carissa Veliz
Also fair.
Host (possibly a co-host or interviewer)
All right, let's end with this. I mean, you have a perspective that you gotta use comedy in this era, and that's sort of a counterweight to some of these ills that you see. Talk a little bit more about that.
Carissa Veliz
Yeah, it's really funny, because my first book, Privacy Is Power, is kind of gloomy in a way, because at the time everybody was so excited about tech and not seeing surveillance, and I felt we needed a warning, more of a warning. But now it seems like we're in such a gloomy space, in which so many people are making horrible predictions about the future. I talk with my students, and sometimes I don't know if young people can even imagine a bright world. And if they can't even imagine it, how are we going to get to that kind of bright future? So I wanted to emphasize the good things that we have. Two very good things that we have, two very important resources and tools. First, the analog world. Sometimes we forget about it. We are so dazzled by the digital that we forget the world of things: your favorite coffee shop and your favorite bar, the people you love, your dog, the ecological world, trees and rivers. To ground ourselves there, and cherish and protect that. The second thing is humor. And humor is quite important, not only because it's a way to make life more fun and get through the hardest parts of life better, but because it's also a very important tool in the toolkit of democracy. When you lose your sense of humor, you're probably also losing some amount of freedom and democracy. Milan Kundera, the novelist, wrote a novel called The Joke making exactly this point, given his experience with communism. So one way to confront all these gloomy predictions is, first, noticing that they're predictions. Predictions are not facts; they can be defied. And thinking: okay, is that the future I want? And if not, what am I going to do to create the future I want to live in? But secondly, to treat it with a little less seriousness. I'm not saying be mean or anything, but just laugh a little bit about the absurdity of life. And humor is also a kind of intelligence.
It's a kind of noticing the absurd, noticing what's off. And one example I give in the book is that of Seinfeld, because it's also about defying predictions. When something's funny, it surprises you in a certain way: part of what makes a joke funny is that you're expecting something and then you get something else. And Seinfeld was brilliant at this, is brilliant at this. The show is a very interesting case because it's exactly the opposite of what an algorithm would select. The show was incredibly unsuccessful as a pilot. Focus groups thought it was weak, and people didn't like it; it wasn't what people wanted to watch. So if we had had algorithms back then selecting what people want to watch, Seinfeld would not have been one of those cases. But there was one executive at NBC who really believed in the show and championed it. The first few seasons were a bit successful, it had a niche following, but it was still small. And then it took off. And part of why it took off is that it changed people's sensibilities. It changed our sense of humor. That's part of what great comedy or great art or great literature does to us: it makes us look at the world differently. And when we use prediction too heavily, when we only predict what's going to be successful based on what has been successful in the past, we miss out on those innovations that would make us look at the world anew.
Host (possibly a co-host or interviewer)
Yeah. And to your point, the one thing LLMs do worst is humor. They cannot make jokes. And I think it's because, as you point out, they're just used to the average of averages, and they don't throw curveballs.
Carissa Veliz
Exactly. And also because there's no one there. There's no one being irreverent towards power. And part of comedy is that it's like the court jester. What makes it so funny is that, you know, they are challenging the king in a way.
Host (possibly a co-host or interviewer)
And they are the king.
Carissa Veliz
And they are the king. Yeah.
Host (possibly a co-host or interviewer)
The book is Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI. Carissa Véliz, so great to have you.
Host (possibly a co-host or interviewer)
Thank you so much for coming on the show. This was fun.
Carissa Veliz
Thank you so much for having me, Alex. It was great.
Host (possibly a co-host or interviewer)
Awesome. All right, everybody. Thank you so much for listening and watching. We'll see you next time on big technology podcast.
Parkinson's Disease PSA Speaker
Hey, he's here again.
Parkinson's Disease PSA Speaker
Oh, who, hon?
Parkinson's Disease PSA Speaker
Sammy, the puppy I had when I was a kid. This is the second time he's seen Sammy.
Parkinson's Disease PSA Speaker
Could this be related to his Parkinson's? I don't see him, hon, but I know you do. About 50% of people with Parkinson's may experience hallucinations and/or delusions over the course of the disease, seeing things that aren't real and believing things that aren't true. Symptoms generally worsen but are treatable. Learn more at MoreToParkinsons.com and take the screener to see if it's time to start a conversation with your doctor.
Host: Alex Kantrowitz
Guest: Carissa Véliz (Oxford Philosopher, Author of "Prophecy, Prediction, Power, and the Fight for the Future")
Date: April 22, 2026
This episode features a thoughtful, wide-ranging conversation between host Alex Kantrowitz and Carissa Véliz about the power, pitfalls, and pervasiveness of prediction in the age of AI. Anchored in Carissa’s new book, the discussion explores the philosophical and societal implications of living in a world increasingly run by predictive algorithms—from resumes and loans to weather and justice—and asks whether our obsession with prediction is blinding us to deeper issues of fairness, agency, and democracy.
For more on these themes, see Carissa Véliz’s book: "Prophecy, Prediction, Power, and the Fight for the Future.”