Transcript
A (0:00)
Good evening and welcome to the London School of Economics. My name is Richard Steinberg and I'm Chair in Operations Research here at the LSE. It is my distinct pleasure to introduce this evening's speaker, Professor Bruce Bueno de Mesquita, whose talk is entitled How to Predict the Future with Game Theory. This event is sponsored by LSE's Department of Management. Let me briefly review the proceedings for this evening. Professor Bueno de Mesquita will speak until around 7:30, at which time he will take questions from the audience. So please hold your questions until then. I should also mention that it is hoped that a podcast of this event will be made available online. Following the Q and A session, Professor Bueno de Mesquita has consented to hold a book signing outside the lecture theatre here. Professor Bueno de Mesquita is the Julius Silver Professor of Politics at New York University and Senior Fellow at Stanford University's Hoover Institution. This evening's event celebrates the publication of Professor Bueno de Mesquita's book, Predictioneer, which is published by the Bodley Head. For those of you who think that there's not much more to game theory than the prisoner's dilemma, I promise you that you are in for a captivating surprise. And now, Professor Bueno de Mesquita on Predictioneer: how to predict the future with game theory.
B (1:34)
Thank you. Well, with all that applause, I'll go on. So I'm going to predict that you're going to be a good and kind audience. And since I am a predictioneer, presumably that's going to turn out to be right. But we will see. So let me see if I can work out how to work things. I don't predict that well. There we go. Okay, so I got it. I'm slow, but I catch on. What I want to do tonight is sketch for you how to go about predicting things. And it will only be a sketch because I don't have that much time. And after I finish sketching things, I'm then going to go over some predictions, some of which will probably make you happy and some of which will probably make you rather unhappy. I'm just a guy who does logic and evidence. I don't do opinions. So let me get started. How do you go about making a decision? What's the process for planning a decision, whether it's in business, in government, in personal life, or what have you? Well, first of all, of course, you've got to work out what it is you want to achieve, what your objectives are. And that is not my domain. That's the domain of people who make decisions. So I work that out in my life, of course. But the kind of modeling I'm going to talk about is not about deciding what to want. It is about deciding what to do in order to get as close to what you want as possible. So analysts look for the impediments, the things that get in the way of achieving what you want. And game theory is a method for assessing what gets in the way in the form of the interests of other people who don't want what you want, who would like to have something else happen in the world. And you can't wish them away. You've got to figure out either how to incentivize them to go along with something close to what you want, or how to make it costly for them to get in your way. Basically, the world reduces to those two simple choices: you give people rewards for doing what you would like, or punishments for not doing so.
We'll try to work out a little bit more carefully what that looks like. Before I do that, I want to talk a little bit about what game theory can do, and very importantly, what it can't do and what it shouldn't do. It's not a panacea, it's not the solution to everything, but it is the solution to a lot of things. One of the most important things that a game theoretic analysis can bring to the table is transparency. If you ask an expert on a problem what will happen, they will express their assessment. And if you ask another expert on the problem what will happen, they too will express an assessment, and it may very well be a different assessment. And if you ask them how they arrived at that conclusion, they may or may not be able to tell you in a satisfactory way, because they haven't written down the logic behind their reasoning process in an explicit way. But you can't solve a game without writing it down. And so the logic has to be transparent. That means that we can argue with the logic. We can question whether those are the right assumptions or we should be assuming something else. And because game theory is about people trying to do what they believe is in their best interest, we can find optimal strategies for people. That's what we all try to do in our lives. However much we may talk about being altruistic and being concerned about the welfare of others, I'm afraid I'm going to be very tough on that. I'm going to claim that we are all very narrowly self-interested. We are interested in the welfare of numero uno, and everything else is gloss. And I will elaborate on that later. Decision making is problematic because it is fraught with uncertainty and with risks. And games, of course, require you to model uncertainty and risk. And so they can help you to sort out what the uncertainties are, what the risks are. And they can help you sort out the possibility of exploiting uncertainty to your advantage.
Now, we have a polite word for exploiting uncertainty. We call it bluffing, which means lying to people. And so in games, we can work out how often you should lie, how much you should lie, how you should lie, what's a good lie, what's not a good lie. One of the ways that we know whether a lie is good or not is, for example, to distinguish between cheap talk and credible commitments. A cheap talk signal, when you disagree with somebody, is a claim that doesn't cost you anything to make and so shouldn't be taken seriously. It is what we call a babbling equilibrium. It's just as if the person was standing there saying, blah, blah, blah, blah, blah. There's no meaning to what they're saying. For example, I teach an introductory undergraduate course in international relations. My students came to class several months ago very excited after the North Koreans had engaged in some nasty behavior on the nuclear front. President Obama gave a speech announcing that there would be dire consequences for the actions that Kim Jong Il and his regime had taken in violating the agreement that they had signed just a couple of years before. And my students came to class and said, wow, that was cheap talk, wasn't it? What possible dire consequences could there be, short of the United States invading North Korea, which wasn't about to happen? President Obama said, we will impose economic sanctions on North Korea. The United States does not trade with North Korea. So what would these economic sanctions be? It was cheap talk. My students understood that what we want to look for is credible signals, costly signals: things that people say that don't just cost you something, but cost them something to say. That's how we know that we can begin to take seriously what they are declaring. And that's one of the things that these sorts of models look for, so that we can work out: is the person just babbling, or is the person likely to be telling the truth?
How much can we raise those costs to find out where their breaking point is? And so forth. Okay, so those are things that game theory is very helpful for. It can help you to engineer outcomes. But we shouldn't get confused: just because you solve a game doesn't mean that you can get what you want. Other people have interests. They're also solving the game, and they have clout. And you can't take that away from them. You can't wish it away from them. If you are dealt lousy cards, you may be able to play those cards optimally, but they're still lousy cards. Kim Jong Il has been dealt lousy cards. He plays them very well. But still, there's just so much that he can do, and there's just so much that anybody can do. Game theory can't make that go away. And there are things that game theory ignores. I say this with a parenthetical remark: game theory ignores emotion, except when people use emotion strategically. Before the talk, we were talking about a colleague of mine at New York University, Steven Brams, who has written on the strategic use of emotion. But most of the time when people think about emotion, they're thinking about raw emotion, reacting at the instant of anger or frustration or whatever. And I'm going to contend that while emotion is very important, it is much, much, much, much, much less important than you think it is. And I hope to offer evidence for that. What shouldn't game theory do? It should not substitute for good judgment. But let's be clear here. What we think of as good judgment is wise decision making. We only know whether a person was wise in their decisions after the fact. That is, if things turned out well, they were wise, but if things turned out badly, they weren't wise. It's hard to know before the fact who has wisdom. And even if we know who has wisdom, maybe because they have a track record of wisdom, there's a big problem with wisdom that game theory doesn't have. The big problem with wisdom is you can't teach it to people.
You can't make somebody else wise just because you are wise. But you can teach people to do rigorous, transparent game theoretic or other forms of analysis. And that means that while you may not be able to substitute for the deep thoughts of a wise person, you don't need to have a wise person. You can't count on having wise people. You can have some good, well trained analysts who could do just as well, maybe in fact even do better. And finally, game theory should not substitute for smart internal debate about issues; no model, no bundle of equations, ever should. But game theory should inform debate. When I talk about Iran, I will illustrate that with a very concrete example. But basically, because game theoretic reasoning, being just a way of thinking about how people interact strategically, is transparent, if you come to a conclusion and my model comes to a different conclusion and we're looking at the same data, we can have a sensible conversation, because we can ask the question: how did you arrive at a decision different from mine? The model I'm going to talk about, for example, regularly disagrees with my opinion about things. Even when I'm the expert who provides it with data, it disagrees with me. And I'm sad to say, because I'm not very wise, it turns out to be right much more often than I do. Okay, so game theory starts with a few very basic assumptions. Isn't it fortunate that the nose is large enough to accommodate the equations there? Planned ahead. So people are assumed to be rationally self-interested. What does that mean? It does not mean that they can foresee all developments. It does not mean that they look at every possible alternative that they could pursue in trying to solve a problem. Indeed, people who would do that, if such people exist, would be irrational, because clearly, when the benefit is exceeded by the cost of continued search, it's no longer rational to keep searching.
Rational people are people who do what they think is in their own best interest. That's a very straightforward condition. How do they determine what's in their interest? They have values, things they want. Those are outside the realm of explanation; in the sort of work that I do, I take those as given. There are things people want, and they have beliefs. They have beliefs, for example, about how other people will react to what they want, how other people will compete with them or cooperate with them. And they choose their actions taking those values and those beliefs into account. Now, those beliefs force people to confront the strategic reality that they face impediments to what they want. It's clear that I know who, among all the people in the world who meet the constitutional requirements to run for President of the United States, has the values that most match my own. But I don't vote for that person. It's me. Nobody wants what I want more than I want what I want. But I know I have no chance of getting elected. So I have to think about who might be next best. Or in my case, I have to get pretty far down, because I generally don't agree with any of the candidates, and find somebody with whom I feel the closest affinity. These are constraints that we have to overcome. So who's rational and who isn't? Is Mother Teresa rational? If you read Predictioneer, you can have the pleasure of seeing me slam Mother Teresa as a narrowly self-interested individual who, after all, could have lived her life like most nuns do, doing good deeds anonymously. But no, she chose to have a branded sari, white with blue trim, so people would recognize her, leather sandals. She did a lot of things to draw attention to herself. Suicide bombers, terrorists: rational? Maybe we'll get questions on that later. I also explain why they are rational and how they are incentive driven. Pretty much, I think, everybody in this room is likely to be rational.
I only know two types of people who I would say are not rational. Two-year-olds, because two-year-olds have not yet formed firm preferences. So one minute they want chocolate ice cream, and as soon as you hand them chocolate ice cream they want strawberry. Okay, you switch to strawberry. No, no, no, I want chocolate. That's not rational, because rationality requires stable preferences. And schizophrenics, because schizophrenics seem wired not to be able to have stable preferences. I don't deny that two-year-olds exist and I don't deny that schizophrenics exist. But for the problems that I study, it's not likely that two-year-olds or schizophrenics get to make decisions. So pretty much everybody in the world that I study, I'm going to say, is rational. Okay, how do we go about modeling a problem? There are immediate problems that people have in thinking about issues. We know that there are people who influence issues. For example, there is Gordon Brown, or the CEO of a corporation. These are people with a lot of say; they have a lot of influence. Let's take Gordon Brown, let's take President Obama, either one of them. They need to formulate policy towards Iran's nuclear program. Let's face it, I mean no offense, no disrespect to either my president or your prime minister: they don't know much about Iran. They probably can find it on a map, and maybe they know the difference between Shia and Sunni Islam. But these are not experts on Iran. So they have advisors, they have a foreign minister, they have various people who focus on national security issues. And most of the senior people who speak to the prime minister about Iran, let's be honest, they don't know a lot about Iran either. They have advisors. Those advisors probably know something about Iran. So when we think about whoever it is whose decisions these are, we need not just focus on the key decision makers, something that most people do for good reasons, which I will come to.
We need to focus on everybody who will try to shape the decision. The decision makers, their advisors, lobbyists, interest groups, people who will organize and demonstrate on the streets. Anybody who tries to shape decisions should be paid attention to. Now, if we have a very simple problem with just five decision makers, Harry, Jane, Sally, George and John, then Harry wants to think about the best way to interact with Sally and with George and with Jane and with John: what should I do to try to persuade them? And each of them is thinking about that, about the other four. And Harry probably also is thinking, I would like to know what Jane is saying to Sally, George and John, because it might be that Jane is forming a coalition with those people. That would be a problem for me. And Harry probably even thinks, I wouldn't mind knowing what Jane thinks Sally is saying to George and John, and so forth. So pretty quickly the problem gets to be pretty complicated. As a matter of fact, although I can't draw enough of them there, with just five decision makers there are potentially as many as 120 interactions, five factorial, that are interesting to know about. Unfortunately, when I deal with big problems, my computer is too slow to deal with the factorial. So in this particular case I'd want to know about 60. Now suppose we move from five decision makers to 10. That just doubled the number of people who could interact, but the number of interactions has gone from 120 to 3.6 million. Here is where the comparative advantage of a computer model comes in. A smart person probably can keep track of 120 interactions in their head. Nobody can keep track of 3.6 million. Now, we don't really need to know all 3.6 million. I'd like to know about 5,760 of those. You can't keep track of that either, however. And most important problems in the world involve many, many more influencers than just 10. So the number is exploding. So what do real decision makers do?
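The interaction counts quoted above are plain factorial arithmetic and can be checked directly. A minimal sketch in Python (the reduced figures of 60 and 5,760 are the speaker's own model-specific counts, so they are not derived here):

```python
from math import factorial

# With n decision makers, the talk counts up to n! potentially
# interesting orderings of who talks to whom about whom.
assert factorial(5) == 120          # five players: "as many as 120 interactions"
assert factorial(10) == 3_628_800   # ten players: "3.6 million"

# Doubling the players from 5 to 10 multiplies the count by 30,240.
print(factorial(10) // factorial(5))  # → 30240
```

The point of the example is the explosive growth: twice the players yields not twice but tens of thousands of times the interactions to track.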
They take intellectual shortcuts. They say, well, yes, there are these 40 people who are trying to influence this decision, but it's these six who get to make it; we really should pay attention to them. That's where the influence lies. And much of the time that'll be right. They do pretty well. But a lot of the time it won't be right, because those people are taking advice and being shaped by the views of other people who are being discounted, who are being overlooked. The computer doesn't have to overlook them. It's not as smart as we are, but it has close to perfect memory. It doesn't sleep, it has no union. It will work 24 hours a day if you ask it to. No coffee breaks, no lunch break. Just crunch the numbers, crunch the numbers, so we can keep track of all of these interactions. And that means that we can look at a much more nuanced level of decision making than real decision makers often are able to do. Okay, we now get to a little bit of academic stuff. I apologize, but this is, after all, a university, so I thought I should at least very briefly show you that there is actual stuff behind this. That ugly picture is the extensive form of one little piece of the game for one stage. One of the big differences between modeling the world to predict the future and engineer it, and sitting down and writing pure theory models, is that in a pure theory model you start out and you assume either the game will be played once, or it will be played twice, so you can work backwards to what you should do now, or it will be played an infinite number of times. Infinity is a wonderful thing because it allows you to take advantage of all sorts of theorems about convergent number series and so forth. Great. But in the real world, when we play real games over real policy matters, whether in business or in government, we face the serious problem that we don't know how long the game will go on.
When we try to put together a merger or an acquisition, we know that there will be conversations between the two sides, but we don't know how many conversations. When we try to resolve nuclear issues with Iran or North Korea or what have you, we know that there will be many discussions, but we don't know how many. So we need some way of modeling that. So here I have a semi-myopic, semi-short-sighted game where people can only look one move ahead. This is one stage of the game. It will run for as many stages as the model concludes will be played. This is one stage of one game out of 16 times (n squared minus n) games, n being the number of players, that I'm going to solve to analyze a problem. Why 16 times? Because there's uncertainty. In this model, the uncertainty is on two dimensions for every player. I don't know, when I start to interact with you, whether you're the kind of person who would like to settle this dispute between us by negotiating, or maybe you're the kind of person who thinks, you know, if I punch Bruce in the nose, he'll see the light and let me have what I want. So I don't know if you're a hawk or a dove. You also don't know that about me. I also don't know, if I punch you in the nose to try to get you to do what I want, whether you're the kind of person who will throw up your hands and say, oh, you're serious about what you're demanding, I give in, or whether you'll punch me back, you'll retaliate. And I don't know that about you. So we have four degrees of uncertainty. I'm uncertain about whether you are a hawk or a dove, a retaliator or not, and you don't know that about me. That's 16 different combinations of types, or beliefs that we can have, that we have to solve. So it very quickly gets to be a complicated problem. So I solve a model that looks something like that. Why in the world should anybody believe any claim I make about the value of such a model? Well, there is a track record. How often is this model right?
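The game count described above can be spelled out in a few lines: two binary uncertainty dimensions per player give 2^4 = 16 belief combinations for each pair, and there are n² − n ordered pairs of players. A sketch (the function name is mine, not the model's):

```python
# Each side of a pair is uncertain about the other on two binary
# dimensions (hawk/dove, retaliator/not), so each pair of players
# has 2**4 = 16 type combinations to solve for, and with n players
# there are n*(n-1) = n**2 - n ordered pairs.
def games_to_solve(n: int) -> int:
    return 16 * (n * n - n)

assert 2 ** 4 == 16               # four degrees of uncertainty
assert games_to_solve(5) == 320   # five players
assert games_to_solve(10) == 1440 # ten players
```

Note that this count grows only quadratically in the number of players, which is what makes it computationally tractable, unlike the factorial count of all possible interaction orderings.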
It is said to be right 90% of the time. Who makes this claim? I offer three sources. The Central Intelligence Agency in the United States has a declassified evaluation of the accuracy of this model applied to several thousand cases. They've concluded it's right about 90% of the time. You may not like the CIA. The Culinary Institute of America? They're very nice. Oh, different CIA. How about academics? There's an article in the British Journal of Political Science, 1996 I think, that puts the accuracy rate also at about 90%. There's an article by journalists who have evaluated prior predictions that I've made in print; they also put it at about 90%. And I've done something obnoxious to the naysayers out there who don't believe that game theory can help solve real world problems. The obnoxious thing I've done, over the last 30 years that I've been doing this, is that from time to time, though it's not my main academic work, I publish peer reviewed papers in journals making predictions about things that have not yet happened, but are big, important things, so people can look at the record after the fact, as the academics have done, and see whether the predictions were right. There's a chapter in Predictioneer called Dare to be Embarrassed. And this is what I invite all the people who think that they have a better way of predicting to do: dare to be embarrassed. It is incredibly easy to fit a bundle of facts to a known outcome. I can write down a statistical model to get really good fit if I know the value of the dependent variable. I can write down a case study to give a wonderful explanation of any event if I know how the event turned out. So what I invite people to do is do that when you don't know how it's turned out yet. That's a real test. That's hard. And so I obnoxiously am willing to do that. Okay, so a little bit of embarrassment here, but what the heck, might as well brag a little bit.
So former Director of Central Intelligence James Woolsey says you shouldn't miss this if you care about understanding how decisions are made. Richard Lapthorne, chairman of Cable and Wireless over here, says there's nothing shimmy-shammy or flip-flop about it; it has intellectual rigor. No American would say that, by the way. That's a wonderful statement. Kenneth Arrow and Roger Myerson, both Nobel laureates in economics, and Roger a game theorist, say very nice things. So there's some reason to think, if you don't like statistical evidence like 90%, okay, we go with testimonials. We've got some pretty fancy people who say this works. All right, one last little bit of academia here. Boring, but hard evidence. There's a table of some tests. You can see what the median error rate is with the game in Predictioneer compared to some standard methods of predicting, median-voter and mean-voter theorems; the model greatly outperforms them. All right, I've bored you enough with that. What do you need to know to make successful predictions and to engineer outcomes in the world? It turns out you don't need to know a whole lot. You only need to know a little bit. You need to know who has a stake in shaping a decision. That's the influencers, the lobbyists, the interest groups, the decision makers. You need to know who they are. You need to make a list of them. And what do you need to know about them? You need to know four numbers, only these four numbers. What do they say they want? Not what in their heart of hearts they want; there's no way to know that. But what they say they want is a strategically chosen value. They made a calculation about how far out on the limb they should go, which is going to reflect a lot of things that we can work with about their characteristics. So: what do they say they want? Which means we have to define an issue or set of issues. Issues are things that require actual decisions. How much do they prioritize the issue you're looking at? How important is it to them?
How willing are they to drop what they're doing when the issue comes up and attend to it, rather than something else that's on their plate? How much clout could they exercise if they chose to? How good are the cards that they're holding? And how resolved are they? I make a distinction between somebody who values reaching an agreement even if it's not the outcome they want, and somebody who is resolute in sticking to their position, even if it means not coming to an agreement and being defeated. Let me illustrate that with a very quick example. In my consulting life, I do a lot of work on litigation. Consider a mediator. A mediator doesn't care whether the plaintiffs or the defendants prevail. A mediator, a self-interested individual, cares about shaping an agreement, because the next job for the mediator depends on the mediator being able to establish: I am successful at resolving disputes. They'll take any outcome. They don't care what the outcome is. They just want to figure out, what can I get these other people to agree to? The plaintiff and the defendant typically are pretty resolved. Not completely, because they want to settle the case, but they want to settle it on their terms, if possible. So the mediator says, I'll go with anything that works; the other sides are more resolved. Okay, if we have those four variables, what do people say they want, how influential could they be, how focused are they, how resolved are they, and we have those numerically, then we can calculate, with the game structure, what their choices are, what their chances of succeeding or failing are in different actions, what their values are based on their choices of position, and what their beliefs are. And if we can work that out, then we can predict and engineer their behavior. Okay, so let me be obnoxious again. Let's notice what I have not said you need to know, because I'm claiming 90% accuracy by knowing these things, and I'm reporting that other people attribute 90% accuracy. I have not mentioned culture.
I have not mentioned history and emotion and so forth. All of that stuff is great. Love it. I'm trained as a South Asianist. I speak Urdu, poorly, and I read and write Urdu a little bit. I've been an area specialist. I know what it's like. All that expertise is great at getting you to understand the information that a model like this needs. But 90% accuracy comes without knowing that stuff; that stuff is fed into shaping what the data look like, but however you got there, you got there. Think about playing chess. If you walk in on two people playing a game of chess and you look at the board, you can pretty quickly work out, for whoever has the next move, what's a good move for that person to make. You don't know the history of the game. You don't know how the board got the way it is. You don't know their culture. You know they both want to win the game. And you're looking at the board and, from this moment forward, what's the best move? That's the ball game. Okay, so where can you get the kind of information that I'm talking about? You can get it basically from two sources. If you are an expert on a problem, you know this information. Indeed, think about it: how could you be an expert and not know this? Notice I'm not going to ask an expert, what do you think is going to happen? This is not some Delphi method. I don't even ask myself what I think is going to happen. Just four numbers about each player.
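The four numbers per player can be pictured as a small data record. A hypothetical sketch of the inputs, using the litigation example from the talk; the field names and numeric scales are illustrative assumptions, not the model's actual data format:

```python
from dataclasses import dataclass

@dataclass
class Influencer:
    """One stakeholder, described by the four numbers the talk lists.

    Scales are illustrative guesses, not the model's own conventions.
    """
    name: str
    position: float  # what they say they want, on a defined issue scale (0-100)
    salience: float  # how much they prioritize this issue (0-100)
    clout: float     # influence they could exercise if they chose to (0-100)
    resolve: float   # 0 = will take any deal, 100 = stick to position at all costs

# Made-up numbers for the mediation example: the mediator will accept
# any agreement, while plaintiff and defendant are far more resolved.
players = [
    Influencer("Mediator",  position=50, salience=90, clout=15, resolve=5),
    Influencer("Plaintiff", position=90, salience=80, clout=40, resolve=70),
    Influencer("Defendant", position=10, salience=85, clout=45, resolve=70),
]
```

Whatever the representation, the claim in the talk is that this per-player table, position, salience, clout, resolve, is the entire data requirement for the model.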
