C (23:13)
Think about the extreme case: all the agent cares about is how much time and effort he puts into the principal's interest, and the principal wants him to do two things. Now if that's the case, you have to pay him the same at the margin for performance on both dimensions, because otherwise he'll just do the thing that's better paid. You know, if you'll pay me in effect $2 an hour for doing one job and $3 an hour for doing the other, and I don't care what I'm doing, I'm going to spend all my time on the $3-an-hour job. So that means you have to give balanced incentives. And that's true even when it's not this extreme case: there's got to be balance in the incentives.

Now, it's quite possible you'd have very good measures for some things but poor ones for others. So for example, a salesperson might be responsible both for delivering current sales and building customer relations, or for delivering current sales and bringing back knowledge from customers about their needs. You get pretty good measures on the sales part, and on how hard and cleverly the salesperson worked at that job. But measuring how good a job they're doing at building long-term relationships, or at bringing back information from customers (they may not bring any back because there may be nothing to bring), you're going to have very bad, noisy measures there. So you can have biased measures that just don't capture what they're doing, or noisy ones like we talked about before. Or, even worse, measures that are manipulable, where it's possible for the agent to do something that affects the measure even though it doesn't do anything to affect the contribution he's making to the principal's interest. Things like monkeying with the accounting numbers: if you're paid on the accounting numbers, you have every incentive to make your accounting decisions so you look better, even if they don't do anything to advance the shareholders' interests.
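The $2-versus-$3 corner solution can be put in a couple of lines. This is just an illustrative sketch of the talk's example; the function name and the 40-hour figure are my own assumptions.

```python
def best_allocation(hours, rate1, rate2):
    """An agent who cares only about pay, not which task he does,
    puts all his time on whichever task pays more at the margin."""
    if rate1 > rate2:
        return (hours, 0)
    if rate2 > rate1:
        return (0, hours)
    return None  # balanced incentives: any split of time is optimal

print(best_allocation(40, 2, 3))  # (0, 40): all time goes to the $3-an-hour job
print(best_allocation(40, 3, 3))  # None: only equal marginal pay removes the distortion
```

The point of the `None` case is exactly the "balance" requirement: once the marginal rates are equal, the agent no longer has any reason to starve one task to feed the other.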
Another famous example of a manipulable measure is one my colleague Paul Oyer documented: salespeople who were paid a bonus if they reached a quota, and then nothing more after that. So the pay schedule looked like this: no performance pay until they hit their quota, then a jump, and then flat again. It turns out that if you got near the end of the period and you were a long way from making your quota, you'd stop selling and try to save the sales for next period, when you might make it. And of course if you'd already made quota, you'd stop selling as well, because there's no sense wasting the sales; you can use them next period. None of that is in the interest of the firm. The interest of the firm is in having smooth sales, but those are manipulable numbers.

Well, the problem is, if you've got poor measures on some of the tasks, you can't give strong incentives for the well-measured ones and weak incentives for the poorly measured ones, like you'd like to, because you have to give balanced incentives. So you'd either have to give strong balanced incentives, strong for both, and that's just too costly, or you end up giving weak incentives for both. So if you want your sales staff to be building a lot of relationships and bringing back a lot of information, what you have to do is not have them operating just on commission: put them on salary, make them full-time employees, tell them what to do. If, on the other hand, all you care about is sales, then hire an outside distributor, pay them a high commission, and send them out to make your sales numbers. So weak incentives are necessitated by multitasking when not everything has good measures.

The third context is when cooperation is needed. This issue of cooperation is one that I started thinking about with Jonathan Day here and Bengt Holmstrom 10 or 12 years ago, 14 now.
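To make the quota story concrete, here is a minimal sketch; the quota level, bonus size, and sales figures are made-up numbers, not from Oyer's data.

```python
QUOTA, BONUS = 10, 1000  # assumed: $1000 if period sales reach 10, nothing beyond

def payout(sales_by_period):
    """Quota-bonus pay: flat at zero, a jump at the quota, flat again."""
    return sum(BONUS for s in sales_by_period if s >= QUOTA)

# Hold total sales fixed at 14 over two periods. The firm wants them smooth:
print(payout([7, 7]))   # 0    -- smooth sales miss the quota in both periods
print(payout([10, 4]))  # 1000 -- far from quota late? bank sales for next period
print(payout([12, 2]))  # 1000 -- already past quota? extra sales now earn nothing
```

Same 14 sales in every case, yet the bunched patterns pay $1000 more than the smooth one, which is exactly the manipulation the firm doesn't want.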
But if you think about why we have organizations: we have organizations to coordinate and motivate people in the presence of spillovers, situations where what I do has an impact on what you do. The market doesn't typically work so well when there are such externalities, so we take those situations out of the market. Sometimes we regulate them, by the state doing them or the state overseeing them; sometimes we put them inside organizations. Putting them inside organizations doesn't obliterate the need for coordination on these complicated spillovers. And it's often very hard to measure them, because sometimes cooperation will be nothing more than refraining from screwing the other guy over, not behaving too nastily within the organization. And it's really hard to measure how often the guy has not been a bastard. He's laughing at me, so I'm trying to liven it up a bit. He was supposed to laugh, Ermias.

So cooperation is behavior that works to the advantage of others in the organization, that works toward the overall corporate interest. Now, often this can be modeled as an action that's just hard to measure, and then you can use the multitasking framework and talk about initiative (doing your own job) and cooperation (helping other people). But there are other contexts where that modeling isn't quite right. For example, one that really starts with some work of Susan Athey's, but recently has been done by a couple of young German economists, Friebel, I believe, and Raith. So there are multiple agents; think of them as division managers. Each of them has to take an action that builds the performance of his division, and he also has to make an investment decision. And the investment decisions may have spillovers onto the other division: maybe it affects the reputation of the corporation, or has some impact on sales, or whatever.
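The division-manager setup can be sketched with made-up payoffs; the investment options, numbers, and pay weights below are my illustrative assumptions, not from the talk.

```python
# Two hypothetical investment options for a division manager: each pays off
# in his own division and spills over onto the other division.
options = {
    "A": {"own": 10, "other": -8},  # a bit better for me, bad for you
    "B": {"own": 7,  "other": 5},   # a bit worse for me, good for the firm
}

def managers_choice(w_own, w_other):
    """The manager maximizes his pay, a weighted sum of the two measures."""
    return max(options, key=lambda k: w_own * options[k]["own"]
                                      + w_other * options[k]["other"])

# The firm prefers B (joint payoff 12 versus 2), but pay based only on
# own-division cash flow makes the manager pick A:
print(managers_choice(1.0, 0.0))  # A
# Shifting some pay weight onto the other division's (or corporate) results
# restores the cooperative choice:
print(managers_choice(0.5, 0.5))  # B
```

This is the incentive problem the next passage analyzes: with spillovers, a sharp own-division measure rewards exactly the wrong investment.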
The only measure of either agent's performance is something like the cash flow in his division. And the cash flow in his division is going to reflect what he's done, the investment decision he's made, and, if there are any spillovers, the investment decision the other guy's made. So if there were no spillovers, his cash flow would be a measure of his own decisions and actions, and could well be the basis for strong incentives. But if you give him strong incentives to pursue his own interest, you're dissuading him from paying any attention to the other guy's interests, to the extent that they're not perfectly correlated. So if I can make an investment that's good for me and bad for you, I'm just as happy to make that as to make one that's good for me and good for you; and if it's a little bit better for me, I'll take the one that hurts you. The solution there, if there are important spillovers, is to reduce the weight put on your own division's performance and pay more on corporate performance, or equivalently on the other guy's performance, which you'll then be motivated to contribute to by making better investment decisions, because you're getting paid on both. So it's weaker incentives on your own performance.

A former student of mine, now at MIT, named Gustavo Manso has studied inducing people to experiment, and it turns out that weak incentives are often the secret. Here's the situation you should think of. There's some task that has to be done, and there are at least two ways to do it. One of them is the way we've been doing it all along. It's tried and true: we know the chance of success if we do it the old-fashioned way is 0.8. On the other hand, somebody has suggested a new way of doing things, and with this new way we have no experience; we don't know what its chances of paying off are.
But we guess, just up front, that rather than 0.8, they're 0.7. So if all we were going to do is decide which way to do it once, we'd go with the old-fashioned way. But suppose we're going to be carrying out this task repeatedly. Then what we could do is try the new way. And let's just say the mathematics works out so that if the new way succeeds, you revise upward your estimate of its probability of success, and if it fails, you revise it downward; that's standard Bayesian updating. Suppose that if you try it and it succeeds, you then think it'll succeed in the future with probability 0.9. So what could you do? In a repeated context, it might be worthwhile to try the new way. If it works, it then looks better than the old way, and you continue with the new way. You experiment with it: if the experiment succeeds, you adopt the new way. On the other hand, if the new way fails, you think it's really very unlikely to be better than the old way, and you go back to the old way. That's exactly what an experiment is.

So if you're our friend the principal, first of all you have to motivate the agent to work at all, so you've got to keep him from slacking off. And then you have to decide whether you want him to experiment or stick with the tried-and-true method. Well, if you want him to stick with the tried-and-true method, it's exactly what you learned if you ever took agency theory: you just give him strong incentives for success, and that's it. On the other hand, if you want to motivate experimentation, particularly if the agent would prefer to do things the old way (if that's easier for him, if he doesn't get a thrill out of trying something new), then you shouldn't reward him for early successes, or certainly not strongly, because he's more likely to succeed doing things the old way.
If he's done what you didn't want him to do and used the old-fashioned way, remember, you thought that had a 0.8 chance of success; if he did what you wanted him to do, there's only a 0.7 chance of success. So if he succeeded in the first round, that suggests he may not have done what you wanted him to do. So you shouldn't reward him too strongly for early successes. In fact, you may have to pay him more for an early failure than for an early success, if he really wants to do things the old-fashioned way and you really want him to experiment. So that means in the first period you give weaker incentives than you would have otherwise.
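The experimentation numbers above can be checked directly. A minimal sketch, assuming a Beta prior back-solved to match the talk's figures (mean 0.7 that jumps to 0.9 after one success; Beta(0.35, 0.15) does this, my assumption) and, for the inference step, an assumed 50/50 prior on whether the agent experimented:

```python
# Beta(a, b) prior over the new method's success probability, back-solved
# (an assumption) so the mean is 0.7 and rises to 0.9 after one success.
a, b = 0.35, 0.15
prior = a / (a + b)                    # 0.7
after_success = (a + 1) / (a + b + 1)  # 0.9
after_failure = a / (a + b + 1)        # ~0.23, well below the old way's 0.8

OLD = 0.8  # known success probability of the tried-and-true method

def expected_successes(periods, experiment_first):
    """Expected number of successes over `periods` repetitions of the task."""
    if not experiment_first:
        return OLD * periods
    rest = periods - 1
    # The experiment: try the new way once; keep it if it succeeds
    # (0.9 > 0.8), go back to the old way if it fails (0.23 < 0.8).
    return prior + prior * after_success * rest + (1 - prior) * OLD * rest

for T in (1, 2, 3):
    print(T, round(expected_successes(T, False), 2),
             round(expected_successes(T, True), 2))
# With these numbers experimenting loses over one or two periods
# (0.7 < 0.8, and 1.57 < 1.6) but wins once the task repeats three
# or more times (2.44 > 2.4): it pays only because the task repeats.

# The inference point: an early success is (weak) evidence that the agent
# did NOT experiment, since the old way succeeds more often (0.8 vs 0.7).
p_exp = 0.5  # assumed 50/50 prior on the agent having experimented
posterior = prior * p_exp / (prior * p_exp + OLD * (1 - p_exp))
print(round(posterior, 3))  # 0.467 < 0.5
```

So both halves of the argument come out of the arithmetic: experimentation is valuable only in the repeated setting, and an early success genuinely lowers the principal's belief that the agent experimented, which is why early success shouldn't be rewarded too strongly.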