A
In a world that's very uncertain, where even attaching probabilities to risks is often a fantasy, there are two questions you could ask yourself. One is, which strategy will maximize return on investment? The second is, which strategy will give us good enough returns on investment under the widest set of future circumstances in the world? That is to say, you are acknowledging how uncertain the world is and asking which path will be most robust if the really uncertain things happen. And if you decide, when I retire at 65 I'm going to need $2 million worth of assets, what's the safest way to amass $2 million worth of assets, given an uncertain future world? That's a very different question from, what's the way to maximize the assets? Your goal is a good enough result, not the best, and you are willing to sacrifice value in return for security.
B
Welcome to the Work for Humans podcast. This is Dart Lindsley. You may not have heard the term rational choice theory, but it is so pervasive that we don't even notice we're using it as the default standard for smart decision making. We talk about trade-offs and optimization and ROI and risk-adjusted value as if that's what it means to be rational. Over time, though, this way of thinking has stopped being just one tool in our tool belt and has become the default lens through which we see people and organizations and work. Barry Schwartz argues in his most recent book, Choose Wisely, that the decisions we face in real life are a lot more complicated than a hand of poker. Barry's a psychologist and longtime professor at Swarthmore College and UC Berkeley who's written extensively about choice and motivation and work, including his most famous book, The Paradox of Choice. In this episode, we discuss how to take off the blinders of rational choice theory and deal with decisions when values, meaning, and moral judgment are involved. We talk about what happens when models designed for markets and games get applied to work. We explore how quantification can displace judgment, why treating decisions as isolated moments misses the larger story of a life, and how our ideas about rationality have quietly shaped how work is designed. So if you enjoy the show, of course leave a review and subscribe wherever you listen to podcasts. And now I'm very pleased to bring you my conversation with Barry Schwartz. Barry Schwartz, welcome to Work for Humans.
A
It's my pleasure to be back with you, Dart.
B
Your most recent book, Choose Wisely, came at a really fantastic time for me. I wish I'd had it years ago, and I'm going to summarize it this way, because this is the way I kept thinking about it over the week. I kept thinking to myself, a psychologist and a philosopher walk into a bar, and what's the punchline? The punchline is they get in a bar fight with the economists and, in my opinion, win. So Choose Wisely is about how we decide. And it takes on something that I thought was, well, canon, which is rational choice theory and the appropriateness of rational choice theory. So let's start at the beginning: what is rational choice theory? And we're probably going to start saying RCT, just because the book refers to it that way.
A
So it's a simple idea that comes from economics, and it says that when we're faced with a decision, there are two things we have to think about. One, how good will each of the options be? And two, how likely is it that they will be that good? The world is an uncertain place, so you may expect a wonderful meal and be disappointed. And so you've got a value, it's going to be this good. You've got a probability, there's a 50% chance it's going to be this good. You multiply them together and you get what's called your expected utility. And according to rational choice theory, the way we should be making decisions is for any set of options we face, we calculate the expected utility of each option and we choose whichever one has the highest expected utility. It's that simple.
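For readers who want Barry's arithmetic spelled out, here is a minimal sketch of rational choice theory's decision rule. The options, values, and probabilities are invented purely for illustration; nothing here comes from the conversation itself.

```python
# Rational choice theory in miniature: each option has an estimated payoff
# and a probability of actually getting that payoff. Expected utility is
# their product; the theory says to pick the option with the highest EU.
# All names and numbers below are invented for illustration.
options = {
    "fancy restaurant": (10.0, 0.5),  # great meal, but a 50% chance it disappoints
    "reliable diner":   (6.0, 0.9),   # modest meal, almost never disappoints
}

def expected_utility(value, probability):
    return value * probability

best = max(options, key=lambda name: expected_utility(*options[name]))
# fancy restaurant: 10.0 * 0.5 = 5.0; reliable diner: 6.0 * 0.9 = 5.4
```

Note that the rule picks the dependable diner over the higher-value restaurant, which is exactly the kind of value-times-probability trade the gambling examples later in the conversation rely on.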
B
Sounds very reasonable.
A
It sounds very reasonable. And in my field, the study of decision making, what has become clear over the last half century is that however good that theory is as a standard for making rational decisions, it fails as a description, because people make mistakes. Psychologists have spent 50 years cataloging and demonstrating the errors that we make when we're faced with those kinds of decisions. So as a description of how people decide, it fails; as a norm for how people should decide, it remains firmly in place. In fact, to some degree, the descriptive research gets its credibility from having this clear standard of what the right thing to do is. You can say, this is the right way to do it, and look at what these idiots are doing instead. So to some degree, the more research has been done on our failures to act that way, the more legitimacy rational choice theory has gotten as the standard.
B
What's the classic sort of gambling example?
A
It's interesting you say that, because one of the things we argue is that the gambling situation is the classic example. Do I hit or do I stay at a blackjack table? Do I bet with the shooter or against the shooter at the craps table? Do I bet on heads or tails when a coin is flipped? And here, too, you've got two things. How much do I win if I win? How much do I lose if I lose? That's the value. How likely am I to win? How likely am I to lose? You put those two together and you get an expected value, an expected utility, and that's the gamble you should make. It's like the way Daniel Kahneman described it: the gambling situation is the fruit fly of decision making. You study genetics in fruit flies because it's so convenient, and then you take these basic principles and extend them to much more complex genetic mechanisms. But it makes sense to start simple. And what we argue in the book is that rational choice theory, as bad as it is as a description, is even worse as a norm, that the world simply does not allow us to make decisions in that way unless we so artificially limit, constrain, and frame the world that it looks like our decisions are just like gambling decisions. So the book starts with a very simple question. You wake up on a Saturday morning. It's beautiful. And you ask yourself, what should I do today? And we go through a hypothetical, talking to yourself about what the possibilities are. And this is not a complicated situation. It's pretty simple. And there's no way on earth rational choice theory can capture either how we do think about it or how we should think about it, because there are so many possibilities, it's not clear how you can compare them. Should I help my daughter pack up to move? Should I go visit my mother in the nursing home? Should I go work for the volunteer organization that I'm committed to? Should I get ahead of myself in the work week that's facing me? Should I get exercise? Should I read a book?
Should I watch sports? How do you compare these things to one another? And the idea that you can use the word utility as a common standard that lets you compare everything with everything is the crux of the problem. And what makes it seem so reasonable, let me just say, is that we're used to doing this not with utility as our measure, but money as our measure. That's the point of money. A dollar is a dollar, and you can say, how much is this worth to me, and how much is that worth to me? There's a common standard, and we use it all the time. But the notion that this is the way the decisions we face in real life come at us, and that this is the way we should think about them, is the deep problem. And I think it is a very deep problem.
B
It's interesting. So my family has a background with fruit flies. My father was a fruit fly geneticist who advanced the use of fruit flies as a model in biology. The truth is that using the fruit fly the way I think Kahneman did is a false analogy, because a fruit fly is a whole organism. And there's a way in which, when we face a complicated problem full of human choices and we try to trim it down to a math equation, we're essentially trimming off all the wings and the legs and the antennae, and we're not actually talking about the whole decision making space.
A
Right. I'm glad you said that, because at the level of mechanism, that is, the way genes operate and recombine, maybe the fruit fly is a good model. But when you're asking what these genetic transformations do for the well-being of the whole organism, it stops being a good model, because that will almost certainly be context dependent. It isn't the combination of dominant and recessive that determines the outcome. It's the fit between the organism and the environment that determines the outcome. On the other hand, if you want to understand molecular genetics, it's probably a pretty good sample space to be working in, because at least the genetics of it is simple. But translating that into an understanding of how organisms go through their lives and go through generations is quite another story. So I think it's not a bad analogy. And the key assumption is that the micro entities you're studying and generalizing about somehow don't change as the situation increases in complexity. So as things get more complex, you need to add more stuff, but the fundamentals remain the fundamentals, and there's no reason why that is necessarily false. It could be true that the fundamentals remain the fundamentals, but it isn't automatically true. And in these days of artificial intelligence, people have started talking about the notion of emergence. It's an old idea, but I'm glad it's coming back into people's attention: at certain levels of analysis, things emerge that you could not see from lower levels of analysis, which I think is probably as universal a principle as there is. But this desire to be as rigorous as possible gets us to forget that and think that the strategy is to reduce to simpler and simpler and simpler, and then you can just put it all back together with Scotch tape, and that's false.
B
What are some of the steps that are required to turn complex human decisions into gambles? In other words, to turn what is an open system of reality into a closed one that can be turned into a formula?
A
That's the key question. And the language we use to talk about that involves the concept of framing, which is a big deal in decision making research, because often the way a decision is framed affects the choices that people make. And you can mislead people into making bad decisions by framing the options in a certain way. So sensitivity to framing is thought of as a defect. And ideally, you want your decisions unframed so that you can see them as they really are. And our argument is that there's no such thing as a decision that is unframed. Everything is framed. And if you literally were faced with an unframed decision, there's no way you could ever make it. So we depend on frames, and the critical question is, are we using good ones? The gambling casino essentially tells you that everything in the universe except odds and payoffs is irrelevant. What more rigid way to frame a situation is there than that?
B
Hey, everybody. On June 16th, I'll be speaking at one of my very first favorite venues: the Future Talent Summit in Stockholm, Sweden. To get tickets, go to futuretalentsummit.org, that's all one word, and enter my speaker promo code ELEVENFOLD, which is eleven fold, to get a big discount. If you're in the area, I would love to meet you there. That's futuretalentsummit.org, promo code ELEVENFOLD. So the casino frame also tells you that your objective should be self maximization. One of the experiments that was done was asking people to make bets in such a way as to be rational. And I really want to point out that the word rational is loaded, because what's the alternative? Irrational decisions. There's a whole business out there of essentially showing how people make stupid decisions. Freakonomics is an example of that. It's saying, look, we're going to narrow it down to a very simple equation, and then we're going to show how people are not optimizing this simple equation, which is actually simpler than the world. So one of the experiments described somebody who's being set up to maximize the amount of money they would make if they made a particular gamble. And my thought was, who's going to get the money if I don't win it? Could it go to graduate students? Is it something that's going to go into the pocket of the researcher? That might actually be contextual for me, but it's been framed as: your job is to optimize your personal outcome. And that's an assumption, a massive simplification of how people think about making decisions.
A
Absolutely. And so, you know, in this case of what should I do on a Saturday: is this a me question? Is it a we question? There's a sense in which you need to answer that before you start arraying the alternatives. Also, in a typical experiment, the options are given to you, whereas when you're actually deliberating about what to do, new options appear. When you're trying to decide whether to go for a hike or just a gentle walk, maybe there's a third option that hadn't occurred to you that's midway between those two, so you get exercise, and you see beauty, and you don't knock yourself out. So the very process of deliberating adds options or alters options. And that, it seems to me, is part of what it means to be a rational decision maker. You are open to being educated as you go through the process of thinking about what to do. And you're right about that word rational being loaded. It is such an honorific. If it were called self-interested choice theory, what would be the big deal? But rational choice theory means this is the way you should be living your life, this is the way you should be making decisions. Nobody wants to be irrational. That's why it's had such a big impact, I think, and in business schools it's what they teach.
B
That's exactly where I was going to go next, which is how it affects businesses. And I can tell you, having been a decision maker in businesses, having been responsible for portfolios, the question, and there are lots of complicated ways to say it, is what's the ROI? We're going to turn this decision into a return on investment, and we're going to try to focus it down to this particular project's ROI. And I understand why we do it. This is an incredibly complex space, and we would like to arrive at some sort of certainty, and we would like to at least be able to say, I made the decision because, look, a number, you know, this number, to do that. And I can tell you from experience, I call it lying by caveat: I'm going to say this number is true if all these caveats are true, if the market grows by this much, if this grows by this much. But the caveats are so massive that the truth is, the decision was sitting on nothing, really. But we were able to make a number that we could point to.
A
But you know, what happens, at least from my outside perspective, is you make the caveats real by hiring people who are called risk analysts. So, yes, there are all these caveats, but can we attach numbers to them too? And the answer is, you damn well better, otherwise you're going to lose your job. This is another big secret to this. We worship quantification. And there's a lot of research that shows when you give people relatively simple decisions that involve, say, more than one dimension, how good is the food, how expensive is the restaurant? Whichever dimension is quantified dominates the decision. If you give it a 7 on a 10 point scale, you'll give it more weight than if it has seven stars. There's something magical about numbers that even if you don't think it's that important, that it's a pretty restaurant, if that has a number attached to it, it will increase in importance and have more influence on your decision than it should just because it's quantitative. And that protects us, this quantification. It protects us from our own biases. It protects us from being accused of playing favorites. The reason he got a raise and you didn't is that his performance metrics were 20% higher than yours. I wish it were some other way, but I'm just going by the numbers.
B
I mean, I've always argued performance is largely unknowable, but if we can dress it up with enough quantification, it will look like we're being fair, or at least we can say we're being fair.
A
We have a justification that we can point to. And you could start unpacking the justification, whether there's bias hidden in the metric so that you're not really being objective, it's just buried in a metric that looks objective. That's a separate matter, and no doubt a really problematic one, because there often is bias in the metric, but still, the power of quantification. And, you know, I think in the grand scheme of things, this is a reflection of what we sometimes in social science call physics envy. We want our discipline to be like physics. We want to be able to tell you that on March 18, 2072, there will be the next full solar eclipse, and it's going to happen at 8:14 and 30 seconds. And until we can do that, we're just chasing physics. And what it means to chase physics is you try to do exactly what physicists are doing, only you're talking about animate, living, complex, context dependent things and not the motions of subatomic particles.
B
I'm going to make a bridge. I want to start going into what the alternative is, but I just really want to emphasize the degree to which work, and our experience of work, is influenced by these assumptions. In our last conversation, you brought up B.F. Skinner and Skinner boxes, which is this idea that people have no internal life, that we can just see them as a series of inputs and rewards and punishments, and if we just control the inputs, we can get the outputs we want. And there's a way in which all companies have striven to make themselves Skinner boxes.
A
Yep. One of the papers I wrote that I'm proudest of is called Skinnerian Psychology as Factory Psychology. And the argument we made in that paper is that Skinner claimed he was discovering something about nature, a fundamental truth. But what was really happening is that he was describing the workplace of a typical 18th or 19th century working person, which is to say the factory owner creates a Skinner box, and behavior inside that environment looks just like rats and pigeons inside their Skinner boxes. And the question of whether that environment is a natural reflection of the environments that people actually have just goes away, because fish don't know they live in water. And in the industrial age, we didn't know we lived in factories, and that there was another way to organize work and another way to organize life. So what he thought he was discovering about nature was really a pretty good description of an environment that itself had been turned into a Skinner box. Not by Skinner, but by Frederick Winslow Taylor and other people like him.
B
In my recent studies, I've realized that management theory has really worked to get beyond that, but practice is still haunted by it. We still have a lot of practices that are haunted by those ideas. Before we go on to how to choose differently, I think it's important to talk about not just the you-might-choose-wrong problem of RCT, but the moral hazard of rational choice theory.
A
That's complicated, because what a devotee of rational choice theory will say is that when you create your Excel spreadsheet, you can put any set of values in that you want. So that, for example, it isn't just about me winning a gamble. It's about what effect will this have on the people I love? What effect will it have on my family? What will it do to the environment? What will it do to the world? Each of these aspects of the decision gets an importance weight and a value assigned. So it could be a good way to make moral decisions. The fact that in actual practice that rarely happens is a separate matter; in principle it could happen. And you know, the philosopher Peter Singer, a very utilitarian-oriented moral philosopher, keeps writing about widening the circle so that it's not just about you. It isn't even just about you and your family. It isn't even just about you and your country. It isn't even just about you and other human beings. And moral progress is the ever expanding of the circle of concern. And you see this now in this movement called effective altruism, where people are really trying to decide what's the absolutely best thing they can do with their resources to improve collective welfare. And they take it quite seriously. And they think that in principle it ought to be quantifiable. In principle you can say this is the right way to spend your extra money, and this is the right set of political demonstrations to be going to, and so on. So the temptation is still to use rational choice theory, but broaden the scope of things that it takes as relevant. And look, I think that's an improvement, but it still transforms the problem from the real problem we face into a toy problem. And I've had lots of disagreements with people who are very enthusiastic about effective altruism, where I suggest that's what they're doing. And they say, well, what the hell's the alternative to doing that? You want to be as thoughtful as you possibly can be.
And that framework of rational choice theory helps us be as thoughtful as we can possibly be. And I then say, no, no, no, it's not being thoughtful. You're counting, not thinking.
B
Right, right. So let's talk about the alternative to rational choice theory. There's a very nice quote in here, which is that wisdom tells a person not to expect more rigor than the subject matter at hand allows. Which, by the way, is sort of a hazardous thing to say in a book. The standard for academic writing is that for every point you make, you're going to show an experiment that supports that particular point. And that experiment is made on a very local part of the system, in a very fixed way, with all of the other variables taken out. And we're supposed to use that path to be rigorous. But it is poor decision making to actually expect rigor in some cases.
A
I think that's correct. It may seem like a risky thing to say that you shouldn't expect more precision than the situation allows. But it's funny how easily we get trapped into thinking that the push should always be for more precision, for more quantification, that if we can't specify very clearly what we're talking about, we're doing something wrong. And I think the pervasive importance of context, the openness of most of the systems within which we are making decisions, virtually guarantees that such precision will be a false depiction of what we actually face. And the more social the situation is, the truer that is. So in a workplace, you're making decisions about how to do your work, but you're also making decisions about how to organize the work of people you supervise, what kind of feedback to give them, how to get them to work together, whether to get them to work together. What we suggest as the alternative to rational choice theory is not nearly as satisfying. It takes judgment, it takes reflection, it takes what you might describe as having bifocals, looking at the short term and the long term and going back and forth between them. Because sometimes what's going to be a terrific solution in the short term is going to pose serious problems, morale problems or productivity problems, in the long term. So which perspective is relevant? The answer is yes, they're both relevant. And there's no formulaic way to trade them off against one another. It takes judgment. And again, in economics, they say, of course there's a formulaic way. We call it the temporal discount function. How much do we discount next year's profits compared to tomorrow's? There's a function that describes that, and there's a way we ought to be discounting the future. And if we don't discount the future adequately, we'll make bad decisions. So the aspiration is always to make it as precise as possible.
And one of the arguments we make in the book, everybody is familiar nowadays with Excel spreadsheets, is that the spreadsheet does two things. You're trying to decide what job to take, so you list all the things that might matter to you, and you've got all the people who are begging you to come work for them. Lucky you. So this is a helpful thing to do, because it's quite possible that unless you are systematic about this, you will ignore or minimize aspects of the decision that actually are going to be important. So you want to make sure that you're thinking in the large. What's it going to do to my family? What's it going to do to my social life? How about the quality of life? And so on. How interesting is the work? Put it all out there. Good for you. The mistake is to think that you can then put numbers in all the cells and push a button, and out will come the answer to the question, which is the best job for me? The real work is creating the spreadsheet, not filling it out. Filling it out is mostly a self deception, I would argue.
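To make Barry's point concrete, here is a sketch of the "push a button" step he calls self-deception: a weighted-sum job-choice spreadsheet reduced to a few lines. The dimensions, weights, and scores are all invented for illustration; the interesting part is how mechanical the final step is once the frame, the rows and weights, has already been chosen.

```python
# A job-choice "spreadsheet" as a weighted sum: score each job on each
# dimension, weight the dimensions, add them up, pick the maximum.
# All names, weights, and scores here are invented for illustration.
weights = {"pay": 0.4, "family": 0.3, "interest": 0.3}

jobs = {
    "job A": {"pay": 9, "family": 4, "interest": 6},
    "job B": {"pay": 6, "family": 8, "interest": 7},
}

def score(job):
    # The "push a button" step: a mechanical weighted sum over dimensions.
    return sum(weights[d] * jobs[job][d] for d in weights)

best = max(jobs, key=score)
# job A: 0.4*9 + 0.3*4 + 0.3*6 = 6.6; job B: 0.4*6 + 0.3*8 + 0.3*7 = 6.9
```

Notice that everything consequential, which rows exist and what the weights are, happened before this code runs, which is exactly the distinction Barry draws between creating the spreadsheet and filling it out.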
B
I want to go through, and I think this touches upon one of them, I want to go through some of the specific recommendations. One of them is understanding rather than prediction. What does that mean?
A
We have this idea that the point of, say, science is to make true predictions. And that assumes there's a kind of orderliness and regularity to the domain we're studying such that true predictions are possible. The problem, as philosophers have noted, is that for any set of data points you've already collected, there are an indefinitely large number of new data points you might predict, depending on what you think the function is that underlies the data you've got so far. If you think you've got a straight line with a positive slope, you'll make one prediction. If you think you have a curvilinear relationship, you'll make a different prediction. So the data don't lead to a unique prediction; each hypothesis about the underlying relationship between X and Y leads to a different prediction. And unless you understand what's going on between X and Y, you don't know which underlying curve is actually capturing the data you've got so far. So predicting the future is never mechanical. It always depends on some level of understanding of the phenomena you're interested in and the processes you think are driving those phenomena. And the way you understand things is by thinking about them, by trying to relate them to other things that may, on the surface, seem unrelated. The better you are at doing that, the deeper your understanding of the phenomenon, and the more likely it is that you'll be able to make predictions about what's going to happen next. But understanding is not a mechanical process. It's a process that requires reflection.
B
You recommend narrative unity rather than atomized episodes, right?
A
This is an argument, it's more an assertion than an argument, that it would be catastrophic if people started measuring out their lives in coffee spoons, as T.S. Eliot said. The whole emerges; the whole is not merely the sum of the parts. And it is important to ask yourself, when you're facing a job decision, or a vacation decision, or an ending-a-marriage decision, how that decision will affect the sweep of your life. That is, it's important to at least imagine that your life story is a story, and it isn't a bunch of unconnected episodes. You can't just take the deck and shuffle the cards and play out your life in whatever order the cards come up in. You're going somewhere, and you want to be going somewhere. And having a sense of where you're going is what enables you to make sense of the decisions you've made thus far and to be wise about the decisions you're going to make tomorrow. All of that requires holism, not atomism, when it comes to looking at your own life and other people's lives. But, of course, if your focus is on quantification, you want those atoms, you know, that bind and become molecules.
B
I will tell you, this is one of the largest challenges in managing a portfolio of projects and deciding. So, for instance, one of my jobs was to at least strongly influence what large parts of corporations would invest in. Well, there's one way of doing it, which is you take every row of possible investment and you give it an ROI. Now you're already in trouble, and the reason is that different rows are going to be after different things. Some are about reducing risk. Some are about improving customer experience. Some are about directly reducing cost. And so the truth is, any conversion of those to comparable dollars is already a false simplification, because of the caveats. And so you go through and you just figure out, well, these are the best investments we can make right here. But there's another way to do it, and the other way to do it is architectural, which is: as a company, do we want to be more like a Prius, or do we want to be more like a Ferrari, or do we want to be more like a big truck? Because the truth is, the pieces of a vehicle like that are all fit for the overall purpose of the vehicle. And so this architectural approach can be a lot more effective. The challenge there is that for what you want a truck to get done, there are a lot of variables you're trying to optimize for, but they're different from the ones for the Prius.
A
But you know, what this comes down to, to use a Greek word, is asking yourself, what's the telos of our organization? What are we here for? In what way does our existence add value to the world? You know, I've heard people who teach in business school say that that should be the first question you ask when you're thinking about starting a business: what value will this business add to the world? And if the answer to that is nothing, then that ends the argument about starting the business. And my sense in the tech world is that for the early pioneers, the Google people, even Steve Jobs, that was their mission. What value can we add to the world? And from my point of view, Google has transformed the world as much as anything I can think of. Making all of the world's information available to all of the world's people is one hell of an aspiration. And they kind of did it, and then it became a business. So you only want to make some of the world's information available to only some of the world's people, so that they'll spend more time on Google, and they'll look at ads more, and you can charge higher rates for ads. I think this may be when the founders of Google stopped being interested in the whole enterprise, because the vision that inspired them had gotten diluted by the practicalities of running a mega corporation, and they wanted to sort of wash their hands of it and work in the little corners of the enterprise where there was still aspiration for extraordinary accomplishment. It's not an accident, I would argue, that one website stands apart from all the others in my experience, and that is Wikipedia. Whatever its imperfections, and there are plenty, it has never lost sight of its mission. And it is not an accident that Wikipedia is not a profit making organization. Will they get things wrong? Of course they'll get things wrong. But the one thing you can count on is purity of motive. Will there be threats to the purity of motive? Of course there will.
There's no doubt that all the time there are people figuring out how to monetize what Wikipedia does. But somehow, as the tech presence has gotten bigger and bigger, Wikipedia has managed to hold true to its initial mission in a way that no other entity I can think of has. And right now we're in the midst of seeing the same tension appear, and probably be resolved unsatisfactorily, with respect to AI. The monetization temptation is overwhelmingly strong, and they're already making compromises with what they know is the best product they could provide, because of the revenue models that underlie the massive investment of resources in making these large language models work. So I think keeping your telos firmly in mind is really a critical aspect of both starting and maintaining entities. Jimmy Wales started it, and he and I were both giving talks at some meeting, I don't remember which, so we had a chance to talk a little bit. And I said to him, are you worried about succession? How are you making sure that the people who follow you will be as committed to your mission as you've been? And he just sort of swept it away. He was so confident of the purity of motive of the people involved that he thought it would take care of itself, and he didn't need to put anything in place to make sure it took care of itself. And I thought he was being naive, but, you know, that was easily 20 years ago, and it seems to me, so far, so good.
B
This is related to a point you make, which is that maximization is not always best. And to some extent, what we're talking about here is financial maximization. And the assumption of RCT is that the goal of the game is to maximize something, when maybe the goal of the game should not be maximization.
A
There are two problems with maximization. Well, maybe more than two. One is the psychological costs associated with that as an aspiration. And you know, I've written a whole book about the problem of too much choice, and particularly how acute it is when you think the point in every decision is to get the best. It just fills you with self doubt, with confusion, with frustration, with regret, with disappointment. That's one problem. The second problem is that maximizing assumes that whatever it is you're aspiring to can be quantified. Otherwise, how do you know you're maximizing? And sometimes it can't be quantified. Good enough, as opposed to maximizing, is vague enough that you can say this is a good enough outcome without resorting to slavish devotion to quantification, because you're not worried about whether this outcome is better than that one. You're worried about whether this outcome is a good enough outcome. Is a 7% rate of return good enough for our hopes and dreams for this company? And so you can resist false precision and excessive weighting of quantification if what you're looking for is good enough results, because of the inherent vagueness of good enough. And so I think that's a real contribution, because it will inhibit people's temptation to find quantitative answers to every question. But your shareholders won't let you get away with it.
B
Well, that's exactly who I was going to talk about, which is, okay, so people are going to invest in my company and they're going to say, are you going to give us good returns? And we say, they'll be okay. You should still invest in us. We're not going to go crazy with the returns on your investment, but they'll be okay.
A
And, you know, here's the thing. In a world that's very uncertain, where even attaching probabilities to risks is often a fantasy, there are two questions you could ask yourself. One is, which strategy will maximize return on investment, taking uncertainty into account? Second, which strategy will give us good enough returns on investment under the widest set of future circumstances in the world? That is to say, you are acknowledging how uncertain the world is and asking which path will be most robust if the really uncertain things happen. And if you decide, when I retire at 65, I'm going to need $2 million worth of assets to see me and my family through, what's the safest way to amass $2 million worth of assets, given an uncertain future world? That's a very different question from what's the way to maximize the assets. And notice this is not simply adding risk into the equation. Your goal is a good enough result, not the best possible result, and you are willing to sacrifice value in return for security.
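Schwartz's distinction here (maximize expected value versus pick the strategy that is good enough across the widest range of futures) can be made concrete with a small sketch. All the strategy names, scenarios, and return figures below are invented purely for illustration:

```python
# Toy sketch of maximizing vs. "good enough" (satisficing) choice under
# scenario uncertainty. All numbers are invented for illustration only.

# Hypothetical annual returns (%) for two strategies across four futures.
returns = {
    "aggressive":   [25, 12, -3, -10],
    "conservative": [8, 7, 5, 2],
}

GOOD_ENOUGH = 4  # the return we actually need, not the best imaginable

def expected_return(rs):
    # Treats all futures as equally likely -- itself a heroic assumption
    # when, as Schwartz says, attaching probabilities is often a fantasy.
    return sum(rs) / len(rs)

def robustness(rs, threshold):
    # In how many futures does this strategy still clear the bar?
    return sum(1 for r in rs if r >= threshold)

maximizer_pick = max(returns, key=lambda s: expected_return(returns[s]))
satisficer_pick = max(returns, key=lambda s: robustness(returns[s], GOOD_ENOUGH))

print(maximizer_pick)    # aggressive: highest average return (6.0 vs 5.5)
print(satisficer_pick)   # conservative: good enough in 3 of 4 futures, not 2
```

The two questions pick different strategies from the same data: the maximizer chases the highest average, while the satisficer trades away upside for a result that survives more of the futures, which is exactly the value-for-security trade Schwartz describes.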
B
This leads to the question of how to decide when rational choice theory is useful and when it isn't. So just a few weeks ago, I had on the show somebody who has interviewed 180 people over 80 who are working, and some proportion of them are doing it voluntarily. They have enough money, it's not a problem. But there's another large proportion that are doing it because they have no other choice. And one of the things that struck us in the conversation is that some rational choice theory about financial investments early on might have been really helpful. But that's a very mathy kind of problem with probabilities associated with it. We have a lot of data about what the fluctuations in the market are. It's more susceptible to the kinds of solutions rational choice theory offers than a question like, what college should my daughter go to?
A
Yeah, no, absolutely.
B
And so knowing where that line is, when am I leaving the zone of this is a useful way to approach choosing, and when am I moving into one that isn't? Are there key indicators that would lead me to say maybe this is not a math problem?
A
I think there are two things to look for. One is, to use the fancy word we use in the book, when you're contemplating options, are the various outcomes that you're thinking about commensurable with each other, that is to say, evaluatable on a common scale? And arguably, when your focus is on return on investment, there is only one scale. When it is return on your time as a college student, it's ridiculous to think that there's a common scale. Being at a school that has the number one ranked football team in the country will certainly give you some pleasure while you're a student. And how do you compare that pleasure to the fantastic Bio 1 teacher you had as a freshman? What's the metric? Is it smiley faces? Is it how fast your heart beats? What's the metric that says a great Bio 1 lecture is worth a tenth as much as having the number one ranked college football team in the country? It's ridiculous. So the clearer it is that what you're trying to do is make a decision that has aspects that are not comparable to each other, the more you should avoid the rational choice theory framework, which forces you to array everything on the same scale. That's one. The second is when you're assessing probability. How likely is it that if I make this decision, I'll get what I'm expecting? Can you really assess probability in that specific way? When I throw two dice, I know exactly what the probability is that they'll come up seven. There's no seven-ish. It's either a seven or it's not a seven. And we can specify very precisely what the odds are of seven and every other combination. When you're trying to decide whether to go to the beach on a weekend and the weather forecast is that it might rain, is that the same? Well, it really isn't, for two reasons. One, when they say there's a 30% chance of rain, do they really mean that there's a 30% chance of rain? And second, not all rain is the same. You know, if it's a drizzle, it'll have one impact on your trip to the beach. 
If it's a downpour, it'll have a different impact. So you need to be asking, why am I going to the beach? What's my objective? What will give me pleasure and satisfaction? So there's a kind of, what I call radical uncertainty associated with many of the decisions we face in life that a gambling table doesn't capture. And so if you are suspicious about all values being comparable on a common scale, and you are suspicious about being able to actually identify precise probabilities, there is good reason to think that rational choice theory is not the right tool. And I don't think people think about that much. They assume uncertainty is uncertainty, that there aren't different kinds of uncertainty. And that's not right. There are different kinds of uncertainty.
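The dice example marks out the one kind of uncertainty that really is precisely quantifiable: a closed game whose outcome space you can enumerate completely. A short sketch of that enumeration (nothing here is from the book; it just works out the odds Schwartz alludes to):

```python
# The dice case: a fully enumerable outcome space, where probability
# is exact. Contrast with "30% chance of rain", where neither the
# probability nor the event ("rain") is precisely defined.
from fractions import Fraction
from itertools import product

# All 36 equally likely results of throwing two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

# Count the ways the faces sum to seven, and take the exact ratio.
p_seven = Fraction(sum(1 for a, b in outcomes if a + b == 7), len(outcomes))

print(p_seven)  # 1/6 -- there is no "seven-ish" about it
```

No such enumeration exists for the beach decision: the outcome space (drizzle, downpour, what you wanted from the day) is open-ended, which is what Schwartz means by radical uncertainty.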
B
I'll tell you how it really fits with the idea of the narrative, which is one of the questions I think we ask when we are faced with complex decisions. Should I visit my mother this weekend or should I go to the baseball game? To some extent we are asking ourselves, who do I want to know I am afterwards? That's the narrative thing, which is, it's not just what's the story, but what character am I playing in this story? And do I still want to have that character after that decision?
A
That's exactly right. What character am I playing? What character do I think I should be playing? What character do I wish I were playing? All of those things. You know, there is a sense in which every decision we make is a photograph of our character at that moment in time. And the question is, what do you want that photograph to look like? Who do you want to be? Every decision is an existential decision. It is instructing you and the world about who you are. And again, the notion that there's a simple scale of your quality as a human being, and what you want to do is go as high on that scale as you possibly can, is fanciful. And it will change. Some days you'll say, this has to be a me day. I'm exhausted. I've been working hard. I need to take care of myself today. I'm not a selfish person, but that's what I need. And I'll be better as a parent and a friend and a partner if I take care of myself for this one day than if I don't. Well, those are good conversations to have with yourself. They're even better conversations to have with other people, since we tend to get locked into a certain bias, and sometimes other people can see us more clearly than we see ourselves.
B
Kahneman described two different systems for decision making: System 1 and System 2. System 2 looks like rational choice theory. System 1 is more instinctive, more intuitive. Is it possible that System 1 is doing this for us? Is it possible that, to take sort of an evolutionary approach, System 1 evolved in the real world, not the theoretical world, and that it's actually functioning in some of these ways?
A
Oh, no. I think it's quite possible. What we try to do in the book is suggest that the dichotomy is too limiting, because it's instinct, intuition, thoughtlessness on the one hand, and deliberation, calculation, quantification on the other. And what we're trying to suggest is that there's a kind of middle ground where you actually think about your decisions, reflect on your decisions, but you don't do it in a quantitative, limited way. So it's like System 1 with conscious awareness and reflection, weighing various things against one another, deciding how to decide, what's more important than what. None of this will be formulaic, but it also won't be automatic. So I think the framework of rational choice theory is so powerful that it's almost impossible to imagine a thoughtful alternative to rational choice theory. So it's either rational choice theory or it's a knee jerk. And we're trying to say there's a third way.
B
I want to talk about some things that you describe. One of them is idea technology, and the other one is the relationship between rational choice and ideology. And one thing that smacks of ideology to me is the notion that there are smart people who make rational choices, and then there's the hoi polloi who make irrational choices. And I felt that in a lot of the behavioral economic stuff, which is about the benighted people who don't use rational choice theory. But it goes deeper than that in terms of what it assumes about people and what it assumes about what should be. There's an ideology that underpins rational choice theory: assumptions about what should be true, assumptions about who people are. And what's fascinating about that is that there's a frame around RCT, the kind of frame that RCT would say is a bit of a problem. What is that frame?
A
It's incredibly focused on individualistic self interest, and it's incredibly focused on material self interest. And people who are enthusiasts about rational choice theory will say that neither of those is necessary. As I said before, you can widen the set of values that you're aspiring to achieve, you can enlarge the circle of organisms you care about, and all that stuff. So it's not intrinsic to rational choice theory, but it seems just a coincidence that people who use rational choice theory are almost always calculating self interest. And it's ideological in the sense that it starts with Adam Smith. Adam Smith said famously, it is not from the benevolence of the butcher and the baker that we get our daily bread. It is from their pursuit of self interest. So if you have a system, a competitive system of people, each of them acting in their own interests, it's a system that serves everyone. It needs to have certain properties. It needs really to be competitive. You can't have monopolistic power. In fact, you can't have power, period. It needs to be the free exchange of goods and services. But, as one economist once wrote: what does the economist economize on? He asks the question, and his answer is that the economist economizes on love. And what he meant was, that's the really scarce resource, love for one another. So how can we create a system that will serve us well even when there's a shortage of love?
B
Was that Adam Smith?
A
No.
B
Oh, who was that?
A
I can look it up; I don't remember. A very distinguished economist in the early part of the 20th century.
B
Oh, I'm going to have to find that.
A
It's really a stunningly insightful comment, because we economize on the things that are scarce. So the scarcest resource we have is love. So you build a system that will work even when love is scarce. You don't need to economize on love inside the family, but you do need to inside the larger society. So competitive pursuit of self interest produces a relatively humane system in the face of scarcity of love.
B
That's the hypothesis.
A
That's the hypothesis. And that was sort of what Adam Smith was saying. Yeah, it'd be nice if people were nice to one another. It would be nice if people cared about the welfare of other people. But we need a system that will work even when that's not true. And the free market is that system. Now, what I've done in earlier books I wrote a long time ago is argue that the mistake that Adam Smith made is to assume that there were limits to how badly people would treat one another. And he failed to appreciate how if there are limits, it's because of social institutions that essentially tell us what's the right way to behave and what's the wrong way to behave. And he failed to appreciate that the thing about the market is that it corrodes those moral commitments. And so when you need them most, they're no longer there. He thought it was human nature to be basically decent. Well, it's human nature to be basically decent in a particular set of environmental conditions. The free market is not that set of conditions.
B
We've had folks on the show who touched upon some of these points. Yancey Strickler has done something called the Bento box. It's a 2x2: am I focused on me or we, on one axis, and am I focused on now or later, on the other axis. And then when you face any choice, you should ask yourself about all four of those quadrants. So he spoke about that, and it's a very useful frame for this. We had Fred Reichheld on the show, who is a pioneer of customer loyalty research and the inventor of the Net Promoter Score. And his point was, and by the way, he's been very unsatisfied with how the Net Promoter Score has been used. And the reason is people use it to force behavior from employees, as opposed to really understanding what customers want. And his point is all about what feeling should we have toward customers? And he says the feeling we should have toward customers is love. And so you see some of these things bubbling up. And Reichheld, as somebody who really tried to operationalize love, as something that would manifest to some extent in people, but more so in the system of the business. The system of the business should care for the well being of the customer. And also he would point out that all evidence suggests that companies that can manifest love for their customers do wildly better than others. I will tell you where this leads me, which is one of the big challenges, I think, of creating work that people love is that we're arguing that it's a design practice, that work is a design problem. That if we designed it better, we could create work that people actually found rewarding. But the language, the standard of evidence in the design world is a fundamentally different standard of evidence than in the finance world. And so there's this distance between designers who are saying, we're going to figure out what people really want and we're going to try to give it to them. 
And that's a very holistic decision, which is we have to tie a lot of things together to give you what you really want. And then the finance department, which is very dependent upon rational choice theory. And so it's hard to find the conversion kit between those two different standards of evidence.
A
That's right. And I think, at least from my outsider perspective on this, the best you can do is present evidence that when you create an environment where the workforce is in love with its customers, you improve ROI. Now, you can say to the finance guys, I don't operate in your world, but here, let me give you some information that you're comfortable dealing with in your world. And this is the best way to be profitable. The way you do well is by doing good. And you know, I wrote a book about work some years ago, I think we talked about it the last time I was on the show, trying to talk about how people want and get meaning from work. And I tried to suggest in that book that you can get meaning from even pretty menial work. If you're in retail and people come into your store and your attitude is, everybody who walks in the door is an opportunity for me to make a sale, that's one thing. If, however, your attitude is, everybody who comes into the store has a problem, and it's a problem I can help them solve, then at the end of the day, you have made the lives of 10, 20, 30, 50 people marginally better by solving their problems. Because you know what you've got better than they know, and you know what deals with the problem they're trying to solve better than they do. So the aim is not to sell as much shit as you possibly can. The aim is to improve as many lives as you possibly can. Well, that changes the attitude you have when you open up the shop in the morning. Knowing at the end of the day that 50 people will be living better lives than they were before they ran into you is a huge thing. And it will almost certainly redound to the financial benefit of the enterprise if people have that attitude. And it will certainly get them much more eager to get out of bed and go to work every day. So I don't think you have to be doing brain surgery to get meaning out of the work that you do. 
But I'm pretty sure that maximizing ROI is not the way to get satisfaction out of the work that you do.
B
Especially, by the way, ROI for somebody else.
A
Especially ROI for somebody else.
B
So before we close, how's your work?
A
I love it. You know, I've been in this incredibly privileged position for my entire career of essentially never having to do anything that I didn't want to do. I taught in a place where I was free to construct the courses as I saw fit. Students wanted to be in the classroom, and I never, ever forgot how fortunate my situation was, even compared to other academics, where students don't want to be there and they're just out to get a job and they treat every assignment as an imposition. I never experienced that, not for a day. And there aren't many people who can say that.
B
And you're writing books with friends you made? Yes.
A
This book, Choose Wisely: my co-author and I have been the closest of friends for 55 years, even though he's a philosopher and I'm a psychologist. And yes, the philosopher and the psychologist walked into a bar, metaphorically. It was actually a Zoom bar.
B
Yes. Where can people learn more about you and your work?
A
They can just Google me. I don't have much of a social media presence, but if they Google my name, they will find more of me than they ever want to see. All my books are available on Amazon, and I hope people find our conversation somewhat helpful.
B
I will tell you, reading the book, I found it incredibly helpful, because those are issues that I've struggled with as a decision maker, as a leader in companies, my whole career, and I stubbed my toe on enough of them to have the shape of the problem, but not enough to be able to articulate it in a way that I might be able to resolve it. So it was a really, really delightful book. I recommend it to everybody who is a decision maker. It's called Choose Wisely, and so thank you very much for joining us on the show today.
A
It was a total pleasure, Dart. It always is a pleasure to talk to you.
B
Thanks for joining me for another episode of Work for Humans. If you enjoyed this episode, please give us a five star rating wherever you listen to podcasts, and share the show with one person you think would get value from it. Believe it or not, this really helps us grow the show and reach more people who want to build the kind of work that people really want. As always, thank you to my producer Jason Ames at 9th Path Audio for his insights into content and his high standard for quality. Finally, the opinions shared here are my own and not the views of Google or Cisco Systems. Thanks again for listening. See you next time.
Work For Humans
Episode: What Does It Mean to Be Rational at Work?
Guest: Barry Schwartz
Host: Dart Lindsley
Air Date: April 21, 2026
This episode features psychologist and author Barry Schwartz, best known for "The Paradox of Choice," discussing the limits of rational choice theory (RCT) in work and life. Schwartz and host Dart Lindsley explore how RCT—while prevalent in economics, business, and decision-making—often fails to account for the true complexity, moral dimensions, and contextual realities of real-life choices. They discuss alternatives that involve judgment, wisdom, holistic thinking, and design approaches for more meaningful work.
Barry Schwartz urges leaders to see work and decision making as inherently narrative, social, and value-laden processes. Rational choice theory is only one tool among many and must be used with caution and humility. More meaningful work—both for employees and organizations—can be designed with greater attention to wisdom, judgment, and the holistic sweep of a life or mission, rather than narrow, quantified maximization.
Recommended for:
Anyone in leadership, organizational design, or seeking to make wiser, more humane decisions at work or in life.
Notable Book Mentioned:
Choose Wisely by Barry Schwartz (co-authored with a philosopher, reflecting the importance of integrating perspectives)