
Sean Carroll
Hello everyone and welcome to the Mindscape Podcast.
I'm your host, Sean Carroll.
A few years ago, back when we still lived in LA, we had a summer project, my wife Jennifer and I. We took boating lessons, climbing on a powerboat back in Marina del Rey, and spent a few hours tootling around learning to, like, park the boat, tie it to the dock, all these things. I've forgotten everything by now. I don't know any of the nautical terms anymore, but there was a moment when, if there was a disaster on the boat, I could help you bring it back to shore and tie it up to the dock. One of the interesting things was, of course, what you do when you're out there on the water and there's another boat that is on a collision course with you. Typically, you don't have direct communication with the other boat. You're not on the radio. You can't just say, hey, I'm going to do this. You need to have some rules about how to behave in such a way that the two boats don't hit each other. And there are such rules. If you're literally coming head on, then you're supposed to turn to the right. You're supposed to change speed and direction in a decisive way, so the other boat can read your implicit boat language.
I guess the point is it works very well, but the reason it works is not only because everyone, the pilots of both boats, knows the same rules, but because they know that each other knows the rules, right? So if I'm supposed to veer my boat to the right, that works because both boaters know that they're going to veer to the right, and they know the other one is going to veer to the right. So there's a coordination between them, and everyone is perfectly safe. This is an example of what philosophers and game theorists call common knowledge. As we'll talk about in the podcast, it's a slightly misleading term. It doesn't just mean knowledge that lots of people have. It means knowledge that lots of people have, and they all know each other has. So there's sort of an infinite regress: I know that you know it, and you know that I know you know it, and I know that you know that I know you know it, et cetera, et cetera.
And that's why philosophers love this kind of thing.
It also leads to very interesting mathematical results that you can prove in the context of Bayesian reasoning in the presence of common knowledge. There's a very famous theorem called the Aumann agreement theorem that says, roughly speaking, that if you have two perfectly rational agents with common shared priors about a set of claims that could be true or false in the world, prior probabilities, prior credences, and they have some different data, so they reach different posterior probabilities, they've updated their credences differently, but then they talk to each other and just tell each other what their probabilities are after they got all this data, and they know that each other are perfectly rational, then they should instantly come to agreement. They should basically update their own priors on the basis of the fact that this other perfectly rational person has updated their priors. And there is a right place to come to. Instantly is an exaggeration. No part of the theorem says it has to be instant. There can be give and take. But according to Aumann, if you're both perfectly rational and you start from the same beliefs, you should not be able to agree to disagree. You shouldn't even be able to just disagree and maintain the fiction that both you and the other person are perfectly rational. It seems that we do this all the time, though, right?
So it's an interesting question as to why people fall short of the assumptions of Aumann's theorem. Anyway, this whole collection of ideas about common knowledge is the subject of the new book by today's podcast guest, Steven Pinker, who presumably needs no introduction. As we talk about at the very beginning of the podcast, it's part of his overall project of better understanding human behavior. You know, humans, bless their hearts, are not perfectly rational creatures, but what exactly are the ways in which they fail to be rational? And especially, I'm becoming increasingly impressed with the importance of the social aspect of how we both really do think rationally, but also how we fall short of being rational. It's in dealing with other people that a lot of both our pros and cons come into play as thinking, cognitive creatures.
So this is an exploration of one aspect of that. Let's go. Steven Pinker, welcome to the Mindscape podcast.
Steven Pinker
Thank you.
Sean Carroll
You have a book coming out called When Everyone Knows That Everyone Knows, about common knowledge. And it's a great topic, I think, but it's, I guess, not what I would have expected your next book to be. I get it now that I've looked at the book, why you did it. But let's start by putting it into the context of a bigger project. I mean, do you think of yourself as having a big project with all of your technical work and also your books?
Steven Pinker
I do. I'm interested in human nature, what makes us tick, and all of the implications of how we understand human nature. I'm trained as a cognitive psychologist, and so the subject matter is how people think. And so how people think about how people think about how people think is in some ways a natural extension. It's also an extension that came about in particular through my work on, my interest in, language, where one of the basic facts about language, known in linguistics for many decades, is that even after we've worked out what all the rules of grammar are and what all the meanings of words are, and there could be an algorithm that could deduce the meaning of a sentence from the meaning of its parts and how they're arranged according to these grammatical algorithms, in practice, people don't mean what they say. They beat around the bush. They use euphemism. They use innuendo. If you could pass the salt, that would be awesome. The meaning of that is not, if you could pass the salt, that would be awesome. The meaning is, give me the salt. Or: nice story you got there; would be a real shame if something happened to it. Do you want to come up and see my etchings? Gee, officer, is there some way we could settle this ticket here without going to court and doing all that paperwork? We're counting on you to show leadership in our campaign for the future. You've probably heard that at fundraising dinners. So in all of these examples, I mean, one of the reasons that it took so long to have AI understand language is that if you simply give it the algorithms for figuring out who did what to whom based on the rules of grammar and the meanings of words, it will misjudge people's intentions. If you say to a chatbot, can you tell me how to get to Harvard Square from here, literally, it would say, yes, I can tell you how to get to Harvard Square from here. But that's not what the user wants. He just wants it to give the answer.
Anyway, the puzzle that I raised in a previous book, The Stuff of Thought, which treats language as a window into human nature: it had a chapter that I called the Games People Play, that is, all of the rituals that we go through to avoid saying exactly what we mean in so many words. The solution that I proposed there, which I then built on in my own empirical research in cognitive and social psychology, and then in a chapter called Weasel Words in the new book, is this: what's the difference between an innuendo that everyone understands and blurting it out? And I say the difference is generating common knowledge. That is, if he says to her, would you like to come up for coffee, and she says no, well, she's a grown woman, she knows what "would you like to come up for coffee?" means; there's no plausible deniability of the intention. And he's a grown-up, he knows what she just turned down. But does he know that she knows that he knows? I mean, he could still think, well, maybe she thinks I'm dense. And she could think, well, maybe he thinks I'm naive. And so even though there's no plausible deniability of the message, there's plausible deniability of common knowledge of the message. There's the additional claim that our social relationships are ratified by common knowledge. That is, two people are friends, and each one knows that the other one knows that they're friends. Two people are lovers. Two people are in a position of authority and deference. Two people are transaction partners. All of these everyday social relationships exist because each party knows that the other one knows that they exist. We often try to avoid common knowledge in order to preserve the relationship that we have, so that we don't threaten it, but we want to get the message across anyway. This is a long-winded answer to a simple question, why did you write this book, and that was the long answer.
The short answer is that my interest in communication and language led to me stumbling on this very rich concept of common knowledge. It had been explored by logicians, philosophers, economists, game theorists. There was a lot in there, and so it was worth the book. So I wrote the book.
Sean Carroll
Well, and I think it's another example, just for me as the physicist doing a podcast, of a message that comes across over and over again. I think we've all been told in various ways that human beings are less rational than we like to think we are. We have biases and things like that. But what I'm impressed by is how many people are telling me the ways in which human beings reason and communicate and talk are so very, very social. They're not just things we would have invented if we were on a desert island all by ourselves. And common knowledge sounds like a very relevant example of that.
Steven Pinker
Well, indeed. Common knowledge, one could argue, and I do argue, is the reason that we can be social in the first place, namely that common knowledge is necessary for conventions. Driving on the right or driving on the left, respecting a leader or a department chairman or an expert, respecting paper currency. You know, what's the value of a green piece of paper? The value is that I know that other people treat it as having value, which they only know because they in turn know that other people treat it as having value. So all of these means of being social, conventions, but also, as I mentioned before, our informal social relationships, and actions that we cooperate on, where we accomplish something collectively that we couldn't accomplish individually, they depend on common knowledge, on being on the same page. And that, I suggest, is why we evolved language. Language probably co-evolved with sociality. Language makes a lot of social coordination possible, and language depends on social coordination. You have to be in a cooperative relationship to exchange words in the first place.
Sean Carroll
And this reminds me: I recently did a podcast with your Harvard colleague Cass Sunstein. He wrote a book on liberalism, and he had to spend the first five minutes explaining what he means by the word liberalism. So we're batting around this idea called common knowledge, but it's not simultaneous knowledge. It's a little bit deeper than that.
Steven Pinker
Yes, although a simultaneous kind of announcement, a revelation event, is the quickest way to generate common knowledge. And it resolves something of a paradox: if you need common knowledge to coordinate, to be on the same page, and common knowledge literally consists of the state where I know something, you know it, I know that you know it, you know that I know that you know it, which makes your head hurt, how do people ever get the common knowledge that they need to coordinate? The answer is that if something is public, conspicuous, self-evident, out there, that can generate common knowledge in a stroke. Not always, but that's the surest way to do it. In general, with words, every word is a convention. Shakespeare said a rose by any other name would smell as sweet. But we can use the word rose to convey the concept of a rose because everyone follows that convention. We can count on it. When we learn the word rose, we don't then have to poll every person we meet as to whether they understand it the same way. That's just a tacit assumption that kids have to make in order to learn to speak, and that we have to make in order to use language. Sometimes, though, as in the case of what do you actually mean by liberalism, it's not foolproof, especially when it comes to abstract, esoteric concepts, concepts whose common understanding may be relative to the community in which they're communicated, or cases in which the common understanding changes. Language changes all the time. No one decides, no one legislates the meaning of words. It's a kind of grassroots phenomenon where, if people start interpreting a word or using a word differently, that is the meaning of the word. The meaning of the word is common knowledge of what it means. In this case you didn't have common knowledge with Cass Sunstein, and so you had to stipulate it in so many words. You asked him, hey, what do you mean by liberalism?
And he said, my definition of liberalism is blah, blah, blah, blah, blah. Sometimes we have to do that. That's not the typical way in which we use words.
Sean Carroll
And you'll be unsurprised to learn that on social media, various people reacted to the podcast episode just on the basis of the title, without actually listening to the definition of what the words were.
Steven Pinker
Well, liberalism, as we know, has different meanings on different sides of the Atlantic.
Sean Carroll
And yeah, he counts Ronald Reagan as a liberal in his definition, so you have to explain why that is; it rubs people the wrong way. But I guess, I mean, maybe simultaneous was the wrong word. I'm just trying to highlight the definition so that we're super duper clear for the audience. It's not about everyone knowing something. It's about everyone knowing that everyone knows something, etc.
Steven Pinker
That's what the book is about, that difference. That is, universal private knowledge is not the same as common knowledge, at least in this technical sense of common knowledge. Now you and I, right now, as with you and Cass, have to clarify what this specialized meaning of the word refers to. When I use common knowledge, I didn't invent this usage. In fact, I don't even like this usage, but I'm kind of stuck with it. In the technical sense in which philosophers, logicians, game theorists, and economists use the term, it refers to the case where not only does everyone know something, but everyone knows that everyone knows that everyone knows it.
Sean Carroll
And so let's get into the psychological aspects, or cognitive science aspects, of this. This is, I guess, your home turf. How do we know that some knowledge that we have is common knowledge, both sort of informally and rigorously? Is it even possible to know all those levels of I know that they know that they know that I know?
Steven Pinker
Well, I have a chapter in the book on that very topic, called Reading the Mind of a Mind Reader. As I hinted at earlier in our conversation, most of the time common knowledge is granted by a conspicuous or self-evident event: something that happens in a public place where you not only see it, but you see everyone else seeing it, and they can see you seeing it, or something that's blurted out within earshot of everyone else. Something obvious and conspicuous; that's the typical route to common knowledge. But we can, in some circumstances, engage in the process that I call recursive mentalizing. To mentalize means to get inside someone's head. To recursively mentalize means to get inside the head of someone who's trying to get inside your head or someone else's head. So sometimes you think, oh my goodness, he's probably thinking that she's probably thinking such-and-such, and you carry that to the limit, and you've got common knowledge. An example would be a rumor that a bank might be in financial trouble. And so you think, well, gee, if I had reason to think that, probably other people do, and they're probably thinking that other people do, and they're going to withdraw their money because they're afraid that other people will withdraw their money, if only out of fear that still other people will withdraw their money. I'd better withdraw my money while there's still money to withdraw, because the bank can't cover the deposits of everyone all at the same time. And so you get a bank run. The bank run didn't begin with a conspicuous signal that the bank is in trouble; it comes from an interplay between some bit of news that leaks out and what you then extrapolate about what other people might think. Bank runs don't happen very often anymore, although there was one a few years ago at Silicon Valley Bank that got a lot of attention.
By the way, the reason that banks don't suffer from runs anymore is a solution to the problem of bank runs, which are generated by common expectation, that is, people worrying about other people worrying about other people worrying. Roosevelt, in the midst of the Great Depression, with its cascading series of bank runs, first declared a bank holiday where no one could withdraw anything. That was kind of a nuisance, but it was really a good thing, because you didn't have to worry about other people withdrawing their money. And then came federal deposit insurance, where a bank has a big gold seal emblazoned on its window that says our deposits are insured. The purpose of that seal is not just to reassure people that their deposits are insured, but to reassure them that other people know that they're insured, so it's less likely that the bank will fail. Before there was deposit insurance, Roosevelt's solution to the problem of bank runs, banks would often flaunt their assets with conspicuous opulence. Even in small towns, the banks were made of marble. They had gold lettering. They had spacious lobbies. This was considered something of an insult by many working people; there's an old folk song from the Weavers sardonically singing that the banks are made of marble. But the banks weren't just showing off to flaunt their wealth and insult everyday miners and farmers. They were trying to generate the common knowledge that we have enough assets that you don't have to worry about your deposits evaporating because everyone else withdraws their money before you do. Anyway, this is a bit of a digression on why, in general, we don't have that many bank runs anymore. But we do have hoarding, such as during COVID, when people hoarded toilet paper because they thought there'd be a shortage of toilet paper, which they then caused by hoarding the toilet paper, even though there hadn't been a shortage in the first place.
It's another case of common expectation, where we really do engage in recursive mentalizing. No one ever said, go out and buy toilet paper, it's in short supply. People just had to picture, in their mind's eye, other people grabbing toilet paper because they were worried about it. And then that snowballed into the common knowledge that there's a shortage.
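[Editor's note: the bank-run dynamic described here, where each depositor acts on expectations about other depositors' expectations, can be sketched with a simple threshold-cascade model. This is an illustrative sketch, not a model from the book; the anxiety thresholds and the size of the initial rumor are invented assumptions.]

```python
# A toy threshold-cascade model of a bank run (illustrative assumptions,
# not data): each depositor withdraws once the fraction of depositors
# already withdrawing reaches their personal anxiety threshold.

def run_cascade(thresholds, seed_fraction):
    """Iterate withdrawals to a fixed point, starting from a small
    seed of worried depositors, and return the final fraction."""
    n = len(thresholds)
    withdrawing = seed_fraction
    while True:
        # Everyone whose threshold is met by the current level joins in.
        new = sum(1 for t in thresholds if t <= withdrawing) / n
        if new == withdrawing:
            return withdrawing
        withdrawing = new

# A bank whose depositors span every anxiety level: a 1% rumor
# cascades all the way to a full run.
nervous = [i / 100 for i in range(100)]
print(run_cascade(nervous, seed_fraction=0.01))  # -> 1.0

# A bank whose depositors all need to see half the bank withdrawing
# before they panic: the same rumor fizzles out.
calm = [0.5] * 100
print(run_cascade(calm, seed_fraction=0.01))  # -> 0.0
```

The contrast mirrors the gold seal on the window: deposit insurance effectively raises everyone's threshold, so the rumor never becomes self-fulfilling.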
Sean Carroll
And I guess maybe this is too simplistic, but I'm guessing that all sorts of financial bubbles follow a similar pattern. Maybe NFTs, non fungible tokens relied on the common knowledge that these would still be valuable to people someday that then went away.
Steven Pinker
That's right, technically common expectation, because there isn't anything to know. But yes, it's the same phenomenon indeed. In fact, that's what bubbles are, and runs and crashes and panics. John Maynard Keynes had an explanation of all these phenomena in finance and economics that can't be explained by the standard rational-actor models of supply, demand, investment and so on. He likened speculative investing, which is what generates these bubbles and crashes, where you don't buy something based on the underlying value of the asset, like: someone built a factory, the factory is going to produce so many widgets per year, they're going to make so much profit per widget, I'm going to get my share of those profits. That's kind of the way stock markets ought to work rationally, but we know that they don't. The reason, Keynes explained by asking people to imagine a beauty contest. He actually claimed that it ran in the British papers at the time, which is dubious. The object is not to pick the prettiest face, the way the old Miss Rheingold beer ads, probably before most of us were born, had six models and you had to pick the prettiest. No, in this contest, you had to pick the face that the most other people picked as the prettiest, knowing that they were each picking a face while trying to outguess everyone else picking it. And he said that would often involve, he didn't use the term recursive mentalizing, but that's what he was describing, the second, third and fourth orders of anticipation of anticipation of anticipation. And mathematically that can lead to runaway behavior, when people want to be in on an appreciating stock, which makes the stock appreciate, which makes more people want to be in. Sometimes this is called the greater fool strategy of investing: you buy something in the hope that you can sell it at a profit to someone else.
Why would anyone else buy it for more than you paid for it? Well, they're hoping that they can sell it to someone else for even more than they paid for it. But soon enough the market runs out of greater fools, or whatever rumor or common-knowledge-generating salient event triggered the bubble gets contradicted by some bit of news that causes reverberant fear, fear about fear about fear, and then the bubble can pop. So a lot of the phenomena in finance that don't depend on fundamentals, the irrational exuberance, as Alan Greenspan put it, the crashes, the runs for the exits, are phenomena of common expectation.
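[Editor's note: Keynes's layers of anticipation can be sketched with the "guess 2/3 of the average" game that economists use as a stylized beauty contest. This is an illustrative sketch, not an example from the book; the level-0 guess of 50 and the factor of 2/3 are conventional assumptions.]

```python
# Level-k reasoning in the 2/3-of-the-average guessing game: a level-0
# player guesses the midpoint of 0-100; a level-k player best-responds
# to a room full of level-(k-1) players by guessing 2/3 of their guess.

def level_k_guess(k, p=2 / 3, level0=50.0):
    guess = level0
    for _ in range(k):
        guess *= p  # one more layer of "what do they think I think?"
    return guess

for k in range(6):
    print(k, round(level_k_guess(k), 2))
# Each added order of anticipation pushes the guess lower; in the limit
# of fully recursive mentalizing, everyone guesses 0.
```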
Sean Carroll
And I don't want common knowledge to get a bad reputation here from all these examples of financial ruin in our future. But there are all these, I don't know what to call them, puzzles, logic games, examples in your book and elsewhere that try to illustrate this phenomenon of common knowledge. And I'll confess, despite being pretty good at math and logic, these are almost never illuminating to me. I think the one that came closest was a cartoon you had in the book of three logicians walking into a bar. Is that something that is explicable in real time?
Steven Pinker
Yes. So that doesn't literally involve common knowledge, but it does involve recursive mentalizing, that is, thinking about what other people think. As I recall the cartoon, the caption is three logicians walk into a bar. The waitress comes over and says, does everyone want beer? And the first one says, I don't know. The second one says, I don't know. The third one says, yes. So that's a logic puzzle, and you can figure it out. "Everyone wants beer" is true if each one of them wants beer, and it would be false if anyone didn't want beer. So if the first one says, I don't know, she must want beer, because if she didn't want beer, she would know that "everyone wants beer" is false. The fact that she didn't say it's false means that she does want beer. The second one goes through the same logic. The third one, knowing that the first one didn't know and the second one didn't know, can conclude that the first one wants beer and the second one wants beer. So if she herself wants beer, she can say yes. She figured out a state of affairs from the epistemic or cognitive state of the other characters in the bar. So that's pretty easy; it's intuitive. I mean, you might have to think it through for another couple of seconds. It is similar, not isomorphic, but similar in the way of solving it, to what's been called the world's hardest logic puzzle. And this one really is counterintuitive until it's explained, and then it really does make sense. It goes by various names: the muddy children problem, the barbecue sauce problem. I describe it in terms of a bunch of academics at a conference, some of whom have spinach in their teeth, but no one knows who. And they deduce it, in this case they really do deduce it, from common knowledge. I can run through it if you like.
Sean Carroll
Yeah, I'm torn. Why don't you run through it, just to illustrate to the audience either that it's harder than it looks or that they're much smarter than me. I'm happy either way, because I can get it if I sit down with a piece of paper and think about it, but it doesn't really illuminate anything right away for me.
Steven Pinker
Same here. So here's the problem. You've got a bunch of psychologists or academics at a conference, in the dining room. Some of them have spinach in their teeth, but there aren't any mirrors around. No one wants to pick their teeth clean if they don't have spinach in their teeth, and everyone's too polite to point out that someone else has spinach in their teeth. But the department chair, who's presiding over the meeting, can't stand it any longer. She gets up at the front and she says: at least one of you has spinach in your teeth. Every time I clink my glass, that's an opportunity for you to clean your teeth. Now she clinks the glass once. No one moves. She clinks the glass twice. No one moves. She clinks the glass a third time, and the three people in the room who have spinach in their teeth all clean their teeth. They didn't do it on the first clink or the second, but on the third they all did. How did they know? Again, there are no mirrors, and no one's telling anyone else. Here's the explanation. It's a kind of mathematical recursion: if you see the logic for one, and then for two, you can extrapolate and say, well, it applies to any number. So here's the way it works with one. It's really easy. Say the ground truth is that one academic has spinach in his teeth. The department chair says, at least one of you has spinach in your teeth; when I clink the glass, you can clean it. Everyone looks around. The guy with spinach in his teeth looks around and sees that no one else has spinach in their teeth. If someone has spinach in his teeth, it has to be him. That's easy, kind of obvious. Now let's go to the case where two people have spinach in their teeth. Again, the department chair makes the same announcement. Everyone looks around. The man with spinach in his teeth sees one other person, a woman, with spinach in her teeth.
He still doesn't know about his own teeth, because all the chair said was "at least one of you." So he doesn't know whether to clean his teeth or not. Now she, seeing pretty much the same thing he does, also doesn't know whether to clean her teeth, because she doesn't know whether he's the only one. So on the first clink of the glass, they don't do anything. Then the chair clinks the glass a second time. Now each one can think: well, geez, if she were the only one, then on the first clink of the glass she would have known to clean her teeth, because she would have looked around, seen that everyone else's teeth were clean, and known it had to be her. She didn't. Therefore she must have seen someone with spinach in their teeth. I'm looking around, and no one else but her has it. It must be me. She thinks the same thing. And so on the second clink, they both know that they have to clean their teeth. That's the logic. If you accept that, then you also realize that three people with spinach in their teeth will clean it on the third clink. If the room has 100 people and 17 have spinach in their teeth, and assuming they're all logicians, they'll all clean their teeth on the 17th clink. But that crucially depends on common knowledge. It wouldn't work if the department chair went over and whispered the announcement in each person's ear. If you didn't know that everyone else knew, then the fact that the woman with spinach in her teeth didn't clean her teeth would not convey the information that she had seen someone else with spinach in their teeth. So it crucially depends on common knowledge. That's the world's hardest logic puzzle. Allegedly.
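[Editor's note: the spinach puzzle can be checked mechanically with a small possible-worlds simulation, the way epistemic logicians formalize it. This is an illustrative sketch, not code from the book: worlds are sets of spinach-afflicted diners, the chair's announcement removes the "nobody has spinach" world, and each silent clink publicly eliminates every world in which someone would already have known.]

```python
# Possible-worlds simulation of the spinach-in-the-teeth puzzle
# (a.k.a. muddy children). Agents are numbered 0..n-1; `dirty` is the
# nonempty set of agents who actually have spinach in their teeth.
from itertools import combinations

def spinach_rounds(n, dirty):
    """Return the clink on which every spinach-afflicted agent knows it."""
    dirty = frozenset(dirty)
    # Worlds compatible with the announcement "at least one of you".
    worlds = {frozenset(c) for k in range(1, n + 1)
              for c in combinations(range(n), k)}

    def knows(agent, world, live):
        # Worlds the agent can't rule out: same spinach status for
        # everyone else, and not yet publicly eliminated.
        candidates = [w for w in live if w - {agent} == world - {agent}]
        return all(agent in w for w in candidates)

    clink = 0
    while True:
        clink += 1
        if all(knows(a, dirty, worlds) for a in dirty):
            return clink
        # Nobody moved: publicly rule out every world in which
        # someone would have known and cleaned their teeth.
        worlds = {w for w in worlds
                  if not any(knows(a, w, worlds) for a in w)}

print(spinach_rounds(3, {0, 1, 2}))  # three afflicted -> 3
```

Note that the public announcement is doing real work here: it removes the empty world and starts the clock. A private whisper to each diner would leave every agent's candidate set untouched, and no clink would ever be informative.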
Sean Carroll
It's pretty hard. Maybe not the world's hardest, but having been in faculty meetings, it's unrealistic to think that all the rest of the faculty would be perfect logicians in quite that way. It is realistic, though, to think that the department chair would be annoying enough not to just say "three people have spinach in their teeth" at the start.
Steven Pinker
And you could just count, which would...
Sean Carroll
...have made it a lot simpler.
Steven Pinker
But okay, so it works better if the academics are logicians.
Sean Carroll
Yeah, very much, yes. And they've probably heard puzzles like this before. I mean, there's a similar counterintuitive result that maybe is a little bit more profound, but it's really at the heart of this whole game: Aumann's agreement theorem, which is one of these things with a fairly trivial proof but a conclusion that makes you think, that can't possibly be right. So why don't you explain that to us?
Steven Pinker
Yes. So this is a theorem that reasonable people cannot agree to disagree. Well, that's not exactly right, but close. It was proved by an Israeli mathematician, Robert Aumann. He won a Nobel Prize, not for this; I guess that happens with Nobel prize winners, that even the ideas they don't win Nobel Prizes for are ingenious. It's a very simple paper, the whole thing is three pages, and he says the idea is simple but it's not intuitive. Understatement. So here's the theorem. Take two rational agents with the same priors, in the Bayesian sense of their credence in a hypothesis before they've even looked at the evidence, that is, based on their entire understanding of the world, everything that they've discovered so far. They then make their posteriors common knowledge. That is, after looking at the evidence, and each one can look at different evidence, they don't have to see each other's evidence, one just announces: my estimate is 0.7 that this hypothesis is true. And the other one announces her posterior. The theorem says those posteriors must be the same; that is, they cannot agree to disagree. Now, what's surprising about it: there's a less surprising version, which is that if they're completely rational and each of them shares the evidence that motivated their posterior, their conclusion, then you might say, well, if he's rational, I've got to trust him, and there's no reason I shouldn't take his evidence seriously just because it's his evidence and not my evidence. Evidence is evidence; it's not about me, it's not about him. So that would be a little more intuitive, if you swallow the assumptions that they share the same priors and they're both completely rational. The surprise is that they don't have to actually share their evidence. They just have to share their posteriors, i.e., their assessments of what the evidence means, and those posteriors must be the same.
Now, one way to think about it, and this isn't actually how the theorem goes, but it was worked out by later logicians: you can imagine one of them announcing her posterior, that is, I give 0.7 credence that the hypothesis is true. The other one announces his posterior: I actually only have 0.4 confidence that it's true. Then the first one will say, oh, well, if you say it's 0.4 and I say it's 0.7, I'm going to now update my posterior; here's my new posterior. And then he updates his, and they end up in the same place. So the idea is that they have to end up in the same place if they're both rational, even if neither one gives the basis for the conclusion. The surprise being that if they do it that way, they don't gradually converge and meet somewhere in the middle, which is kind of how we expect arguments to go. Their positions are random walks that end up in the same place but could go every which way. They could leapfrog each other, they could outdo each other, they could go from a moderate position to an extreme position, until the final step in which they end up at the same conclusion. It's a little bit like the spinach-in-the-teeth problem, in that it's only on the third clink that suddenly everyone comes to the same realization. Now, this sounds kind of absurd. Isn't it good to agree to disagree? And when people in a rational argument don't, don't you meet somewhere in the middle? But like all mathematical theorems, it's only as valid as its premises are true, and sharing priors itself raises a whole bunch of questions. But the reason that I discuss this, the reason I discuss the spinach-in-the-teeth problem, even though they're sort of esoteric mathematical problems, is that I think they do have implications.
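The back-and-forth described here was formalized by later game theorists (Geanakoplos and Polemarchakis's "We can't disagree forever" protocol). Below is a toy sketch with made-up numbers, not anything from the conversation: two agents share a uniform prior over four states but observe different partitions of them, and alternately announce their posterior for an event; each announcement reveals which states are consistent with it, shrinking the pool of states everyone knows to be possible, until the announcements agree.

```python
from fractions import Fraction

def posterior(prior, event, info):
    """P(event | true state lies in `info`), computed from the shared prior."""
    return Fraction(sum(prior[s] for s in info & event),
                    sum(prior[s] for s in info))

def dialogue(states, prior, part_a, part_b, event, true_state, max_rounds=10):
    """Alternating posterior announcements until the two agents agree.

    Each agent's information is their partition cell intersected with the
    public pool of states still considered possible; announcing a posterior
    reveals which states would have produced it, shrinking the pool.
    """
    public = set(states)
    transcript = []
    partitions = [part_a, part_b]
    for r in range(max_rounds):
        part = partitions[r % 2]
        own = lambda s: next(c for c in part if s in c) & public
        announced = posterior(prior, event, own(true_state))
        transcript.append(announced)
        # Common-knowledge update: keep states consistent with the announcement.
        public = {s for s in public
                  if posterior(prior, event, own(s)) == announced}
        if len(transcript) >= 2 and transcript[-1] == transcript[-2]:
            break
    return transcript

# Hypothetical example: uniform prior on four states, event E = {1, 4}.
states = {1, 2, 3, 4}
prior = {s: 1 for s in states}
part_a = [{1, 2}, {3, 4}]        # what agent A can distinguish
part_b = [{1, 2, 3}, {4}]        # what agent B can distinguish
result = dialogue(states, prior, part_a, part_b, {1, 4}, true_state=1)
```

Note the path of announcements: 1/2, then 1/3, then agreement back up at 1/2. The agents don't split the difference; the announcements wander before landing at the same value, which is the "random walk" behavior described above.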
So in the case of argument, when you think about it, why should two people meet in the middle? Who says that the truth has to lie halfway in between the opinions of two guys? What guarantee is there that they'd be straddling opposite sides of the truth? Likewise, why should you privilege your own assessment over anyone else's, on the charitable assumption that they're as rational as you are? Now, of course, I think I'm more rational than everyone else, but I would, wouldn't I?
Sean Carroll
Everyone else thinks that too.
Steven Pinker
Everyone else thinks that too. Right. So there really is no reason to privilege your own assessment, on the assumption that other people are as rational as you are. And a final implication, and this is a little bit fanciful: a linguist, George Lakoff, and a philosopher, Mark Johnson, in a famous little book they published 45 years ago called Metaphors We Live By, noted that language contains lots of metaphors that we don't even realize are metaphors, which allow us to talk about abstract concepts in concrete terms. And one of the metaphors they discuss is that argument is war: I demolished his position; he tried to defend it, but I found the weak spot. We use the language of war in talking about arguments. And as a kind of whimsical thought experiment, Lakoff and Johnson say, well, do we have to think about argument as war? Why don't we think of it as a dance? And as it happens, the sequence of reaching agreement in Aumann's construction is in some ways more like a dance than like a battle. It's a random walk, so you could lurch and weave and bob all over the place before arriving at agreement. So this esoteric mathematical theorem might actually have some insight. And again, just to tie it to the implications we ought to draw: you and I both know that a lot of arguments among academics, among politicians, are kind of pissing contests. It's about who's going to win. Often people use dirty debating tricks: they set up a straw man, they look for a loophole that the person just neglected to mention. It's not the best way of arriving at the truth to make it a combat sport, because the truth is the truth; it doesn't care about your ego. If all you're doing is trying to win, that's not the same as trying to get to the truth.
And so Aumann's theorem is, in some ways, an exercise in epistemic humility, and might push back against the bad habit we have of seeing an argument as something that we want to win.
Sean Carroll
Well, yeah, it's an ideal theory kind of thing, right? It's not a descriptive kind of thing. It's more like: this is what we should aim for.
Steven Pinker
Exactly. And that is exactly the way I present it.
Sean Carroll
And just to make the assumptions super clear, because like you say, the theorem is only as good as its assumptions. The conclusion that two rational people can't agree to disagree is.
Steven Pinker
If they have the same priors, if.
Sean Carroll
They have the same priors, if they're both perfectly logical, and, I think, if they both agree that each other is perfectly logical. That's common knowledge, right?
Steven Pinker
That is totally right. Not only do the posteriors have to be common knowledge, but each other's rationality has to be common knowledge. You are right. Which, by the way, also applies to the spinach-in-the-teeth problem.
Sean Carroll
Right.
Steven Pinker
That is, rationality has to be common knowledge. And in general, in game theory, almost everything depends on a background assumption that the parties are rational and that their rationality is common knowledge. I mean, that's how you psych out the other person. You assume that they're rational.
Sean Carroll
And my guess, without any data, and maybe you have some data to share with us, is that the fact that people do disagree is sort of half because they have different priors and half because they're just not convinced the other person is being rational.
Steven Pinker
Yes, I think that's a large part of it. And I discuss a paper by Tyler Cowen and Robin Hanson where they look at the contrast between ideal argumentation and real argumentation and ask, why don't we behave like the rational agents in Aumann's theorem? And they suggest something that's kind of a commonplace to any social psychologist, which is that people are kind of dishonest, in the sense that they don't approve of other people bending the evidence in their favor or setting up straw men, but they do it themselves.
Sean Carroll
And is there a feeling that, given this theorem, et cetera, maybe this can inspire us to take more seriously the opinions of others? I'm sort of thinking of a common move you get in the media or social media, where people will say: look, I said this and everyone disagrees with me, I must be onto something. Which is sort of the opposite of what Aumann would have us believe.
Steven Pinker
Well, it's the opposite of a lot of Bayesian thinking in general. And actually, this is something that I worked out in my book Rationality: I think that the bias, in science journalism but probably in science itself, to favor the paradigm-threatening discovery, going against conventional wisdom, overturning the consensus, the rebel, the upstart, is probably responsible for a lot of error. Science magazines love this stuff; the clickbait is: was Darwin wrong? Was Einstein wrong? And the reason that it's a recipe for error is that it's very un-Bayesian. It's throwing out the priors. It's treating the latest little tidbit of evidence as if it were reason to change your entire understanding. Whereas if there was some reason for a consensus, for the textbook view, sometimes denigrated as "the dogma"...
Sean Carroll
Right.
Steven Pinker
Well, you know, probably a lot went into that. That's your prior. Maybe if there's a contradictory bit of evidence, you should update and decrement your confidence a little bit. But you shouldn't throw everything out the window and just assume that the result of the experiment announced this morning is the truth.
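The point being made here, decrement your confidence rather than throw out the prior, is just Bayes' rule in odds form. A minimal sketch with illustrative numbers (ours, not from the conversation): a textbook consensus held with 0.95 credence meets one surprising study that is ten times likelier if the consensus is wrong than if it is right.

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior credence after one piece of evidence, in odds form.

    `likelihood_ratio` is P(evidence | hypothesis) / P(evidence | not hypothesis).
    Posterior odds = prior odds * likelihood ratio.
    """
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative numbers: the consensus survives one contrary study,
# with its credence decremented, not demolished.
updated = bayes_update(0.95, 1 / 10)   # about 0.66, still above 0.5
```

One contrary result drags the credence from 0.95 down to roughly 0.66: a real update, but nowhere near "throw out the textbook," which is exactly the asymmetry between journals and textbooks discussed next.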
Sean Carroll
Right.
Steven Pinker
I think that's one of the reasons why we've had a replicability crisis: the journals themselves, but also the science journalism, give undue weight to the particular discovery and downplay the prior. There's one physicist, I think his name is Ziman, who said, and you might disagree with this, it may be a bit of an exaggeration: 90% of what's in the journals is false; 90% of what's in the textbooks is true.
Sean Carroll
I believe that. I get the spirit of it, anyway.
Steven Pinker
And there's a famous epidemiologist, John Ioannidis, who published a kind of scandalous paper about 20 years ago, "Why Most Published Research Findings Are False." And often the reason is that if you just confirm the consensus, you don't get a publication out of it.
Sean Carroll
Exactly. But I'll be honest, I've often struggled with this question of what to do with consensuses, because on the one hand, you make progress by showing the consensus isn't right; on the other hand, the consensus is usually right. So you don't want to default 100% to either side. Aumann's theorem notwithstanding, the real-world implementation of it is tricky.
Steven Pinker
No, indeed. I think what we can say is that there is a widespread tendency in science journalism, but also in science journals and among scientists, to over-update in response to a single data point. I think that's probably a bad habit. Going back to Aumann, what does it mean? You've heard of the so-called rationality movement, which is, how could that be a movement? Aren't we all supposed to be rational all the time?
Sean Carroll
The irrationality movement on the other side.
Steven Pinker
Yes. So the rationality movement, with its sort of unofficial headquarters in Berkeley, is the attempt to call attention to the fallacies and biases that cognitive psychology and behavioral economics have documented, and to try to overcome them, often with Bayesian reasoning. Sometimes the rationality community are just called Bayesians. But they have canons of argumentative hygiene, or best practices, which include things like: stating your degree of credence, your posterior, as a quantity between 0 and 1, so instead of saying "this is right, that's wrong," saying "I have about 0.7 confidence in this"; steel-manning rather than straw-manning your opponent, that is, don't set up a straw man that's easy to knock down, but argue against the strongest version you can imagine; and engaging in adversarial collaborations, where you get together with your worst enemy and decide a priori what would settle the issue to both of your satisfactions, and then go out and gather the data. All of these practices are in the spirit of Aumann. So much so that the conference center in Berkeley that was set up as kind of a home for rationality conferences named its main meeting room Aumann Hall.
Sean Carroll
So no one comes out disagreeing. That's great. I'd like to do that experiment. Okay, so good. I'm glad the audience was kind enough to go with us on this little journey of logic and formalization and Bayesian reasoning. But one of the fun parts of your book is demonstrating the implications of common knowledge in our everyday life, as human beings. And so you draw an interesting distinction between cooperation, which does rely on common knowledge, and coordination, which also does, but in a sort of more central way, maybe.
Steven Pinker
Yeah. So, admittedly, these are somewhat specialized usages, but cooperation, as it has been discussed in evolutionary biology and to some extent in economics, usually experimental economics, refers to the case where one person, or one animal for that matter, confers a benefit on another at a cost to itself. And it's a weighty scientific problem, because one could ask how cooperation, in particular how altruism, probably a better term, could ever have evolved, given that, all things being equal, you'd expect natural selection to favor selfishness. And since Richard Dawkins' book The Selfish Gene, there's been a lot of discussion of how cooperation can evolve through reciprocity, through reputation, and so on. But what I came to realize is that a lot of cases of organisms conferring benefits on one another are not altruistic, in the sense that one of them incurs a cost, raising this puzzle, but are mutualistic; that is, everyone wins. Take the case of a bird that picks ticks off the back of an ox. The bird doesn't have to be repaid; the ox doesn't have to pick ticks off the back of the bird, even if it could. The bird gets a meal, the ox gets fewer pests, and everyone wins, except the ticks, who don't get a vote. These are cases that biologists call mutualism. And a lot of human working together is not altruistic but mutualistic: both parties win. At a potluck dinner, you bring complementary courses. You're not doing someone a favor by not bringing dessert while they bring dessert; both of you end up better off that way. Two people meet for a coffee date; it's important that they both pick the same place, but neither one is doing the other a favor by going to that place. They both want to end up in the same place.
The reason that mutualistic coordination is also a scientifically interesting problem is not the danger of being exploited, as in altruistic cooperation, where someone keeps getting all the goodies but never repays them when their turn comes. The problem in coordination is one of knowledge: namely, how do you end up on the same page, given that it isn't enough to know what the other guy's going to do? You have to know that the other guy knows what you're going to do, and so on and so on. So the logical problem of coordination requires common knowledge as its solution, in turn raising the question, for a psychologist, of how people attain common knowledge.
Sean Carroll
This is probably petty of me. Not even petty, because I have no dog in the fight, as it were. But as soon as you mentioned Richard Dawkins and The Selfish Gene, I thought of the debates a few years ago about the origin of and explanations for altruism, et cetera, centered on kin selection versus group selection. I'm sure you were aware of them at the time. They were not perfect examples of Aumannian rationality at work. It got pretty vitriolic.
Steven Pinker
If anyone wants to dip into that, you can Google a paper that I wrote a number of years ago called "The False Allure of Group Selection." It was published on Edge, and it has commentaries by some defenders of group selection. I think the whole notion is rather confused. Perhaps I was not being as rationalistic as the rationality community would recommend. Maybe I did not steel-man the defenders of group selection. I think I did. But the thing about all these things is, I'm not the one to judge. Of course, I would think that I am.
Sean Carroll
Exactly. But also, to be fair to us as academics, we do have this practice of writing something, publishing it, and then inviting responses, including from people who disagree with us. Which is something that might be a model for elsewhere in the world.
Steven Pinker
Indeed. Which is why academic freedom, another one of my hobby horses (I co-founded the Council on Academic Freedom at Harvard), is, I would argue, indispensable. Not because academics are special and should be allowed to do whatever they want; we're not privileged compared to anyone else. But because without academic freedom, you can't converge on the truth. Because of the process you just described: namely, you publish something; you might be right, you might be wrong, you don't know. You will only know when other people get to attack it, try to falsify it. If you disable that process by canceling or punishing someone for what they believe, you're never going to find out what's true or false.
Sean Carroll
Okay, good. So let's dig into more of this cooperation/coordination question. I mean, there does seem to be, or maybe I'm perceiving it where it's not there, a chicken-and-egg problem. Like, how do we all know that we have the common knowledge to drive on the correct side of the road, et cetera? Is this a capacity that human beings have? Is it different among different species?
Steven Pinker
Yeah, I think we do have it. And other species have coordination problems too, which they have to solve not by common knowledge, because most other species aren't very bright, but through a similar mechanism: namely, a conspicuous public event. That's the typical way in which we humans generate common knowledge, which we use in all kinds of ways to coordinate on evolutionarily unprecedented things, like money or organizations and institutions. But even an organism as simple as coral, which doesn't even have a brain to have thoughts, let alone thoughts about thoughts, faces a coordination problem, because they're stuck to the ocean floor. They've got to reproduce, but they can't go out on dates; they can't even have intercourse, because they're sessile, fixed to the floor. What do they do? Well, they spew gametes, eggs and sperm, into the ocean, in the hope that they'll meet up with their counterparts from some other coral. But they can't spew out eggs and sperm 24/7; it's metabolically expensive. So it's in all of their interests to somehow "agree," in scare quotes, on what day to do it. Now, they can't talk, they can't think. They have to tacitly agree, or behave as if they agree. And the way they do it is to use the full moon as the common knowledge generator. A fixed number of days after the full moon, and it differs for different species, they engage in what marine biologists call the Great Barrier Reef annual sex festival: namely, five days after the full moon, they all spew, and the egg and the sperm find each other. They don't literally have common knowledge, but they solve a coordination problem with a public, conspicuous event.
Sean Carroll
That absolutely makes me think of an analogy I just thought of that may be useless to everyone but myself, but I feel the need to give it anyway: the horizon problem in inflationary cosmology. When we look in different directions of the sky and see the relic microwave background radiation from the Big Bang, they're the same temperature, even though, in the traditional cosmology, they were never in causal contact with each other. How did they know to be at the same temperature, even though the temperature changes with time? And the inflationary-universe scenario is a big common event that sort of tells them to set their clocks in the same way, and therefore they can be more or less the same temperature.
Steven Pinker
Interestingly. So they're too far away to exchange information; they're farther apart than light could have traveled.
Sean Carroll
That's right.
Steven Pinker
But rewind the clock and there's a point at which they were cheek by.
Sean Carroll
Jowl, when you introduce a phase of inflationary expansion at early times. Now they were in causal contact with each other, and this tiny little patch of space expands to put them so far apart that it looks like they were never talking to each other. But in fact, there was a secret communication.
Steven Pinker
Interesting. Interesting. At least a secret interaction.
Sean Carroll
Yeah. Not going to help when you're out there on the street trying to explain these things, but that's okay. But it gets very interesting in the book, which I do recommend to people. It sort of goes both ways, this coordination problem. Like you already hinted at earlier, sometimes we are abetted by taking advantage of common knowledge, and everyone drives on the right side of the road. Other times we sort of don't want there to be common knowledge, or we speak in intentionally elliptical ways so we can have some plausible deniability.
Steven Pinker
Exactly. And that's in a chapter of the book called Weasel Words, where I discuss why we so often speak in euphemism and innuendo and hints, and also why, in the nonverbal equivalent, we avoid eye contact. I have another chapter, called Laughing, Crying, Blushing, Staring, Glaring, on nonverbal displays that I argue are common knowledge generators. Eye contact: you're looking at the part of the person that's looking at the part of you that's looking at the part of them, et cetera. Blushing: you feel the heat inside your cheeks at the same time as other people can see the reddening on the surface of your cheeks, and they know that you know that you're blushing. Laughter: your breathing is interrupted at the same time as other people can hear the staccato sounds of laughter. So all of these are common knowledge generators that sometimes we try to avoid: we stifle a laugh, we choke back a tear, we avoid eye contact. Hence sayings like "Can you look me in the eye and say that?", when someone is trying to avoid common knowledge and you're trying to generate it.
Sean Carroll
I truly don't know the answer to this: are the rules and implications of things like eye contact and blushing universal among cultures?
Steven Pinker
Probably. Like a lot of universals, they're kind of statistically universal and not 100%, depending on the context, with some exceptions and some parametric variation. But eye contact, for example, as a potent signal, often a signal of threat, I suspect is universal. It certainly operates in other primates, but.
Sean Carroll
It's also a signal of, like, romantic interest.
Steven Pinker
Yeah. So we humans take what we evolved with and we repurpose it. Eye contact, which in other primates is generally a threat signal: the dominant stares at the subordinate, who looks away; if their eyes lock, there's going to be a fight. And that's also true of humans, as in the barroom taunt "You looking at me?", or the ultimate fight-club stare-down, where the two of them look into each other's eyes and see who flinches. But in humans, as in "Can you look me in the eye and say that?", eye contact is more general. It's a signal that what has so far been private knowledge between us is henceforth common knowledge. And one of the most common examples is flirtation. In flirtation, as with the dominant staring at the subordinate who looks away, the flirter looks at the flirtee, who then kind of looks away, keeping it at the level of flirtation. If their eyes lock, that often means something serious is going to happen. My late colleague Irv DeVore, a biological anthropologist at Harvard, used to tell his class: if two people anywhere on Earth look into each other's eyes for more than six seconds, then either they're going to have sex or one of them is going to kill the other.
Sean Carroll
Is this something that we could even raise to the level of predictive theory? Like can we think this way and make predictions for psychology experiments we haven't done yet?
Steven Pinker
Oh, yes, and I do that in the book. I've published a fair amount of experimental work testing predictions from ideas of common knowledge.
Sean Carroll
Can you give us some examples of that?
Steven Pinker
Yeah, let's see. We did a study on self-conscious emotions, that is, embarrassment, shame, guilt. And the hypothesis was that what makes someone feel self-conscious is not so much that some faux pas or infraction was detected, but rather that you acknowledge that it was detected.
Sean Carroll
Good.
Steven Pinker
That is, it's the common knowledge that drives the acute embarrassment; each of you can get away with something if you don't acknowledge it. I'll give a kind of rude example, but let's say you pass gas, and it's audible enough that you suspect others have noticed.
Sean Carroll
Everyone knows.
Steven Pinker
However, if you were then to meet someone's gaze, that would be way worse. You could kind of look away, pretend they didn't hear it, pretend that they don't know that you noticed them hearing it. But what is truly mortifying is the common knowledge. And so we had people imagine themselves in various compromising circumstances and varied the levels of knowledge: does the onlooker know? Do you know that the onlooker knows? Does the onlooker know that you know that they know? And so on. And we found that, indeed, what was most mortifying was common knowledge. Then, because we didn't want to just do it hypothetically, with people kind of fantasy role-playing, we actually put them in a circumstance in which they could be embarrassed: namely, they had to give a karaoke performance. In this case, we chose Adele's Rolling in the Deep, which has a soaring chorus. They were told that their vocal stylings were being judged by a panel of fellow students, and they could see the panel in a video feed. In reality, it was a recording, but we told them it was live. And either they thought that the panel of judges knew that they, the singer, could see the judges, or they thought that the judges did not know that the singer knew they were being observed. And then we asked them: okay, now you've sung the soaring chorus, how embarrassed did you feel? And they felt way more embarrassed if they thought that the judges knew that they knew they were being judged. And there are everyday examples where one of two people might suddenly realize that the other one kind of insulted them or worked against them, and each one can even suspect the other knows. But as long as neither one says it, they can maintain their friendship, and neither has to feel embarrassed.
Sean Carroll
Is it an example of common knowledge that something embarrassing just happened, but we're not going to acknowledge it?
Steven Pinker
If we don't acknowledge it, I suggest, that's what keeps it out of common knowledge. That is, there could be private knowledge; there could even be reciprocal knowledge: I know that he knows, he knows that I know. But it may not be common knowledge; that is, I may not know that he knows that I know, etc. My argument is that it's common knowledge that drives our relationships and most strongly drives our self-conscious emotions: our awkwardness, our shame, our mortification, our embarrassment, our outrage.
Sean Carroll
I guess I'm just asking: could there be relevant examples where there is common knowledge, we all know something just happened, we all know it's embarrassing, but we socially agree not to acknowledge it?
Steven Pinker
We do. Oh, yes, that helps. That is the elephant in the room; that's the pretending to look away. Think of it as a common pretense: there is some common knowledge there, and the common knowledge is that we pretend as if something opposite to reality is the case. And people often do this. I have a discussion of how, if someone has a speech impediment, or if someone is obese, everyone knows it, but you try to avoid talking about it. It's a little bit odd, and one could even argue that it's dysfunctional. I reproduce an interview with a woman named Lindy West who, quote, came out as fat. Now, that's a little bit weird, the analogy with coming out of the closet, coming out as gay, because, as the interviewer said, no one's going to say, oh my God, dude, I can't believe you're fat. But she said: I appreciate people's considerateness, but the burden of pretending that I'm not fat kind of distorts things, and I think we'd be better off if we said, look, I'm fat, and you know that I'm fat; let's just get it on the table. By the way, "get it on the table": another metaphor for common knowledge. Now, not everyone would go along with her, and I certainly would not be the first to point out that one of my companions has a high body mass index, even if it were obvious to everyone. But it shows the tension between what everyone knows and what everyone knows that everyone knows. And this common pretense, this elephant-in-the-room metaphor, is commonly pretending that something isn't true.
Sean Carroll
Yeah, I don't know whether to feel weighted down or energized by the knowledge of all these conventions that we have chosen in order to get through the day, and how they can fail. I was once told, and I have no idea whether this is true, that one of the reasons why France and Germany always had wars with each other is that the French get insulted when you don't fill them in on everything, and the Germans get insulted if you assume they don't know something and you try to tell them. So whenever they would have peace negotiations, they would end up in recriminations.
Steven Pinker
Well, interesting. So whether or not that's literally true, what is true is that an awful lot of wars are fought over saving face, losing face, honor, humiliation, including the war in Ukraine right now. What is it about? It's really about Russia's desire to undo their humiliation at the hands of the West. There's the scene in Duck Soup, the Marx Brothers movie, in which Freedonia goes to war against Sylvania because Groucho Marx's character imagines what it would be like if the other country's ambassador refused to shake his hand. And so.
Sean Carroll
The levels of trying to anticipate what the other person is thinking, this is something very familiar to poker players, because you have to say: I think this person thinks that I think this, and that they think that I think this. And I presume that in poker, since it's a finite game, there's only some number of things that can happen, so there should be some equilibrium that you eventually hit. But are there psychology studies on how good human beings are at going to the level of thinking that the other person thinks something, and then that they think that I think something?
Steven Pinker
So I imagine. I don't know how many studies there are. But what I do know is that two psychologists, one of whom was a former student of mine, have made careers in poker, each becoming a celebrity in the process. Maria Konnikova, who was my undergraduate at Harvard and a former Mindscape guest, and Annie Duke, whom I knew as a student. She wasn't my student, but she originated in the same field as me, child language acquisition, before making the leap to being a card sharp. Both of them are gifted cognitive psychologists who presumably put their tacit knowledge to work. Now, poker is very interesting because we have the expression "poker face": any tell can be used against you. It's obviously a quintessential game-theoretic situation. In fact, John von Neumann invented game theory to deal rationally with poker, because it was a game of imperfect information, a game of strategy. It was not like chess, which is perfectly determinate; poker involves bluffing and calling and so on. And it's a case in which a poker face, and in some cases an ability to be perfectly random, is an advantage, because as soon as you deviate from being random, that is something your opponent can use against you. So it's an outguessing standoff, as in, say, hockey: the shooter can shoot left or right, the goalie can defend left or right, and if either of them has a preference, the other one can use it to their advantage. The optimal shooter and the optimal goalie have a mental random number generator, which is very hard for.
Sean Carroll
Human beings to do.
Steven Pinker
Which is very hard for humans to do. Yes.
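Pinker's point that any deviation from randomness is exploitable can be sketched numerically. Here is a toy model (my illustration, not from the conversation): in a simplified shooter-versus-goalie game where the shooter scores only when the goalie covers the wrong side, a goalie who knows the shooter's bias simply covers the likelier side, so the shooter's scoring rate peaks exactly at 50/50.

```python
# Toy "outguessing standoff": shooter vs. goalie, matching-pennies style.
# The shooter scores if the goalie guesses the wrong side. If the shooter
# shoots left with probability p, a goalie who knows p best-responds by
# always covering the likelier side, so the shooter scores with
# probability min(p, 1 - p), which is maximized at p = 0.5.

def shooter_score_rate(p_left: float) -> float:
    """Scoring probability against a goalie who best-responds to p_left."""
    return min(p_left, 1.0 - p_left)

for p in (0.5, 0.6, 0.8, 1.0):
    print(f"shoot left {p:.0%} of the time -> score rate {shooter_score_rate(p):.0%}")
```

Any bias (p of 0.6, 0.8, or 1.0) hands the goalie an edge; only the fully random mixed strategy leaves nothing to exploit, which is von Neumann's minimax insight in miniature.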
Sean Carroll
Yeah, that reminds me, and maybe this is a good place to start winding up, of abuses of the idea of common knowledge. I'm not sure whether this is a relevant example, because I made it up rather than getting it from your book. The example I thought of was the okay sign, when people hold up their fingers to show okay, and how this has been co-opted by white-power groups to signal that they're in that in-group. But then when people say, oh, you're making the white power sign, they say, what do you mean, I'm just making the okay sign.
Steven Pinker
Well, in a notorious case, an innocent truck driver got fired.
Sean Carroll
Oh, I didn't know that.
Steven Pinker
And someone caught him on cell phone video making the okay sign. This poor schlemiel didn't have a racist bone in his body; he was Hispanic, and he lost his job. This is what led Yascha Mounk to write an article, during peak wokeness, saying stop firing innocent people. But yes, going back to common knowledge: common knowledge is relative to a community of knowers. You have common knowledge within some network, and if you're not part of that network, what's common knowledge to all of them may not be common knowledge to you. The common knowledge may not include you.
Sean Carroll
And is it an exaggeration to think that a failure of common knowledge gets in the way of, say, stopping dictatorships? If you have a populace, or even an establishment, that all wants to stop somebody from doing something, not going to mention any names, just hypothetically, but none of them wants to be the first mover, there's a coordination problem there, because a single person resisting will be stomped down, even if everyone resisting at once would.
Steven Pinker
Succeed? Totally, big time. And this was a point made by Michael Chwe in a book called Rational Ritual, a kind of predecessor to mine from 25 years ago or so, where he noted that public demonstrations can generate common knowledge: when everyone in a public square can see everyone else, that can give them the safety in numbers to coordinate resistance, whether by storming the palace or by engaging in work stoppages. I quote a line he could have quoted, from the character Gandhi in the eponymous movie, where he tells a British colonial officer: in the end, you will leave, because there is simply no way that 100,000 Englishmen can control 350 million Indians if the Indians refuse to cooperate. So that captures it, but he could have said "coordinate." That is, 100,000 Englishmen can control 350 million Indians if they can control them one at a time; they just can't control them all at once. And so it can be a demonstration in a public square. It can also be a newspaper or magazine article. That's why autocrats don't allow freedom of the press, why they have censorship and repression. The Arab Spring was kindled by social media, by Facebook and Google, until dictators cottoned on to that danger and started to control the Internet.
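The coordination logic here can be captured in a toy threshold model (my own sketch for illustration, not Chwe's formalism): each citizen resists only if they expect enough allies, so with no shared signal nobody moves, while a public event that makes the numbers common knowledge tips everyone at once.

```python
# Toy threshold model of coordinated resistance.
# Each citizen resists only if they expect at least `threshold` allies.
# Without common knowledge, each person's safe expectation is zero allies,
# so no one moves; a public demonstration lets everyone see everyone else,
# raising the shared expectation above every threshold simultaneously.

def resisters(expected_allies: int, thresholds: list[int]) -> int:
    """Number of citizens who resist, given a shared expectation of allies."""
    return sum(1 for t in thresholds if expected_allies >= t)

thresholds = [3] * 10  # each of 10 citizens needs to expect 3 allies
print(resisters(0, thresholds))  # no common knowledge: nobody resists
print(resisters(9, thresholds))  # public square: all 10 resist at once
```

The regime in this model can suppress any expectation below the threshold one person at a time, but the single public signal flips the entire population, which is the "control them one at a time, not all at once" point in miniature.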
Sean Carroll
I mean, famously, a few dozen Spaniards did conquer millions of indigenous Americans back in the day, because the indigenous peoples were not able to coordinate in any way.
Steven Pinker
Well, yes, and they were helped along by, as Jared Diamond put it, guns, germs and steel.
Sean Carroll
The germs helped, absolutely. Yeah.
Steven Pinker
The germs helped too. Yeah.
Sean Carroll
So I love the point about demonstrations; that's an interesting point. I used to think of demonstrations as largely making the demonstrators feel good: if you feel strongly about it, go ahead and demonstrate. But just the symbolic act of letting other people know there is so much resistance out there can have helpful coordinating effects.
Steven Pinker
Yeah. That's why it's different from, say, a public opinion poll showing that a majority of people are disgruntled with the regime, at least if the poll itself doesn't become common knowledge. If I were a rebel, the results of a confidential opinion poll wouldn't do me that much good. Great, everyone agrees with me, but I'm still going to get imprisoned if I protest. But if everyone does it at the same time, which they will do only if they know that the others will do it at the same time, then it can work, and common knowledge is necessary for that to happen. Which is why many of the quiet revolutions of the last 30 years, the Velvet Revolution, the Rose Revolution in some former Soviet republics, often were triggered by some kind of coordinating signal: everyone's cell phones went off at the same time, or people tied tin cans to the tails of stray cats, things that were considered highly subversive and stamped out by the authorities simply because of their common-knowledge-generating power. I cite a joke from the old Soviet Union where a man is handing out leaflets in Red Square. Of course, the KGB arrest him and bring him back to KGB headquarters, only to discover that he's been handing out blank sheets of paper. They confront him: what is the meaning of this? He says, what's there to say? It's all so obvious; everybody knows. Well, here's the crucial thing. Yes, everybody knows. But when he handed out the sheets and people took them, now everyone knows that everyone knows. And that's what the authorities could not tolerate. And indeed, in Putin's Russia, people have been arrested for carrying blank signs.
Sean Carroll
I don't want to say too much about it, because I've done a podcast interview that was recorded before this one but will air after it, but one of the interesting results mentioned in it was about people who believe conspiracy theories. They tend to wildly overestimate how many other people believe those conspiracy theories: if it's something that 5% of the world believes, they think 60% of the world believes it. So you've convinced me that an increase in our overall ability to have not just knowledge, but common knowledge, might make the world a more rational place.
Steven Pinker
Well, there's a name for that phenomenon. It's called pluralistic ignorance, or a spiral of silence, where everyone believes that everyone else believes it, but no one actually believes it. It's a case of common misconception and private knowledge. Yeah.
Sean Carroll
All right, well, we're going to try to clean things up. Steven Pinker, thanks very much for being on the Mindscape podcast.
Steven Pinker
Thanks for having me, Sean. Great conversation.
Sean Carroll's Mindscape Podcast, Episode 329
Guest: Steven Pinker
Topic: Rationality and Common Knowledge
Date: September 22, 2025
In this engaging episode, Sean Carroll welcomes cognitive psychologist and renowned author Steven Pinker to discuss the intricate concept of common knowledge—the subtle, often-overlooked foundation that underlies much of human rationality, cooperation, and social life. Centered around Pinker’s new book When Everyone Knows That Everyone Knows, the conversation explores how common knowledge shapes social conventions, communication, economics, embarrassment, collective action, and more. The discussion seamlessly blends real-world examples, striking logic puzzles, and psychological theory—highlighting both the power and the pitfalls of shared understanding.
Sean Carroll’s deep-dive with Steven Pinker is a rich tour of how social reality, reasoning, and human emotions pivot on the subtle but powerful structure of common knowledge. Through puzzles, stories, and psychological evidence, Pinker lays bare why societies succeed or fail to coordinate, why consensus exists, how social rituals work, and how quickly things fall apart when common knowledge is lacking or manipulated. The episode is both a window into human cognitive fragility and a celebration of our capacity for rational collective action—when we are lucky (or wise) enough to truly all be on the same page.