Getting along in society requires that we mostly adhere to certain shared norms and customs. Often it's not enough that we all know what the rules are; everyone else must know the rules too, know that we know them, and so on. Philosophers and game theorists refer to this as common knowledge. In Steven Pinker's new book, When Everyone Knows That Everyone Knows..., he explores how common knowledge (or its absence) explains money, power, and a wide variety of subtextual human interactions.
Support Mindscape on Patreon.
Steven Pinker received his Ph.D. in psychology from Harvard University. He is currently the Johnstone Family Professor of Psychology at Harvard University. He is the author of several best-selling books and recipient of several honorary doctoral degrees. Among his awards are Humanist of the Year (two different organizations) and the William James Lifetime Achievement Award from the Association for Psychological Science.
0:00:00.8 Sean Carroll: Hello everyone, and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. A few years ago, back when we still lived in LA, my wife Jennifer and I had a summer project: we took boating lessons, climbing onto a powerboat back in Marina del Rey and spending a few hours tooling around, learning to park the boat, tie it to the dock, all these things. I've forgotten everything by now. I don't know any of the nautical terms anymore, but there was a moment when, if there had been a disaster on the boat, I could have helped you bring it back to shore and tie it up to the dock. One of the interesting questions was, of course, what you do when you're out there on the water and there's another boat on a collision course with you. Typically, you don't have direct communication with the other boat. You're not on the radio. You can't just say, "Hey, I'm going to do this." You need to have some rules about how to behave in such a way that the two boats don't hit each other. And there are such rules. If you're literally coming at each other head on, then you're supposed to turn to the right. You're supposed to change speed and direction in a decisive way so that the other boat can read your implicit boat language, I guess.
0:01:15.8 SC: The point is it works very well, but the reason it works is not only because everyone, the pilots of both boats, knows the same rules, but because they know that each other knows the rules, right? So if I'm supposed to veer my boat to the right, that works because both boaters know the rule and each knows the other one is going to veer to the right, so there's coordination between them and everyone is perfectly safe. This is an example of what philosophers and game theorists call common knowledge. Common knowledge, as we'll talk about in the podcast, is a slightly misleading term. It doesn't just mean knowledge that lots of people have. It means knowledge that lots of people have and that they all know the others have. There's sort of an infinite regress: I know that you know it, and you know that I know you know it, and I know that you know that I know you know it, et cetera, et cetera.
0:02:09.7 SC: That's why philosophers love this kind of thing. It also leads to very interesting mathematical results that you can prove in the context of Bayesian reasoning in the presence of common knowledge. There's a very famous theorem called the Aumann Agreement Theorem that says, roughly speaking, that if you have two perfectly rational agents with common shared priors about a set of claims that could be true or false in the world (prior probabilities, prior credences), and they have some different data, so they reach different posterior probabilities, they've updated their credences differently, but then they talk to each other and just tell each other what their probabilities are after getting all this data, and each knows the other is perfectly rational, then they should instantly come to agreement. They should basically update their own credences on the basis of the fact that this perfectly rational person has updated theirs, and there is a right place to come to. Instantly is an exaggeration; there's no part of the theorem that says it has to be instant.
0:03:12.7 SC: There can be give and take. But according to Aumann, if you're both perfectly rational and you start from the same beliefs, you should not be able to agree to disagree. You shouldn't even be able to just disagree and maintain the fiction that both you and the other person are perfectly rational. It seems that we do this all the time, though, right? So it's an interesting question as to why people fall short of the assumptions of Aumann's theorem. Anyway, this whole collection of ideas about common knowledge is the subject of the new book by today's podcast guest, Steven Pinker, who presumably needs no introduction. As we discuss at the very beginning of the podcast, it's part of his overall project of better understanding human behavior. Humans, bless their hearts, are not perfectly rational creatures. But what exactly are the ways in which they fail to be rational? I am becoming increasingly impressed with the importance of the social aspect of how we both really do think rationally and also fall short of being rational. It's in dealing with other people that a lot of both our pros and cons come into play as thinking, cognitive creatures. So this is an exploration of one aspect of that. Let's go.
[music]
0:04:47.5 SC: Steven Pinker, welcome to the Mindscape Podcast.
0:04:49.3 Steven Pinker: Thank you.
0:04:50.2 SC: You have a book coming out called When Everyone Knows That Everyone Knows about common knowledge, and it's a great topic, I think. But it's a little, I guess, not what I would have expected for your next book to be. So I get it now that I've looked at the book, why you did it, but let's start by putting it into the context of a bigger project. I mean, do you think of yourself as having a big project with all of your technical work and also your books?
0:05:19.6 SP: I do. I'm interested in human nature, what makes us tick, and all of the implications of how we understand human nature. I trained as a cognitive psychologist, and so the subject matter is how people think. And so how people think about how people think about how people think is in some ways a natural extension. It's also an extension that came about in particular through my interest in language, where one of the basic facts, known in linguistics for many decades, is that even after we've worked out what all the rules of grammar are and what all the meanings of words are, and there could be an algorithm that could deduce the meaning of a sentence from the meanings of its parts and how they're arranged according to those grammatical rules, in practice, people don't mean what they say. They beat around the bush, they use euphemism, they use innuendo. "If you could pass the salt, that would be awesome." The meaning of that is not that if you could pass the salt, that would be awesome. The meaning is: give me the salt. Or, "Nice store you got there, it would be a real shame if something happened to it."
0:06:36.0 SP: "Do you want to come up and see my etchings?" "Gee, officer, is there some way we could settle this ticket here without going to court and doing all that paperwork?" "We're counting on you to show leadership in our campaign for the future," which you've probably heard at fundraising dinners. So with all of these examples, one of the reasons it took so long to get AI to understand language is that if you simply give it the algorithms for figuring out who did what to whom based on the rules of grammar and the meanings of words, it will misjudge people's intentions. If you say to a chatbot, "Can you tell me how to get to Harvard Square from here?", literally it would say, "Yes, I can tell you how to get to Harvard Square from here," but that's not what the user wants. The user wants it to just give the answer. Anyway, a puzzle that I raised in a previous book, The Stuff of Thought: Language as a Window into Human Nature, had a chapter I call The Games People Play, that is, all of the rituals that we go through to avoid saying exactly what we mean in so many words.
0:07:48.4 SP: The solution is one that I proposed there, then built on in my own empirical research in cognitive and social psychology, and then in a chapter called Weasel Words in the new book. The idea is: what's the difference between an innuendo that everyone understands and blurting it out? And I say the difference is generating common knowledge. That is, if he says to her, "Do you want to come up for coffee?" and she says no, she's a grown woman; she knows what "Would you like to come up for coffee?" means. There's no plausible deniability of the intention. And he's a grown-up; he knows what she just turned down. But does he know that she knows that he knows? He could still think, well, maybe she thinks I'm dense. And she could think, well, maybe he thinks I'm naive. And so even though there's no plausible deniability of the message, there is plausible deniability of common knowledge of the message. Add to this the claim that our social relationships are ratified by common knowledge, that is, two people are friends and each one knows that the other one knows that they're friends.
0:09:01.9 SP: Two people are lovers. Two people are in a position of authority and deference. Two people are transaction partners. All of these everyday social relationships exist because each party knows that the other one knows that they exist. We often try to avoid common knowledge in order to preserve the relationship that we have. We don't threaten it, but we want to get the message across. Anyway, this is a long-winded answer to a simple question of why did you write this book? And the short answer is that my interest in communication and language led to me stumbling on this very rich concept of common knowledge. It had been explored by logicians, philosophers, economists, game theorists. There was a lot in there. And so it was worth a book, so I wrote the book.
0:09:54.4 SC: Well, and I think it's another example, just for me as the physicist doing a podcast, of there's a message that comes across over and over again. I think we've all been told in various ways that human beings are less rational than we'd like to think we are. We have biases and things like that. But what I'm impressed by is how many people are telling me the ways in which human beings reason and communicate and talk are so very, very social. They're not just things we would have invented if we were on a desert island all by ourselves. And common knowledge sounds like a very relevant example of that.
0:10:32.7 SP: Well, indeed, common knowledge, one could argue, I do argue, is the reason that we can be social in the first place. Namely, that common knowledge is necessary for conventions: driving on the right or driving on the left, respecting a leader or department chairman or an expert, respecting paper currency, which, you know, what's the value in a green piece of paper? The value is that I know that other people treat it as having value, which they only know because they in turn know that other people treat it as having value. So all of these means of being social, conventions, but also, as I mentioned before, informal social relationships and actions that we cooperate on, where we accomplish something collectively that we couldn't accomplish individually, they depend on common knowledge, on being on the same page. And that, I suggest, is why we evolved language. Language probably co-evolved with sociality. Language makes a lot of social coordination possible. Language depends on social coordination. You have to be in a cooperative relationship to exchange words in the first place.
0:11:45.0 SC: And this reminds me of a podcast I recently did with your Harvard colleague, Cass Sunstein. He wrote a book on liberalism, and we had to spend the first five minutes explaining what he meant by the word liberalism. So we're batting around this idea called common knowledge, but it's not just simultaneous knowledge. It's a little bit deeper than that.
0:12:07.0 SP: Yes, although a simultaneous kind of announcement, a revelation event, is the quickest way to generate common knowledge. And it resolves something of a paradox: if you need common knowledge to coordinate, to be on the same page, and if common knowledge literally consists of the state where I know something, you know it, I know that you know it, you know that I know that you know it, which makes your head hurt, how do people get the common knowledge that they need to coordinate? The answer is that if something is public, conspicuous, self-evident, out there, that can generate common knowledge in a stroke. Not always, but that's the surest way to do it. In general, with words, every word is a convention. Shakespeare said a rose by any other name would smell as sweet, but we can use the word rose to convey the concept of a rose because everyone follows that convention. We can count on it. When we learn the word rose, we don't then have to poll every person we meet as to whether they understand it the same way. That's just a tacit assumption that kids have to make in order to learn to speak and that we have to make in order to use language.
0:13:18.3 SP: Sometimes, though, as in the case of what you actually mean by liberalism, it's not foolproof, especially when it comes to abstract, esoteric concepts, concepts whose common understanding may be relative to the community that uses them, or cases in which the common understanding changes. Language changes all the time. No one decides, no one legislates the meaning of words. It's a kind of grassroots phenomenon where if people start interpreting a word or using a word differently, that is the meaning of the word. The meaning of the word is common knowledge of what it means. In this case, you didn't have common knowledge with Cass Sunstein, and so you had to stipulate it in so many words. You asked him, hey, what do you mean by liberalism? He said, my definition of liberalism is blah, blah, blah, blah, blah. Sometimes we have to do that. That's not the typical way in which we use words.
0:14:14.3 SC: You'll be unsurprised to learn that on social media, various people reacted to the podcast episode just on the basis of the title without actually listening to the definition of what the words were.
0:14:24.1 SP: Well, liberalism, as we know, has different meanings on different sides of the Atlantic and [0:14:29.9] ____ do you whether Spotify...
[overlapping conversation]
0:14:27.8 SC: He includes Ronald Reagan as a liberal in his definition, so you have to explain why that is. It rubs people the wrong way. But I guess, I mean, maybe simultaneous was the wrong word. I'm just trying to highlight the definition so that we're super duper clear for the audience. It's not about everyone knowing something. It's about knowing that everyone knows something, et cetera.
0:14:49.0 SP: That's what the book is about, that difference. That is, universal private knowledge is not the same as common knowledge, at least in this technical sense of common knowledge. Now, you and I right now, as with you and Cass, have to clarify what this specialized meaning of the word refers to. I didn't invent this usage; in fact, I don't even like this usage, but I'm kind of stuck with it. In the technical sense in which philosophers, logicians, game theorists, and economists use the term, it refers to the case where not just everyone knows something, but everyone knows that everyone knows that everyone knows that everyone knows it.
0:15:29.0 SC: Let's get into the psychological aspects or cognitive science aspects of this. This is, I guess, your home turf. How do we know that some knowledge that we have is common knowledge? I mean, both sort of informally and rigorously. Is it even possible to know all those levels of I know that they know that they know that I know?
0:15:49.6 SP: Well, so I have a chapter in the book on that very topic called Reading the Mind of a Mind Reader. And as I hinted at earlier in our conversation, most of the time the common knowledge is granted by a conspicuous or self-evident event: something that happens in a public place, where you not only see it, but you see everyone else seeing it and they can see you seeing it, or something that's blurted out within earshot of everyone else. Something being obvious and conspicuous, that's the typical route to common knowledge. We can, in some circumstances, engage in the process that I call recursive mentalizing, where to mentalize means to get inside someone's head. To recursively mentalize means to get inside the head of someone who's trying to get inside your head or someone else's head. So sometimes you think, oh my goodness, he's probably thinking that she's probably thinking. Carry that to the limit and we get common knowledge. So an example would be, say, a rumor that a bank might be in financial trouble. And so you think, well, gee, if I had reason to think that, probably other people do too, and they probably are thinking that other people do, and they're going to withdraw their money because they're afraid that other people will withdraw their money, if only out of fear that still other people will withdraw theirs.
0:17:26.6 SP: I'd better withdraw my money while there's still money to withdraw, because the bank can't cover the deposits of everyone all at the same time. And so you get a bank run. And the bank run didn't begin with a conspicuous signal that the bank is experiencing a run or that the bank is in trouble. It comes from an interplay between some bit of news that leaks out and what you then start to extrapolate about what other people might think. There's probably a better, more everyday example, because bank runs don't happen very often anymore. There was one a few years ago at Silicon Valley Bank that got a lot of attention. By the way, the reason that banks don't suffer from runs anymore is that Roosevelt set out to solve the problem of bank runs, which are generated by common expectation, that is, people worrying about other people worrying about other people worrying. In the midst of the Great Depression, triggered by a cascading series of bank runs, he first declared a bank holiday where no one could withdraw anything. That was kind of a nuisance, but it was really a good thing, because you didn't have to worry about other people withdrawing their money.
0:18:38.3 SP: And then came federal deposit insurance, where a bank has a big gold seal emblazoned on its window that says, our deposits are insured. The purpose of that seal is not just to reassure people that their deposits are insured, but to reassure them that other people know that they're insured, so it's less likely that the bank will fail. Before there was deposit insurance, Roosevelt's solution to the problem of bank runs, banks would often flaunt their assets with conspicuous opulence. Even in small towns, the banks were made of marble, they had gold lettering, they had spacious lobbies. This was considered something of an insult to many working people; there's an old folk song from the Weavers, sardonically singing that the banks are made of marble. But the banks weren't just showing off to flaunt their wealth and insult everyday miners and farmers.
0:19:39.6 SP: They were trying to generate the common knowledge that we have enough assets that you don't have to worry about your deposits evaporating because everyone else withdraws money before you do. But anyway, this was a bit of a digression on why in general we don't have that many bank runs anymore. We do still have hoarding, such as during COVID, when people hoarded toilet paper because they thought there'd be a shortage of toilet paper, which they then caused by hoarding it, even though there hadn't been a shortage in the first place.
0:20:12.2 SP: It's another case of common expectation, where there we really do engage in recursive mentalizing. No one ever said, go out and buy toilet paper, it's in short supply. People just had to think in their mind's eye of other people grabbing toilet paper because they were worried about it, and then that snowballed into the common knowledge that there's a shortage.
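The "fear about fear" dynamic behind bank runs and toilet-paper hoarding can be sketched as a simple threshold cascade. This is an illustration in the spirit of Granovetter-style threshold models, not anything from the episode: each depositor withdraws once they believe a large enough fraction of others will withdraw, and beliefs about withdrawals are iterated to a fixed point.

```python
def run_fraction(thresholds):
    """thresholds[i] is the fraction of withdrawers it takes before
    depositor i withdraws too (0.0 = withdraws on the rumor alone).
    Iterate beliefs about withdrawals to a fixed point and return
    the final fraction of depositors who withdraw."""
    n = len(thresholds)
    frac = 0.0
    while True:
        # everyone whose threshold is already met joins the run
        new = sum(1 for t in thresholds if t <= frac) / n
        if new == frac:
            return frac
        frac = new

# One unconditionally nervous depositor can tip everyone into a run:
print(run_fraction([0.0, 0.1, 0.2, 0.3, 0.4]))  # 1.0: a full bank run
# With no one willing to move first, the same rumor fizzles:
print(run_fraction([0.2, 0.2, 0.2, 0.2, 0.2]))  # 0.0: no run at all
```

The point of the toy model is the one Pinker makes: whether a run happens depends less on the bank's actual assets than on the distribution of expectations about other people's expectations.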
0:20:36.6 SC: And I guess, maybe this is too simplistic, but I'm guessing that all sorts of financial bubbles follow a similar pattern. Maybe NFTs, non-fungible tokens, relied on the common knowledge that these would still be valuable to people someday, that then went away.
0:20:52.8 SP: That's right. Technically common expectation...
0:20:56.3 SC: Common expectation, yeah.
[overlapping conversation]
0:20:55.9 SP: There's nothing yet to know, but yes, it's the same phenomenon. Indeed, that's what bubbles are, and runs, and crashes, and panics. John Maynard Keynes had an explanation of all these phenomena in finance and economics that can't be explained by the standard rational-actor models of supply, demand, investment, and so on. He was talking about speculative investing, which is what generates these bubbles and crashes, where you don't buy something based on the underlying value of the asset, like someone built a factory, the factory is going to produce so many widgets per year, they're going to make so much profit per widget, and I'm going to get my share of those profits. That's kind of the way stock markets ought to work, rationally, but we know that they don't. To explain why, Keynes asked people to imagine a beauty contest. He actually claimed that such a contest ran in the British papers at the time, which is dubious, but the object is not to pick the prettiest face.
0:22:08.0 SP: We used to have Miss Rheingold, probably before most of us were born, a beer ad where there were six models and you had to pick the prettiest. No, in this contest, you had to pick the face that the most other people picked as the prettiest, knowing that they were each picking a face while trying to outguess everyone else. He didn't use the term recursive mentalizing, but that's what he was describing. He said it would often involve the second, third, and fourth orders of anticipation of anticipation of anticipation. And mathematically, that can lead to runaway behavior, when people want to be in on an appreciating stock, which makes the stock appreciate, which makes more people want to be in. Sometimes this is called the greater fool strategy of investing. That is, you buy something in the hope that you can sell it at a profit to someone else. Why would anyone else buy it at more than you paid for it? Well, they're hoping that they can sell it to someone else at even more than they paid for it.
0:23:14.7 SP: But soon enough, the market runs out of greater fools, or whatever rumor or common-knowledge-generating salient event triggered the bubble gets contradicted by a bit of news that causes reverberant fear, fear about fear about fear, and then the bubble can pop. So a lot of the phenomena in finance that don't depend on fundamentals, the irrational exuberance, as Alan Greenspan put it, the crashes, the runs for the exits, are phenomena of common expectation.
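Keynes's beauty contest is often formalized as the "guess 2/3 of the average" game, and the layers of anticipation Pinker mentions correspond to what behavioral economists call level-k reasoning. A minimal sketch (the game and the anchor value of 50 are standard textbook choices, not from the conversation):

```python
def level_k_guess(k, anchor=50.0, factor=2/3):
    """A level-0 player guesses the anchor (say, the midpoint of 0-100).
    A level-k player best-responds to a crowd of level-(k-1) players,
    so each extra layer of "they think that I think..." multiplies
    the guess by the target factor."""
    guess = anchor
    for _ in range(k):
        guess *= factor
    return guess

for k in (0, 1, 2, 5):
    print(k, round(level_k_guess(k), 2))
# Deeper and deeper recursion drives the guess toward 0, the only
# guess consistent with common knowledge of everyone's rationality.
```

Real contestants typically reason only two or three levels deep, which is exactly Keynes's "second, third, and fourth order of anticipation."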
0:23:51.5 SC: And I don't want common knowledge to get a bad reputation here from all these examples of financial ruin in our future. But there are all these, I don't know what to call them, puzzles, logic games, examples in your book and elsewhere that try to illustrate this phenomenon of common knowledge. And I'll confess, despite being pretty good at math and logic, these are almost never illuminating to me. I think the one that came closest was a cartoon you had in the book of three logicians walking into a bar. Is that something that is explicable in real time?
0:24:25.2 SP: Yes. So that doesn't literally involve common knowledge, but it does involve recursive mentalizing, that is, thinking about what other people think. As I recall the cartoon, the caption is three logicians in a bar. The waitress comes over and says, "Does everyone want beer?" And the first one says, "I don't know." The second one says, "I don't know." The third one says, "Yes." So that's a logic puzzle, and you can figure it out. "Everyone wants beer" is true if each one of them wants beer, and it would be false if anyone didn't want beer. So if the first one says "I don't know," she must want beer, because if she didn't want beer, she would know that "everyone wants beer" is false. The fact that she didn't say it's false means that she did want beer. The second one goes through the same logic. The third one, knowing that the first one didn't know and the second one didn't know, now knows that the first one wants beer and the second one wants beer. So if she herself wants beer, then she can say yes. She figured out a state of affairs from the epistemic or cognitive state of the other characters in the bar.
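The logicians' answers can be checked mechanically. A short sketch (my own encoding of the puzzle, under the assumption that each logician answers in turn, knowing only their own preference plus the answers already given):

```python
def answers(preferences):
    """Answers, in order, to the waitress's "Does everyone want beer?"
    preferences[i] = True means logician i wants beer."""
    out = []
    for i, wants in enumerate(preferences):
        if not wants:
            out.append("no")            # one refusal falsifies "everyone"
            break
        if i < len(preferences) - 1:
            out.append("I don't know")  # wants beer, can't speak for the rest
        else:
            out.append("yes")           # every "I don't know" revealed a yes
    return out

print(answers([True, True, True]))   # the cartoon's sequence, ending in "yes"
print(answers([True, False, True]))  # the second logician settles it with "no"
```

Each "I don't know" is informative precisely because a logician who didn't want beer could answer "no" immediately.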
0:25:50.0 SP: So that's a pretty... It's intuitive. I mean, you might have to think it through for another couple of seconds. It is similar, not isomorphic, but the way of solving it is similar, to what's been called the world's hardest logic puzzle. And this one really is counterintuitive until it's explained, and then it really does make sense. It goes by various names, the muddy children problem, the barbecue sauce problem. I describe it in terms of a bunch of academics at a conference, some of whom have spinach in their teeth, but none of whom knows whether they themselves do. And they deduce it, in this case they really do deduce it, from common knowledge. I can run through it if you're...
0:26:33.6 SC: Yeah, I'm torn. Why don't you run through it just to illustrate either to the audience that it's harder than it looks or they're much smarter than me. I'm happy to... Because I can get it if I sit down with a piece of paper and think about it, but it doesn't really illuminate right away to me.
0:26:49.4 SP: Same here. So here's the problem. You've got a bunch of psychologists or academics at a conference in the dining room. Some of them have spinach in their teeth, but there aren't any mirrors around. No one wants to pick their teeth clean if they don't have spinach in their teeth, and everyone's too polite to point out that someone else does. But the department chair, who's presiding over the meeting, can't stand it any longer. At the front, she gets up and she says, at least one of you has spinach in your teeth; every time I clink the glass, that's an opportunity for you to clean your teeth. Okay. She clinks the glass once; no one moves. She clinks the glass twice; no one moves. She clinks the glass a third time, and the three people in the room who have spinach in their teeth all clean their teeth. They didn't do it the first time, they didn't do it the second time, but the third time, they all did. How did they know? Again, there are no mirrors and no one's telling anyone else. And here's the explanation. It's a question of a kind of mathematical recursion.
0:28:03.3 SP: That is, if you see the logic for one and then for two, then you can extrapolate and say, well, it applies to any number. So here's the way it works with one. With one, it's really easy. Let's say the state of affairs, the ground truth, is that one academic has spinach in his teeth. The department chair says, at least one of you has spinach in your teeth; when I clink the glass, you can clean it. So everyone looks around. The guy with spinach in his teeth looks around and sees that no one else has spinach in their teeth. Since someone has to have spinach in their teeth, he knows it has to be him. So that's easy; that's kind of obvious. Now let's go to the case where two people have spinach in their teeth. Again, the department chair makes the same announcement. Everyone looks around. So a person with spinach in his teeth sees someone else with spinach in her teeth. He still doesn't know about his own teeth, because all the chair said was "at least one of you." So he doesn't know whether to clean his teeth or not.
0:29:10.7 SP: Now she, seeing pretty much the same thing he does, also doesn't know whether to clean her teeth because she doesn't know whether he's the only one. So the first clink of glass, they don't do anything. Now she clinks the glass a second time. Now each one can think, well, geez, if she was the only one, then on the first clink of the glass, she would know to clean her teeth because she would look around and see everyone else's teeth are clean. She knows she has to be the one. She didn't. So therefore, she must have seen someone with spinach in their teeth. I'm looking around. No one else but her has it. It must be me. She thinks the same thing. And so on the second clink, they both know that they have to clean their teeth. That's the logic. If you accept that, then you also realize that three people with spinach in their teeth will clean it on the third clink. If the room has 100 people and 17 have spinach in their teeth, and assuming they're logicians, then they'll all clean their teeth on the 17th clink. But that crucially depends on common knowledge.
0:30:16.8 SP: It wouldn't work if the department chair went over and whispered something in everyone's ear. And if you didn't know that everyone else knew, then the fact that the woman with the spinach in her teeth didn't clean her teeth would not convey the information that she saw someone else with spinach in their teeth. So it crucially depends on common knowledge. Anyway, that's the world's hardest logic puzzle, allegedly.
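The induction generalizes cleanly: with k spinach-toothed guests, they all clean on the kth clink. A small simulation of the reasoning (my own sketch; the deduction rule in the loop encodes the inductive argument above):

```python
def clink_of_cleaning(spinach):
    """Simulate the spinach puzzle. spinach[i] = True means guest i has
    spinach in their teeth. Returns the clink number on which all the
    spinach-toothed guests clean their teeth."""
    assert any(spinach), "the chair's announcement must be true"
    n = len(spinach)
    cleaned = [False] * n
    clink = 0
    while not all(cleaned[i] for i in range(n) if spinach[i]):
        clink += 1
        for i in range(n):
            if not spinach[i] or cleaned[i]:
                continue
            others = sum(spinach[j] for j in range(n) if j != i)
            # Inductive rule: "I see `others` guests with spinach. If I
            # were clean, they would have cleaned on clink `others`. They
            # didn't, so on clink `others + 1` I know it's me too."
            if clink == others + 1:
                cleaned[i] = True
    return clink

print(clink_of_cleaning([True, False, False, False]))  # 1
print(clink_of_cleaning([True, True, True, False]))    # 3
print(clink_of_cleaning([True] * 17 + [False] * 83))   # 17
```

The simulation also shows why the public announcement matters: the rule "if I were clean, they would have cleaned already" is only licensed when everyone knows that everyone heard the same announcement.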
0:30:41.9 SC: It's pretty hard. Maybe not the world's hardest, but it's unrealistic to think that, having been in faculty meetings, that all the rest of the faculty would be perfect logicians in quite that way. But it is realistic to think that the department chair would be annoying enough not to just say, three people have spinach in their teeth...
0:31:00.5 SP: And you could just count.
0:31:02.2 SC: Would have made it a lot simpler, but okay.
0:31:03.2 SP: It works better if the academics are logicians.
0:31:07.0 SC: Very, very much, yes. And they've probably heard puzzles like this before. I mean, there's a similar counterintuitive result that maybe is a little bit more profound, but it's really at the heart of this whole game: Aumann's Agreement Theorem, which is a very trivial kind of thing to prove, but with a conclusion that makes you think, that can't possibly be right. So why don't you explain that to us?
0:31:31.8 SP: Yes. So this is a theorem that reasonable people cannot agree to disagree. I mean, that's not exactly right. Anyway, it's a theorem proved by an Israeli mathematician, Robert Aumann. He won a Nobel Prize, not for this. I guess that happens with Nobel Prize winners; the ideas that [0:31:52.3] ____ they don't win Nobel Prizes for are ingenious. So this is very simple; the whole paper is three pages. And he says the idea is simple, but it's not intuitive. Understatement. So here's the theorem. Take two rational agents with the same priors, in the Bayesian sense of their credence in a hypothesis before they've even looked at the evidence, that is, based on their entire understanding of the world, everything that they've discovered so far, who then make their posteriors common knowledge. That is, after looking at the evidence, and each one can look at different evidence.
0:32:33.2 SP: They don't have to see each other's evidence. One just announces, my estimate is 0.7 that this hypothesis is true, and the other one announces her posterior. Those posteriors must be the same. That is, they cannot agree to disagree. Now, what's surprising about it? There's a less surprising version, which is that if they're completely rational, and if each of them shares the evidence that motivated their posterior, their conclusion, then you might say, well, you know, if he's rational, I've got to trust him, and there's no reason [0:33:15.3] ____ shouldn't take his evidence seriously just because it's his evidence and not my evidence.
0:33:19.6 SP: Evidence is evidence and it's not about me. It's not about him. So that would be a little more intuitive, if you swallow the assumptions that they share the same priors and they're both completely rational. The kind of surprise in it is they don't have to actually share their evidence. They just have to share their posterior, that is, their assessment of what the evidence means, and those posteriors must be the same. Now, one way to think about it, this isn't actually how the theorem goes, but it was worked out by later logicians, is you can imagine one of them announcing her posterior. That is, I think there's a 0.7 probability that the hypothesis is true. The other one announces his posterior. I actually only have a 0.4 degree of confidence that it's true. Then the first one will say, oh, well, if you say it's 0.4 and I say it's 0.7, I'm gonna now update my posterior. Here's my new posterior. And then he updates his, and they end up in the same place. So the idea is they have to end up in the same place if they're both rational, even if neither one gives the basis for the conclusion.
0:34:37.6 SP: The surprise being that if they do it that way, they don't gradually converge and meet somewhere in the middle, which is kind of how we expect arguments to go. Their positions are random walks that end up in the same place, but could go every which way. They could leapfrog each other. They could outdo each other. They could go from a moderate position to an extreme position until the final step in which they end up at the same conclusion. It's a little bit like the spinach and the teeth problem in that... It's only on the third clink that suddenly everyone comes to the same realization. Now this sounds kind of absurd. Isn't it good to agree to disagree? And in a rational argument, don't you meet somewhere in the middle? But it forces us... Like all mathematical theorems, it's only as valid as its premises are true. And sharing priors itself raises a whole bunch of questions, which I...
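The stepwise exchange Pinker describes was later formalized by Geanakoplos and Polemarchakis, and it is small enough to simulate. A minimal sketch in Python, with a uniform prior and an invented six-state example (the partitions and the event are made up for illustration, not taken from Aumann's paper):

```python
from fractions import Fraction

def posterior(event, cell, K):
    """P(event | cell ∩ K) under a uniform prior (None if impossible)."""
    live = cell & K
    return Fraction(len(live & event), len(live)) if live else None

def aumann_dialogue(event, part_a, part_b, true_state, max_rounds=20):
    """Iterated public exchange of posteriors between two Bayesian agents
    with a common prior. Each agent sees only which cell of their own
    partition the true state falls in; K is the set of states consistent
    with everything announced so far."""
    cell_of = lambda part, w: next(c for c in part if w in c)
    K = set().union(*part_a)          # initially, every state is possible
    history = []
    for _ in range(max_rounds):
        qa = posterior(event, cell_of(part_a, true_state), K)
        qb = posterior(event, cell_of(part_b, true_state), K)
        history.append((qa, qb))
        if qa == qb:                  # agreement reached
            break
        # Each announcement rules out the states in which the speaker
        # would have announced a different posterior.
        K = {w for w in K
             if posterior(event, cell_of(part_a, w), K) == qa
             and posterior(event, cell_of(part_b, w), K) == qb}
    return history

# Invented example: six equiprobable states, the event is {1, 4, 5}.
trace = aumann_dialogue(
    event={1, 4, 5},
    part_a=[{1, 2, 3}, {4, 5, 6}],      # agent A's information partition
    part_b=[{1, 2}, {3, 4}, {5, 6}],    # agent B's information partition
    true_state=1)
```

In this example A starts at 1/3 and B at 1/2, and after two rounds of announcements both land on 1/2, without either agent ever revealing the raw evidence behind their number: each announcement prunes the states in which the speaker would have said something different.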
0:35:41.6 SC: Oh my goodness, yes, yeah.
0:35:45.0 SP: The reason that I discuss this, like the spinach and the teeth problem, even though they are sort of esoteric mathematical problems, is that I think they do have implications. So in the case of argument, when you think about it, why should two people meet in the middle? Who says that the truth has to lie halfway in between the opinions of two guys? I mean, what guarantee is there that they'd be straddling opposite sides of the truth? Likewise, why should you privilege your own assessment over anyone else's, on the charitable assumption that they're as rational as you are? Now, of course, I think I'm more rational than everyone else, but I would, wouldn't I?
0:36:31.0 SC: Everyone else thinks that too.
0:36:32.1 SP: Everyone else thinks that too, right? So there really is no reason to privilege your own assessment, on the assumption that other people, like you, are rational. And a final implication is that... And this is a little bit fanciful, but a linguist, George Lakoff, and a philosopher, Mark Johnson, in a famous little book they published 45 years ago called Metaphors We Live By, noted that language contains lots of metaphors that we don't even realize are metaphors, which allow us to talk about abstract concepts in concrete terms. And one of the metaphors they discuss is that argument is war. I demolished his position, he tried to defend it, but I found the weak spot. We use the language of war in talking about arguments. And just as a kind of whimsical thought experiment, Lakoff and Johnson say, well, do we have to think about argument as war? Why don't we think of it as a dance? And as it happens, the sequence of reaching agreement in Aumann's construction is in some ways more like a dance than like a battle. That is, it's a random walk, and so you can lurch and weave and bob all over the place before arriving at agreement.
0:37:49.8 SP: So this esoteric mathematical theorem might actually have some insight. And again, just to tie it to implications we ought to draw, you know and I know that probably a lot of arguments among academics, among politicians are kind of pissing contests. It's like, who's going to win? Often people use dirty debating tricks. They set up a straw man. They look for a loophole that the person just neglected to mention. It's not the best way of arriving at the truth to make it a combat sport because the truth is the truth. It doesn't care about your ego. If all you're doing is trying to win, that's not the same as trying to get to the truth. And so Aumann's theorem in some ways is an exercise in humility, in epistemic humility, and might press back against the bad habit that we have of seeing an argument as something that we want to win.
0:38:48.1 SC: Well, yeah, it's an ideal theory kind of thing, right? It's not a prescriptive, or sorry, it's not a descriptive kind of thing. It's more like this is what we should aim for.
0:38:56.7 SP: Exactly, and that's exactly the way I present it.
0:38:59.3 SC: And just to make the assumption super clear, because like you say, the theorem is as good as its assumptions, and the conclusion that two rational people can't disagree, can't agree to disagree, is that...
0:39:12.6 SP: If they have the same priors, of course.
0:39:13.6 SC: Right, so if they have the same priors, if they're both perfectly logical, and I think if they both agree that each other are perfectly logical, that's common knowledge, right?
0:39:23.3 SP: That is totally right, that not only do the posteriors have to be common knowledge, but each other's rationality has to be common knowledge. You're right, which by the way, also applies to the spinach and the teeth problem.
0:39:35.3 SC: Right.
0:39:36.0 SP: That is, rationality has to be common knowledge, and in general, in game theory, almost everything depends on a background assumption that the parties are rational, and that their rationality is common knowledge. I mean, that's how you psych out the other person, you assume that they're rational.
0:39:52.2 SC: And I guess that my... Without any data, and maybe you have some data to share with us, but I'm guessing that the fact that people do disagree is sort of half because they have different priors, and half because they're just not convinced the other person's being rational.
0:40:08.5 SP: Yes, I think that's a large part of it, and I discuss a paper by Tyler Cowen and Robin Hanson, where they look at the contrast between ideal argumentation and real argumentation, and ask, why don't we behave like the rational agents in Aumann's theorem? And they suggest that people are... They say something that's kind of a commonplace to any social psychologist, which is that people are kind of dishonest in the sense that they don't approve of other people bending the evidence in their favor, setting up the straw man, but they do it themselves.
0:40:50.9 SC: And is there a feeling that, given this theorem, et cetera, that maybe this can inspire us to take more seriously the opinions of others? I'm sort of thinking of, there's a common move you get in the media or social media where people will say, oh, look, I said this, and everyone disagrees with me, I must be onto something, which is sort of the opposite of what Aumann would have us believe.
0:41:15.1 SP: Well, it's the opposite of, yeah, a lot of Bayesian thinking in general. And actually, I think, by the way, this is something that I worked out in my book, Rationality, that I think that the bias in science journalism, but probably in science itself, to favor the paradigm-threatening discovery, going against conventional wisdom, overturning the consensus, the rebel, the upstart, is probably responsible for a lot of error, because what it does is... And science magazines love this stuff, the clickbait is, was Darwin wrong? Was Einstein wrong? And the reason that it's a recipe for error is that it's very un-Bayesian. It's throwing out the priors. It's treating the latest little tidbit of evidence as if it was reason to change your entire understanding. Whereas if there was some reason for a consensus, for the textbook view, sometimes denigrated as the dogma, well, probably a lot went into that.
0:42:24.7 SP: That's your prior. Maybe if there's a contradictory bit of evidence, you should update and decrement your confidence a little bit. But you shouldn't throw everything out the window and just assume that the result of the experiment announced this morning is the truth. I think that's one of the reasons why we've had a replicability crisis, that the journals themselves, but also the science journalism, give undue weight to the particular discovery and downplay the prior. There's one physicist, I'd never heard of him, I think his name is Ziman, who said that... And you might disagree with this, this may be a bit of an exaggeration. He says, 90% of what's in the journals is false; 90% of what's in the textbooks is true.
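Pinker's prescription, decrement your confidence rather than throw out the prior, is just Bayes' rule. A toy calculation, with all of the numbers invented for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule, from the prior P(H)
    and the likelihood of the evidence under H and under not-H."""
    joint_h = prior * p_e_given_h
    return joint_h / (joint_h + (1 - prior) * p_e_given_not_h)

# Strong prior for the textbook consensus H (illustrative number):
prior = 0.95
# One surprising study, three times likelier if the consensus were wrong:
posterior = bayes_update(prior, p_e_given_h=0.2, p_e_given_not_h=0.6)
# Confidence slips from 0.95 to about 0.86: updated, not overturned.
```

Even a result three times likelier under "the consensus is wrong" only drags a 0.95 prior down to roughly 0.86; it takes several independent surprises, not one morning's press release, to sink a well-supported consensus.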
0:43:12.9 SC: I believe that. I get it. I get the spirit of it anyway.
0:43:17.6 SP: Yes. And there's a famous epidemiologist, John Ioannidis, who published a scandalous paper about 20 years ago, Why Most Published Research Findings Are False. And often the reason is that if you just confirm the consensus, you don't get a publication out of it.
0:43:35.7 SC: Exactly. But I'll be honest, I've often struggled with this question of what to do with consensuses. Because on the one hand, you make progress by showing the consensus isn't right, right? On the other hand, the consensus is usually right. So you don't want to sort of default 100% to either side. And I think that...
0:43:57.8 SP: No, no, that's right.
0:43:58.7 SC: Yeah, Aumann's theorem notwithstanding, the real world implementation of it is tricky.
0:44:05.2 SP: No, indeed. I think what we can say is that there is a widespread tendency in science journalism, but also in science journals and among scientists, to over-update in response to a single data point. I think that's probably a bad habit. Going back to Aumann and what does it mean? You've heard of the so-called rationality movement, which is, you know, how could that be a movement? Aren't we all supposed to be rational all the time...
[overlapping conversation]
0:44:39.0 SC: [0:44:39.1] ____ irrationality movement on the other side?
0:44:40.8 SP: Yes. So the rationality movement, it has sort of an official headquarters in Berkeley, is the attempt to call attention to the fallacies and biases that cognitive psychologists and behavioral economists have documented, and to try to overcome them, often with Bayesian reasoning. Sometimes the rationality community is called Bayesians. They have kind of canons of argumentative hygiene, or best practices, which include things like stating your degree of credence, your posterior, as a quantity between zero and one; instead of saying, you know, this is right, that's wrong, saying I have about 0.7 confidence in this. Steel manning rather than straw manning your opponent, that is, don't set up a straw man that's easy to knock down, but argue against the strongest version you can imagine. Engaging in adversarial collaborations, where you get together with your worst enemy and decide a priori what would settle the issue to both of your satisfactions and then go out and gather the data. All of these practices are kind of in the spirit of Aumann, so much so that the conference center in Berkeley that was set up as kind of the home for rationality conferences named its main meeting room Aumann Hall.
0:46:08.6 SC: No one comes out disagreeing. That's great. I'd like to do that experiment. Okay, so good. I'm glad that the audience was kind enough to go with us on this little journey of logic and formalization and Bayesian reasoning. But one of the fun parts of your book is sort of demonstrating the implications of common knowledge in our everyday life, right, as human beings. And so you draw an interesting distinction between cooperation, which does rely on common knowledge, and coordination, which also does, but in a sort of more central way, maybe.
0:46:44.0 SP: Yeah, so I mean, admittedly, these are somewhat specialized usages, but cooperation, as it's been discussed in evolutionary biology, and to some extent in economics, in experimental economics, usually refers to the case where one person, or one animal for that matter, confers a benefit on another at a cost to itself. And it's a weighty scientific problem, because one could ask how could cooperation, in particular, how could altruism, probably a better term, ever have evolved, given that all things being equal, you'd expect that natural selection would favor selfishness. And since Richard Dawkins' book, The Selfish Gene, there's been a lot of discussion on how cooperation can evolve through reciprocity, through reputation, and so on. But what I came to realize is that a lot of cases of organisms conferring benefits on one another are not altruistic, in the sense that one of them incurs a cost, raising this puzzle, but are mutualistic. That is, everyone wins. So in the case of a bird that picks ticks off the back of an ox, the bird doesn't have to be repaid. The ox doesn't have to pick ticks off the back of the bird, even if it could.
0:48:07.3 SP: But the bird gets a meal, the ox gets fewer pests, and everyone wins, except the ticks.
0:48:15.6 SC: Don't get a vote.
0:48:17.2 SP: And so these are cases that biologists call mutualism. And a lot of human working together is not altruistic, but it's mutualistic. Both parties win. A potluck dinner, you bring complementary courses. You're not doing someone a favor by not bringing dessert while they bring dessert. Both of you end up better off that way. Two people meet for a coffee date. It's important that they both pick the same place, but neither one is doing the other a favor by going to that place. They both want to end up in the same place. The reason that mutualistic coordination is also a scientifically interesting problem is not the danger of being exploited, as in altruistic cooperation, where I keep getting all the goodies but never repay them when my turn comes. The problem in coordination is one of knowledge, namely, how do you end up on the same page, given that it isn't enough to know what the other guy's going to do. You have to know the other guy knows what you're going to do, and so on and so on and so on. So the logical problem of coordination requires common knowledge as its solution, in turn raising the question for a psychologist, how do people attain common knowledge?
0:49:43.5 SC: You know, this is probably petty of me, not even petty, because I have no dog in the fight, as it were, but as soon as you mentioned Richard Dawkins and the selfish gene, I thought of a few years ago there were these debates about the origin and explanations for altruism, et cetera, centered on kin selection versus group selection. I'm sure you were aware of that at the time. They were not perfect examples of Aumannian rationality at work, I thought.
0:50:13.3 SP: Perhaps not.
0:50:14.0 SC: It gets pretty vitriolic among the people.
0:50:14.7 SP: If anyone wants to dip into that, you can Google a paper that I wrote a number of years ago called The False Allure of Group Selection.
0:50:24.5 SC: Oh, okay. So...
0:50:25.7 SP: It was published on Edge and has commentaries by some defenders of group selection. I think the whole notion is rather confused. Perhaps I was not being as rationalistic as the rationality community would recommend. Maybe I did not steel man the defenders. I think I did, but the thing about all these things is I'm not the one to judge. Of course I would think that I am.
0:50:51.1 SC: Exactly. But also, to be fair to us as academics, we do have this practice of writing something, publishing it, and then inviting responses, including from people who disagree with us, which is something that might be a model for elsewhere in the world.
0:51:06.8 SP: Indeed. Which is why academic freedom, another one of my hobby horses, I co-founded the Council on Academic Freedom at Harvard, is so, I would argue, indispensable. Not because academics are special and should be allowed to do whatever they want, we're not privileged compared to anyone else, but just because without academic freedom, you can't converge on the truth. Because of the process you just described, namely, you publish something, you might be right, you might be wrong, you don't know; we'll only know when other people get to attack it, try to falsify it. If you disable that process by canceling or punishing someone for what they believe, you're never going to find out what's true or false.
0:51:47.5 SC: Right. Okay, good. So let's dig into more this cooperation-coordination question. I mean, there does seem to be... Or maybe I'm perceiving where it's not there, but there's a chicken and egg or apples and oranges problem. Like, how do we all know that we have the common knowledge to drive on the correct side of the road, et cetera? Is this something that, you know, is a capacity that human beings have? Is it different among different species?
0:52:12.9 SP: Yeah, I think we do have, and other species do have, coordination problems that they have to solve... Not by common knowledge, because other species, most of them aren't very bright, but through a similar mechanism, namely a conspicuous public event. I mentioned that's the typical way in which we humans gather common knowledge, which we use to coordinate in all kinds of evolutionarily unprecedented ways, like money or organizations and institutions. But even an organism as simple as coral, which doesn't even have a brain to have thoughts, let alone thoughts about thoughts, faces a coordination problem, because they're stuck to the ocean floor. They've got to reproduce. They can't go out on dates. They can't even have intercourse, because they're sessile, stuck to the floor. What do they do? Well, they spew gametes, eggs and sperm, into the ocean, in the hope that they'll meet up with their counterparts from some other coral. But the problem is they can't spew out eggs and sperm 24-7. It's kind of metabolically expensive.
0:53:27.1 SP: It's in all of their interest to kind of somehow agree, in scare quotes, on what day to do it. Now, they can't talk. They can't think. They have to kind of tacitly agree, or behave as if they agree. And the way they do it is they use the full moon as, in a sense, the common knowledge generator. A fixed number of days after the full moon, which differs for different species, they engage in what marine biologists call the Great Barrier Reef annual sex festival. Namely, five days after the full moon, they all spew, so that the egg and the sperm find each other. They don't literally have common knowledge, but they solve a coordination problem by a public, conspicuous event.
0:54:13.0 SC: So that absolutely makes me think of an analogy I just thought of that will be completely useless to everyone but myself. But I feel the need to give it anyway, which is the horizon problem in inflationary cosmology. When we look at different directions of the sky and see the relic microwave background radiation from the Big Bang, they're the same temperature, even though they were never in causal contact with each other in the traditional cosmology. How did they know to be at the same temperature, even though the temperature changes with time? And the inflationary universe scenario is a big common event that actually sort of tells them to set their clocks in the same way, and therefore they can be more or less the same temperature.
0:54:57.5 SP: Interesting. So they're too far away to exchange information. That would have to be faster than the speed of light.
[overlapping conversation]
0:55:03.2 SC: Faster than the speed of light. That's right.
0:55:03.4 SP: But rewind the clock and there's a point at which they were cheek by jowl.
0:55:07.5 SC: When you introduce a phase of inflationary expansion at early times, now they were in causal contact with each other. And this tiny little patch of space expands to put them so far apart. It looks like they were never talking to each other. But in fact, there was a secret communication.
0:55:26.0 SP: Interesting. It's a secret interaction...
0:55:29.1 SC: Not going to help when you're out there on the street trying to explain these things, but that's okay. But it got very interesting in the book, which I do recommend to people. It sort of goes both ways, this coordination problem. You already hinted at this earlier on, but sometimes we are abetted by taking advantage of common knowledge and everyone drives on the right side of the road. Other times we sort of don't want there to be common knowledge or we speak in intentionally elliptical ways so we can have some plausible deniability.
0:56:02.1 SP: Exactly. And that's in a chapter of the book called Weasel Words. I discuss why we so often speak in euphemism and innuendo and hints. And also why, even in the nonverbal equivalent, we avoid eye contact, for example. I have another chapter called Laughing, Crying, Blushing, Staring, Glaring, on nonverbal displays that I argue are common knowledge generators. So eye contact: you're looking at the part of the person that's looking at the part of you that's looking at the part of them, et cetera. Blushing: you feel the heat inside your cheeks at the same time as you know other people can see the reddening on the surface of your cheeks, and they know that you know that you're blushing. Laughter: your speech is interrupted, your breathing is interrupted, at the same time as other people can hear the staccato sounds of laughter. So all of these are common knowledge generators that sometimes we try to avoid. We stifle a laugh, we choke back a tear, we avoid eye contact. Hence sayings like, can you look me in the eye and say that? When someone is trying to avoid common knowledge and you're trying to generate it.
0:57:19.7 SC: I truly don't know the answer to this, are the rules and implications of things like eye contact and blushing universal among cultures?
0:57:27.7 SP: Probably like a lot of universals, they're kind of statistically universal.
0:57:34.4 SC: Not 100% sure.
0:57:36.0 SP: Yeah, depending on the context with some exceptions and some parametric variation. But probably eye contact, for example, as a potent signal, often a signal of threat, I suspect is universal. It certainly operates in other primates.
0:57:55.7 SC: But it's also a signal of romantic interest.
0:57:59.2 SP: Yeah, so we humans just kind of take what we evolved with and we repurpose it. So eye contact, which in other primates is generally a threat signal. The dominant stares at the subordinate, who looks away. If their eyes lock, there's going to be a fight. And that's also true of humans, as in the barroom taunt, you looking at me, or the ultimate fight club stare down, where the two of them look into each other's eyes and see who flinches. But in humans, as in "Can you look me in the eye and say that," eye contact is more general. It's a signal that what has so far been private knowledge between us is henceforth common knowledge. And one of the most common examples is flirtation. In flirtation, as with the dominant staring at the subordinate who looks away, the flirter looks at the flirtee, who then kind of looks away, keeping it at the level of flirtation. If their eyes lock, that often means that something serious is going to happen. My late colleague Irv DeVore, a biological anthropologist at Harvard, used to tell his class, if two people anywhere on earth look into each other's eyes for more than six seconds, then either they're going to have sex or one of them is going to kill the other.
0:59:25.5 SC: Is this something that we could even raise to the level of a predictive theory? Can we think this way and make predictions for psychology experiments we haven't done yet?
0:59:34.7 SP: Oh, yes. And I do that in the book. I've published a fair amount of experimental work testing predictions from ideas of common knowledge.
0:59:44.1 SC: Can you give us some examples of that?
0:59:47.7 SP: Yeah. Let's see. We did a study on self-conscious emotions, that is, embarrassment, shame, guilt. And the hypothesis was that what makes someone feel self-conscious is not so much that some faux pas or infraction was detected, but rather that you acknowledge that it was detected. That is, it's the common knowledge that drives the acute embarrassment. Each of you can get away with something if you don't... I'll give a kind of a rude example, but let's say you pass gas, and it's audible enough that you suspect others have noticed it.
1:00:37.8 SC: Everyone knows.
1:00:39.8 SP: However, if you were then to meet someone's gaze, that would be way worse. I mean, you could kind of look away, pretend they didn't hear it, pretend that they don't know that you noticed them hearing it, but what is truly mortifying is the common knowledge. And so we had people imagine themselves in various compromising circumstances and varied the levels of knowledge: does the onlooker know? Do you know the onlooker knows? Does the onlooker know that you know that they know? And so on. And we found that indeed what was most mortifying was common knowledge. Then, because we didn't want to just do it hypothetically, with people kind of fantasy playing, we actually put them in a circumstance in which they could be embarrassed. Namely, they had to give a karaoke performance. In this case, we chose Adele's Rolling in the Deep, which has a soaring chorus. And they were told that their vocal stylings were being judged by a panel of fellow students. And they could see their fellow students in a video feed. In reality, it was a recording, but we told them it was live. And either they thought that the panel of judges knew that they, the singer, could see the judges, or they thought that the panel of judges did not know that they, the singer, knew that they were being observed.
1:02:09.4 SP: And then we asked them, well, okay, now you've sung the soaring chorus, how embarrassed did you feel? And they felt way more embarrassed if they thought that the judges knew that they knew that they were being judged. And there are everyday examples where two people, one of them might suddenly realize that the other one kind of insulted them or worked against them. And each one of them can kind of suspect the other, but as long as they keep it to themselves, neither one says it, they can maintain their friendship and neither need feel embarrassed.
1:02:48.0 SC: Is it an example of common knowledge that something embarrassing just happened, but we're not going to acknowledge it?
1:02:56.3 SP: So if we don't acknowledge it, I suggest that's what keeps it out of common knowledge. That is, there could be private knowledge. There could even be reciprocal knowledge. I know that he knows, he knows that I know. But they may not be common knowledge, that is, I may not know that he knows that I know, et cetera. My argument is that it's the common knowledge that drives our relationships and most strongly drives our self-conscious emotions, our awkwardness, our shame, our mortification, our embarrassment, our outrage.
1:03:29.6 SC: I guess I'm just asking, could there be relevant examples where there is common knowledge? We all know something just happened, we all know it's embarrassing, but we socially agree not to acknowledge it. That helps us.
[overlapping conversation]
1:03:41.3 SP: We do. Oh, yes. That is the elephant in the room. That's the pretending to look away. Think of it as a common pretense. There is some common knowledge there. The common knowledge is that we pretend as if something opposite to reality is the case. I have a discussion of how, if someone has a speech impediment, if someone is obese. Everyone knows it, but you try to avoid talking about it. It's a little bit odd, and one could even argue that it is dysfunctional. I have reproduced an interview by a woman named Lindy West who, quote, came out as fat. Now, that is, it's a little bit weird, the analogy of coming out as gay, because as the interviewer said, no one's going to say, oh, my God, dude, I can't believe you're fat. [1:04:46.2] ____ But she said, the burden... I appreciate people's considerateness, but the burden of pretending that I'm not fat kind of distorts things, and I think we'd be better off if, look, I'm fat, and you know that I'm fat, and let's just get it on the table.
1:05:00.7 SP: By the way, get it on the table, another metaphor for common knowledge. Now, not everyone would go along with her, and I certainly would not be the first to say that one of my companions has a high body mass index, even if it were obvious to everyone, but it shows the tension between what everyone knows and what everyone knows that everyone knows, and this common pretense, this elephant in the room metaphor of pretending that something... Commonly pretending that something isn't true.
1:05:37.2 SC: Yeah, I'm feeling now... I don't know whether weighted down or energized by the knowledge of all these sort of conventions that we have chosen to sort of get through the day and how they can fail. I was once told, I have no idea whether this is true, that the reason, one of the reasons why France and Germany always had wars with each other is because the French get insulted when you don't fill them in on everything you know, and the Germans get insulted if you assume they don't know something and you try to tell them. So whenever they would have peace negotiations, they would end up in recriminations.
1:06:13.2 SP: Well, interesting. So whether or not that's literally true, what is true is that an awful lot of wars are fought over saving face, losing face, honor, humiliation, including the war in Ukraine right now. What is it about? It's really about Russia's desire to undo their humiliation at the hands of the West. There's the scene in Duck Soup, the Marx Brothers movie, in which Freedonia goes to war against Sylvania because Groucho Marx, Freedonia's leader, imagines what it would be like if the other country's ambassador refused to shake his hand. And so...
1:06:52.7 SC: The levels of trying to anticipate what the other person is thinking. This is something very familiar to poker players, right?
1:06:59.1 SP: Oh, yes.
1:06:59.8 SC: Because you have to say, I think this person thinks that I think this, that I think this. And I presume that in poker, since it's a finite state game, there's only some certain number of things that can happen, there should be some equilibrium that you eventually hit. But are there psychology studies on how good human beings are at going to the level of thinking the other person thinks something and then that they think that I think something?
1:07:23.7 SP: So I don't know how many studies there are, but what I do know is two psychologists, one of whom was a former student of mine, have made careers in poker, each one becoming a celebrity in the process. Maria Konnikova, who was my undergraduate advisee.
[overlapping conversation]
1:07:43.2 SC: Former Mindscape guest.
1:07:43.2 SP: At Harvard, and Annie Duke, who I knew as a student. She wasn't my student, but she originated in the same field as me, child language acquisition, before making the leap to being a card sharp. But both of them are gifted cognitive psychologists, who presumably put their tacit knowledge to work. Now, poker is very interesting, because we have the expression poker face, and any tell can be used against you. It's obviously a quintessential game theoretic situation. In fact, John von Neumann invented game theory to deal rationally with poker, because it was a game of imperfect information, a game of strategy. It was not like chess, which is perfectly determinate. Poker involves bluffing and calling and so on. And it's a case in which a poker face, and in some cases an ability to be perfectly random, is an advantage, because as soon as you deviate from being random, that is something your opponent can use against you.
1:08:56.9 SP: So it's an outguessing standoff, as in, say, in hockey, the shooter can shoot left or right, the goalie can defend left or right. If either of them has a preference, then the other one can use it to their advantage. The optimal shooter and the optimal goalie have a mental random number generator.
1:09:15.9 SC: Which is very hard for human beings to do.
1:09:17.6 SP: Which is very hard for humans to do, yes.
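The shooter–goalie standoff Pinker describes is the classic matching-pennies game. A quick sketch of why any bias is exploitable, with illustrative payoffs and probabilities of my own (nothing here is from the episode):

```python
# Shooter vs. goalie as matching pennies: the shooter scores when
# their side differs from the goalie's dive. Illustrative model only.
import random

def score_rate(shooter_left_prob, trials=100_000, seed=0):
    """Goalie best-responds to a biased shooter by diving toward the
    shooter's more likely side; returns the shooter's scoring rate."""
    rng = random.Random(seed)
    goalie_dives_left = shooter_left_prob > 0.5
    goals = 0
    for _ in range(trials):
        shoots_left = rng.random() < shooter_left_prob
        if shoots_left != goalie_dives_left:  # goalie guessed wrong -> goal
            goals += 1
    return goals / trials

# An unbiased (mentally random) shooter scores about half the time no
# matter what the goalie does; a biased shooter is read and punished.
print(score_rate(0.5))  # ~0.5
print(score_rate(0.8))  # ~0.2: the 80% left bias gets exploited
```

The equilibrium is for both players to randomize 50/50, which is exactly the "mental random number generator" Pinker mentions.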
1:09:19.5 SC: Yeah, that reminds me, maybe this is a good place to sort of wind up, on... And I'm not sure whether this is a relevant example or not, because I made it up rather than getting it from your book, but sort of abuses of the idea of common knowledge. The example I thought of was the okay sign, when people hold up their fingers to show okay, and how this has been co-opted by white power groups to show that they're in that in-group. But then when people say, oh, you're making the white power sign, they say, what do you mean? I'm just making the okay sign.
1:09:49.4 SP: Well, in a notorious case, an innocent truck driver got fired.
1:09:53.4 SC: Oh, I didn't know that.
1:09:54.3 SP: When someone caught him on a cell phone video making the okay sign. This [1:10:03.2] ____ he didn't have a racist bone in his body. He was Hispanic, and he lost his job. This is what led [1:10:11.6] ____ to write an article, at the peak of wokeness, saying stop firing innocent people. But yes, going back to common knowledge: common knowledge is relative to a community of knowers. You have common knowledge within some network. And if you're not part of that network, what's common knowledge to all of them may not be common knowledge to you. The common knowledge may not include you.
1:10:37.9 SC: And is it an exaggeration to think that a failure of common knowledge gets in the way of, say, stopping dictatorships? If you have a populace, or even an establishment, that all wants to stop somebody from doing something, not going to mention any names, just hypothetically, but none of them wants to be the first mover. There's a coordination problem there, because a single person resisting will be stomped down even if everyone resisting at once would succeed.
1:11:06.0 SP: Totally, big time. And this was a point made by Michael Chwe in a book called Rational Ritual, a kind of predecessor to mine from 25 years ago or so, where he noted that public demonstrations can generate common knowledge when everyone in a public square can see everyone else. And that can give them the safety in numbers to coordinate resistance, whether by storming the palace or by engaging in work stoppages. I quoted, or he could have quoted, a line that came to mind from the character Gandhi in the eponymous movie, where he tells a British colonial officer: in the end, you will leave, because there is simply no way that 100,000 Englishmen can control 350 million Indians if the Indians refuse to cooperate. That captures it, but he could have said "coordinate." That is, 100,000 Englishmen can control 350 million Indians if they can control them one at a time. They just can't control them all at once. And so it can be a demonstration in a public square. It can also be a newspaper article or a magazine article. That's why autocrats don't allow freedom of the press, why they have censorship and repression.
1:12:23.3 SP: The Arab Spring was kindled by social media, by Facebook and Google, until dictators kind of cottoned on to that danger and started to control the internet.
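Chwe's safety-in-numbers point can be sketched with a simple threshold model of collective action (a Granovetter-style cascade; this is my illustration, not anything from the book): each person joins the resistance only once they can see that enough others already have, which is exactly what a public square makes possible.

```python
def cascade(thresholds):
    """thresholds[i] = how many visible participants person i needs
    before joining. Returns the final turnout, assuming turnout is
    common knowledge (everyone can see everyone else)."""
    joined = 0
    while True:
        # Everyone who is satisfied by the current visible turnout joins.
        new_total = sum(1 for t in thresholds if t <= joined)
        if new_total == joined:       # no one else will move
            return joined
        joined = new_total

# Nearly identical populations, radically different outcomes:
# one person willing to move first tips everyone...
print(cascade([0, 1, 2, 3, 4]))  # 5
# ...but with no first mover, the same discontent goes nowhere.
print(cascade([1, 1, 2, 3, 4]))  # 0
```

The second case is the dictator's ideal: everyone may privately want to resist, but without a coordinating signal no threshold is ever crossed.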
1:12:38.3 SC: I mean, famously, a few dozen Spaniards kind of did conquer millions of indigenous Americans back in the day, because the indigenous populations were not able to coordinate in any way.
1:12:48.7 SP: Well, yes, and they were helped along by, as Jared Diamond put it, guns, germs, and steel.
1:12:53.9 SC: The germs helped, absolutely, yeah.
1:12:55.8 SP: The germs helped too, yeah.
1:12:57.4 SC: So I love the point about the demonstrations. It's an interesting point. I mean, I think of demonstrations as largely making the demonstrators feel good. I'm in favor of them; if you feel strongly about it, go ahead and demonstrate. But just the symbolic act of letting other people know there is so much resistance out there can have helpful coordinating effects.
1:13:20.2 SP: Yeah, that's why it'd be different from, say, a public opinion poll showing that a majority of people are disgruntled with the regime, at least unless the opinion poll itself becomes common knowledge. So if I were a rebel and I had the results of a confidential opinion poll, it wouldn't do me that much good. Oh, great, everyone agrees with me, but I'm still going to get imprisoned if I protest. But if everyone does it at the same time, which they will do only if they know that the others will do it at the same time, common knowledge is necessary for that to happen. That's why many of the quiet revolutions of the last 30 years, the Velvet Revolution, the Rose Revolution in some former Soviet republics, were triggered by some kind of coordinating signal. Everyone's cell phones went off at the same time. People tied tin cans to the tails of stray cats, which were considered highly subversive and stamped out by the authorities simply because of their common-knowledge-generating power.
1:14:29.7 SP: So I cited a joke from the old Soviet Union where a man is handing out leaflets in Red Square, and of course the KGB arrest him and bring him back to KGB headquarters, only to discover that he's been handing out blank sheets of paper. They confront him and ask, what is the meaning of this? He says, what's there to say? It's so obvious.
1:14:48.0 SC: Everybody knows.
1:14:50.5 SP: And this is a joke about common knowledge. Well, here's the crucial thing. Yes, everybody knows, but when he handed out the sheets and people took them, now everyone knows that everyone knows, and that's what the authorities could not tolerate. And indeed, in Putin's Russia, people have been arrested for carrying blank signs.
1:15:08.0 SC: I don't want to say too much about it, because I've done a podcast interview that I recorded before this one but that will air after it, but one of the interesting results mentioned in it was about people who believe conspiracy theories. They tend to wildly overestimate how many other people believe those conspiracy theories. If it's something that 5% of the world believes, they think it's 60% of the world believing it. And so you've convinced me that maybe an increase in our overall ability to have not just knowledge but common knowledge might make the world a more rational place.
1:15:42.9 SP: Well, there's a name for that phenomenon. It's called pluralistic ignorance, or a spiral of silence, where everyone believes that everyone else believes it, but no one actually believes it. It's a case of common misconception and private knowledge.
1:16:01.0 SC: All right. Well, we're going to try to clean things up. Steven Pinker, thanks very much for being on the Mindscape Podcast.
1:16:05.6 SP: Thanks for having me, Sean. Great conversation.
Just had a thought about the spinach-in-teeth problem:
> She clinks the glass twice, no one moves. She clinks the glass a third time. Three people in the room who have spinach in their teeth all clean their teeth.
Assuming quick thinkers (with the common knowledge that they’re quick thinkers 😉), and assuming that a few seconds pass between clinks, since no one moved immediately after the 2nd clink, that implied that there would be at least 3 people with dirty teeth, and these 3 would have then known they were the ones *before* the 3rd clink!
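The commenter's timing argument checks out against a tiny model of the induction, assuming synchronized, perfect reasoners (my sketch, not from the episode or the book):

```python
def clink_when_clean(k):
    """With k spinach-havers among perfect reasoners, all k clean together
    after the k-th clink. Silence after clink r makes it common knowledge
    that more than r people have spinach; each spinach-haver sees only
    k - 1 dirty mouths, so at clink k the known count exceeds what they
    see and they deduce they must be included."""
    for r in range(1, k + 1):
        sees = k - 1        # dirty mouths each spinach-haver can see
        if r > sees:        # public lower bound now exceeds what they see
            return r

# Three people with spinach clean after the third clink, as in the quote.
print(clink_when_clean(3))  # 3
```

The commenter's refinement is about *when within the round* the deduction lands: the decisive fact (more than 2 people have spinach) becomes common knowledge the instant the 2nd clink passes in silence, so instantaneous reasoners would act before the 3rd clink even sounds; the clinks just pace the induction for slower thinkers.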