113 | Cailin O’Connor on Game Theory, Evolution, and the Origins of Unfairness

You can’t always get what you want, as a wise person once said. But we do try, even when someone else wants the same thing. Our lives as people, and the evolution of other animals over time, are shaped by competition for scarce resources of various kinds. Game theory provides a natural framework for understanding strategies and behaviors in these competitive settings, and thus provides a lens with which to analyze evolution and human behavior, up to and including why racial or gender groups are consistently discriminated against in society. Cailin O’Connor is the author of two recent books on these issues: Games in the Philosophy of Biology and The Origins of Unfairness: Social Categories and Cultural Evolution.

Support Mindscape on Patreon.

Cailin O’Connor received her Ph.D. in Philosophy from the University of California, Irvine. She is currently Associate Professor of Logic and Philosophy of Science and a member of the Institute for Mathematical Behavioral Science at UCI. Her work involves questions in the philosophy of biology and behavioral science, game theory, agent-based modeling, social epistemology, decision theory, rational choice, and the spread of misinformation.


0:00:00 Sean Carroll: Hello everyone, welcome to the Mindscape Podcast. I’m your host, Sean Carroll. Long time listeners will know that we’ve talked about game theory before on the podcast, usually in the context of literally playing games. We’ve had no less than two different podcasts about playing poker or realising that there are lessons from poker that you can apply more widely. And in fact, I think both times we noted that John von Neumann, who is the father of game theory, invented it in order to analyse his local poker game. We also talked to Frank Lantz, who is a game designer about game theory. But the wonderful thing about game theory is that it applies much more broadly than that. It’s a lesson in strategic interactions between agents with different interests, right? So that’s obviously a hugely broad kind of conceptual framework in which to think about a whole bunch of different issues. So today we’re gonna be talking to Cailin O’Connor, who is a philosopher of science. She has quite a broad portfolio in terms of research interests, but one of the things she’s done, she’s written a couple of books on applying game theory to both biological evolution, thinking about what species evolve, how they interact with each other, who’s predator, who’s prey, how they fill different niches and so forth.

0:01:14 SC: And also one applying game theory to human behaviour. Why do human beings treat each other in different ways? And in particular, can we understand the origin of certain inequities in society in game-theoretic terms? Now, this is a very difficult thing to do, right? Because when you have an inequity in society, one group is discriminated against or picked on, another group is privileged and has more wealth or resources or power or whatever, there’s probably a whole host of reasons why, and people will debate them, but game theory sort of presents an interesting new twist on this kind of problem. Imagine that you’re a kid and you have a sibling, and your mom says to the two of you, to you and your sibling, “Okay, there’s a pint of ice cream here, you each tell me what fraction of the pint of ice cream you would like to eat, and if the total that you say is less than the whole pint of ice cream, less than or equal to it, then I will split it up according to the rules that you just suggested. But if you suggest a total that is more than one, more than the whole pint, then nobody gets any ice cream.” Now, of course, the fair thing to do would be for you and your sibling to say, “We each want half of the pint of ice cream,” then you would both get half. Everybody’s happy. But what if you know from past experience that your sibling is always very greedy and they’re almost guaranteed to ask for two-thirds of the pint of ice cream, okay?

0:02:41 SC: So you could stubbornly ask for your half ’cause that’s the fair thing, but then neither one of you gets anything. You are both worse off, okay? So because your sibling is going to have this strategy and you know it, in a strictly utilitarian sense, it makes sense for you to only ask for a third of the ice cream. So you can see how, with this kind of analysis, inequities creep into the system, even though there’s no difference between the two players of the game initially. So I find this all extremely fascinating. It’s sort of a completely different lens we can use to analyse biology, psychology, sociology, a whole bunch of interesting things. It’s nowhere near the whole story in any of these cases, but different angles, different perspectives are always very welcome. So I think you’re gonna find this conversation very interesting. The other announcement to make, of course, is that for those of you who’ve been following along, you know, I have this video series called The Biggest Ideas in the Universe, and we’re done. We have reached the conclusion. Last week was the last video, there were 24 different ideas, the last idea was science, that was a bit more philosophical and meta, the other ideas are things like space, time, matter, gravity, stuff like that, physics ideas, physics and cosmology ideas.

0:04:01 SC: We had some off-the-wall ones, we did emergence and renormalization and criticality and complexity, but anyway, they’re all done. Every week there was one video and one Q&A video. You can find them all on my YouTube channel. I do have a YouTube channel, it’s youtube.com/seancarroll, there’s no M in there; as Patreon supporters or Twitter followers might know, seanmcarroll is what I usually use, but for some reason on YouTube, it’s just seancarroll. Anyway, if you haven’t checked them out, I encourage you to check out the videos. If you’re interested in physics, you get to see me as well as listen to me. Although, yeah, in fact, since I’m drawing and writing equations in most of these videos, I think that the visual aspect of it is actually important. You could also see my improvement as a videographer and cinematographer over time as I figure out how to do the green screen and so forth, and you can also see that I never improved all the way. I was still making pretty elementary mistakes in my video skill package as time went on. But it was a lot of fun. We got a lot of viewers, which warms my heart and it’s always good to see that people are interested in learning new things about the universe. So with that, let’s go.

[music]

0:05:29 SC: Cailin O’Connor, welcome to the Mindscape Podcast.

0:05:32 Cailin O’Connor: Oh, well, thank you so much for having me, Sean.

0:05:34 SC: So we’ve talked to philosophers before, we’ve talked to biologists, I’m trying to think, I’m not sure that I’ve really done any philosophy of biology. Is that fair to… As a way to characterize what we’re gonna talk about today?

0:05:49 CO: Yeah, I think a lot of what we’re gonna talk about would fall under that. We also might call some of it philosophy of social science. Yeah, I mean, the philosophy of biology is this nice interdisciplinary area where some people in it do things that look a lot like theoretical biology, and some people do things that are asking more meta-level questions about the biological sciences, what are the concepts that we use in biology, what do they really mean, what can the methods in biology really tell us or not tell us? So yeah.

0:06:22 SC: And game theory is something that I don’t know why, but in the past year of my life, I just keep running into it in different circumstances. You’re not the only podcast guest who will talk about game theory a little bit with us. So how does… Well, let’s… Even before we relate it to biology, why don’t you tell us in your brain how you think about what game theory is and what it’s good for?

0:06:44 CO: Okay. So… I mean, a little history, game theory was really introduced sort of in the… Starting off in the 1940s and ’50s, and it’s a branch of mathematics where the goal is to look at strategic interactions. And by strategic interaction, I just mean an interaction where you have multiple actors who have some kind of interest, so these could be humans, but they also could be animals or even something like trees or bacteria. And they interact with each other and it matters to them what the other actors are doing. So it’s the branch of math that tries to represent and tackle this kind of interaction. And the way you do it is by building what’s called a game. So this is a sort of simple mathematical model of a strategic interaction, and then analysing that game to try to ask, “Can it help us predict what certain actors would do when they’re in a strategic scenario?” Or, “Can it help explain behaviours that we see in the real world, either in humans or in the biological world?”

0:07:48 SC: And one of the big ideas here is that we can truly exactly quantify the rewards for what we do, right? Like one of the buy-ins for using game theory for anything is to say, “If you do this strategy and your opponent does some other strategy, then here’s the rewards that you each get.”

0:08:05 CO: Yeah, and I think the way you wanna think about this is that people using game theory don’t assume that we can exactly quantify what are the real rewards, let’s say, you and I would get if we manage to coordinate some behaviour, if we manage to meet each other at the right time for coffee. I mean, it would be hard to quantify, like is this a psychological thing, and how much pleasure do we get or what material goods do we get? But rather the idea is we can use specific numbers to represent those rewards, and then by making them precise, we can get a kind of good enough or approximate representation that we can also analyse and better understand.

0:08:45 SC: Yeah, I think that’s pretty fair, although, this is gonna be one of those things when we start applying it to biology and to psychology and to sociology, where people are gonna poke at it in different ways when you reach conclusions they don’t wanna reach, right? So, and maybe one of the pokes is, “Are we overly rigorizing things that are a little bit fundamentally fuzzy?”

0:09:05 CO: Yeah, there’s always this risk with modeling. I mean, modeling can be so useful in that it allows you to make these really precise structures that you can analyse precisely and say very concrete things about them, and know you’re all talking about the exact same thing. Of course, the downside to that is that you’re gonna lose some of the precision of the real world or you might represent it incorrectly in some really significant way, and so there are always gonna be these deep questions about, “Is your model the right model? Is the conclusion you draw from it gonna be supported in a more complex reality?”

0:09:39 SC: Well, that’s okay. As a physicist, I’m very, very used to overly idealizing complex situations into very simple little models…

0:09:46 CO: [chuckle] Yeah, right.

0:09:46 SC: That part, we’re used to that part. So why don’t you… I mean, there’s some famous games, right? There’s the prisoner’s dilemma, I mean, we’re not thinking, even though you could, we’re not really thinking about backgammon or basketball or things like that, but those are more advanced games. The little games that we’re analysing here are… Basically have these reward tables. So why don’t you fill us in on how we should be… What is the mental image we should be having when we’re playing these games?

0:10:11 CO: Well, I should note that when von Neumann was first inventing game theory, he did… I think one of his first papers was called The Theory of Parlor Games, so he did have some…

0:10:19 SC: Wow.

0:10:19 CO: Kind of real games in mind. But the games you see in game theory, they’re not the fun kind. In fact, I teach a class called evolutionary game theory, and I sometimes started the class being like, “If you’re here because you like playing games, scoot on out, [chuckle] this isn’t the place for you.”

0:10:32 SC: No Twister here. [chuckle]

0:10:35 CO: No Twister, it’s not gonna be World of Warcraft or StarCraft. So, okay, a game in this sense, I mean, you can define it using four elements. So the first thing is the players, who is involved in the strategic scenario. The second thing is their possible strategies, what things can those players do, what actions you might say. The third thing is their pay-offs. So given some combination of strategies chosen by the players or actions that they do, what are the kind of benefits or detriments to them. And then information is the last thing that helps define a game. And information is approximately what do the players know about what they’re doing. So what do they know about the setup of the strategic scenario they’re in, and what do they know about the other opponents or players they are involved with.

0:11:27 SC: So chess is a game of complete information ’cause you see the board, but poker, you don’t see the other person’s cards.

0:11:34 CO: Right, right. And we can think of lots of different kind of strategic scenarios where you might have more or less information. So if you’re bargaining with someone, that’s a strategic scenario, and you might know what’s gonna happen to this person if your bargain fails or you might not know that. And knowing or not knowing might really shift how you bargain. So if you know that this person doesn’t have anything else to do if this bargain doesn’t work, they’ll be in a really bad situation, you might be a more aggressive bargainer on the basis of that knowledge.

0:12:04 SC: I guess it’s probably, I mean, you should tell me, but it’s probably worth doing the prisoner’s dilemma in some detail, just because it’s a wonderful example of some of the concepts that appear, but also some of the counterintuitive conclusions you can reach by taking this seriously.

0:12:20 CO: Yeah, so the prisoner’s dilemma is the most famous and widely analysed game in game theory because you’re right, it is a kind of fascinating little game. So you assume you have two players and you assume there’s two things that each of them can do, and people often call these two strategies cooperate or defect. So I’ll tell the little kind of story people like to tell about the prisoner’s dilemma. It’s like you’ve got two prisoners and they’re each told, “Well, you can either rat on the other one or stay silent.”

0:12:53 SC: Yeah.

0:12:53 CO: So staying silent would be cooperating and ratting would be defecting. And then the idea is, okay, if they both stay silent, they’re gonna have a really short jail term, let’s say they’ll be in jail for a month. If they both rat the other one out, they’ll have a slightly longer jail term, maybe they’ll be in jail for three months. If one of them rats the other one and that one stays silent though, the one who rats is gonna go free and then the one who stays silent will have a very long jail term. They’ll get blamed for everything, so maybe they’ll be in jail for a year. So that kind of creates the payoffs of this game. And the basic payoff structure, which you get out of this little story is that the very best thing for anyone is to be the one who ratted while the other person stayed silent.

0:13:40 CO: But you still prefer to both stay silent than to both rat. So basically, there’s a payoff structure that incentivizes you to defect, but creates a situation where if you both defected, you would prefer that you had both cooperated, so that’s the dilemma of the game that it’s individually rational to choose to defect, but from a kind of social level you can all do better by not all defecting.
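The payoff structure from this story can be written out directly. Here is a small sketch using the jail terms from the story above as payoffs (written as negative months, so that higher numbers are better; the table layout itself is just an illustration):

```python
# Prisoner's dilemma payoffs from the story above, in months of jail time,
# written as negatives so that higher numbers are better.
PAYOFFS = {
    # (your strategy, their strategy): (your payoff, their payoff)
    ("silent", "silent"): (-1, -1),    # both cooperate: one month each
    ("silent", "rat"):    (-12, 0),    # the silent one takes the blame: a year
    ("rat", "silent"):    (0, -12),    # the rat goes free
    ("rat", "rat"):       (-3, -3),    # both defect: three months each
}

# Whatever the other player does, ratting is strictly better for you...
for other in ("silent", "rat"):
    assert PAYOFFS[("rat", other)][0] > PAYOFFS[("silent", other)][0]

# ...and yet both players prefer mutual silence to mutual ratting:
assert PAYOFFS[("silent", "silent")][0] > PAYOFFS[("rat", "rat")][0]
print("defection dominates, but mutual cooperation beats mutual defection")
```

That tension, that defecting always pays individually while mutual cooperation beats mutual defection, is exactly the dilemma described here.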

0:14:05 SC: And I guess… So one of the things that leads to a voluminous literature for this very, very simple game to state is that you can imagine either this was truly just a once-off, you play the game, and in that case, I think that just defecting is the right thing to do, but you could also imagine that you’re just faced with situations like this over and over again, where maybe some cooperation would help you out in the long-term.

0:14:29 CO: I think the heart of why people are so interested in this game is that in the end, it’s an analysis of altruism, so if you cooperate in the prisoner’s dilemma, you’ve done something that will lower your payoff, but increase the payoff of the other players, so you’ve done something kind of materially, inherently altruistic, and we see humans and animals in the biological world do altruistic things all the time, even though the kind of basic structure of the game tells you that’s not individually rational, it’s not rational to harm yourself. So the question is, well, if that’s not individually rational, why do we see so much altruism? And then there are a few answers. One is the one you just brought up, that we’re in repeated interactions with others, and that really actually changes the structure of the game. Once you start meeting the same individuals again and again for this kind of interaction, it can change what’s rational; it can make it rational now to cooperate on the assumption that cooperating now will get them to cooperate with you later.

0:15:28 CO: And then there’s other things like if you’re looking from a biological perspective, you can see that being altruistic with your kin can end up being selected for… So the genes that would make you altruistic with kin can be selected basically, because when you’re altruistic toward your kin, you’re being altruistic towards people who are often other altruists or actors that are other altruists. And so altruists are getting these kinds of benefits, by dint of associating with each other, that defectors aren’t getting. So I think that is really the heart of why the prisoner’s dilemma is so fascinating, is it helps us answer these questions about altruism.

0:16:06 SC: And also the last point that you just raised, it brings up questions of The Selfish Gene or levels of selection kind of thing: if I’m altruistic toward my kin, in some sense I’m being altruistic toward my genome, I’m helping it survive. Is that a valid way of thinking?

0:16:24 CO: Yeah, so the way you think of it is, from an individual perspective, altruism might not look rational. Why would I do something for my kin that’s gonna essentially make them more likely to have more offspring and me less likely to have more offspring? But then if you flip to the gene’s eye view, so this was an idea largely popularized by Richard Dawkins. From a gene’s eye view, if I’m a little altruistic gene, I would like to benefit other genes just like me. If the altruistic gene’s point of view is the one we take, it makes complete sense.

0:17:01 SC: And yeah… And so are you gonna come down on which is right, or they’re all right, or different circumstances call for different ways of thinking?

0:17:11 CO: Between what?

0:17:14 SC: Between…

0:17:14 CO: Oh, these perspectives?

0:17:14 SC: Yeah.

0:17:14 CO: Yeah, I tend to be the different circumstances require different kinds of thinking type of person when it comes to thinking about evolution.

0:17:27 SC: It gets very heated, you know, these arguments between what levels of selection are valid, especially group selection, I know is extremely controversial in different circles.

0:17:37 CO: Yeah, group selection is very controversial. For people who don’t know what group selection is, it relates to the idea that in some cases, you wouldn’t wanna think of selection as happening on the unit of an individual, but of a social group, that maybe if a whole social group does well, by having certain traits that can explain why we have those traits. Now, this debate has really used a lot of game theory, so John Maynard Smith did these very influential models, game theoretic models, where he showed that it’s quite hard to get group selection of a certain type to work.

0:18:17 CO: And that’s group selection for altruism. So a lot of people thought, “Well, maybe because altruism is good on a group level, that’s why we see it, groups that are altruistic tend to be selected.” He showed that it’s hard to get an altruistic group to be selected. And so a lot of people sort of took that criticism and variations of that to imply that group selection isn’t important, period. But I think there’s just sort of a confusion in the literature where people have shown group selection probably isn’t the most important explanation of altruism, but that doesn’t show that group selection isn’t important for the explanation of other kinds of traits, necessarily.

0:18:58 SC: Okay, okay. That does make sense. And it is a bit of a distraction, sorry, I got off on it. But I know that people really care about this, so if we bring it up, we should say something about it. But let’s just do a little bit of the lingo that is used in this game theory context. So you mentioned the Prisoner’s dilemma… Maybe it’s worth explaining what a Nash equilibrium is. And I don’t know if you also wanna give other examples of games. The Prisoner’s dilemma is not the only game that models actual dynamics.

0:19:26 CO: For sure. So, Nash equilibria… In game theory, you build a game, you get this model of a strategic scenario, and then the next thing you do is analyse it. And what people do is apply what are called solution concepts to the game. There are a bunch of solution concepts, but the most widely used one is the Nash equilibrium. And the idea is you’re looking for sets of strategies where, when actors are playing them, none of the actors wanna change and do something else. There is no incentive for them to switch strategies. And those sets of strategies are Nash equilibria. The reason that this is an important solution concept is that if you are at a Nash equilibrium, no one wants to switch. It’s approximately a stable set of behaviours.

0:20:11 SC: Right.

0:20:11 CO: And so that means the Nash equilibria predict what actors will do in games. Laboratory experiments don’t always confirm that people do play Nash equilibria, but they often play them, or something like them, when you have them actually play games. So that’s why those are so important. And if you’re doing a game-theoretic analysis, the first thing you do is build your game, whatever model you’re looking at. And then the next thing usually is calculate the Nash equilibria of the game. See what they tell you.
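For small games, that second step, calculating the Nash equilibria, can be brute-forced by checking every strategy profile. A minimal sketch (the function name and the payoff numbers are mine, not from the episode), run on the prisoner’s dilemma:

```python
def pure_nash_equilibria(strategies1, strategies2, payoffs):
    """Find all pure-strategy Nash equilibria of a two-player game.

    payoffs maps (s1, s2) -> (payoff to player 1, payoff to player 2).
    A profile is an equilibrium if neither player can gain by
    unilaterally switching to another strategy.
    """
    equilibria = []
    for s1 in strategies1:
        for s2 in strategies2:
            u1, u2 = payoffs[(s1, s2)]
            best1 = all(payoffs[(alt, s2)][0] <= u1 for alt in strategies1)
            best2 = all(payoffs[(s1, alt)][1] <= u2 for alt in strategies2)
            if best1 and best2:
                equilibria.append((s1, s2))
    return equilibria

# The prisoner's dilemma (payoffs as negative months of jail):
pd = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-12, 0),
    ("defect", "cooperate"):    (0, -12),
    ("defect", "defect"):       (-3, -3),
}
strategies = ["cooperate", "defect"]
print(pure_nash_equilibria(strategies, strategies, pd))
# [('defect', 'defect')]
```

Mutual defection comes out as the only stable profile, even though mutual cooperation would pay both players more.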

0:20:42 SC: So for example, in the Prisoner’s dilemma, the issue is that the best for both players is to cooperate, but the Nash equilibrium says that they both defect, right?

0:20:54 CO: Right.

0:20:54 SC: They do better individually. I mean, the equilibrium relies on the fact that they’re making their decisions individually.

0:21:03 CO: Yeah. So, right. That’s exactly it. Defect-defect is the only Nash equilibrium of the game, and so that’s the kind of prediction for what rational agents would do. But then there’s this benefit to not playing the Nash equilibrium in that game, just a straightforward benefit to everyone playing. So you mentioned other games. There’s so many other fascinating games. I actually don’t work really with the Nash… Oh sorry, with the Prisoner’s dilemma myself, because it’s been so analysed. [chuckle] Everyone has done everything you can imagine to the Prisoner’s dilemma. And to me, situations having to do with coordination are really fascinating. So I’ve done a lot of work more on coordination-type scenarios. And these are scenarios where there’s common interest between the players, and what I mean by that, is that they’re gonna get shared payoffs. And in coordination games, the way they get shared payoffs is by somehow coordinating behaviour.

0:22:03 CO: But then the kind of thing that makes the game interesting, is that there are multiple ways to coordinate behaviour. And the question is, how are the actors gonna settle on one way or another? How are they gonna solve what’s called a coordination problem, and learn to make their behaviour work together? And this is especially interesting when we look at more of a societal level. How does the whole society come to learn to coordinate. And then under that umbrella, are what are called bargaining games, which have to do with resource division or splitting stuff. And the reason these games are so interesting to me, is that humans are in situations all the time where they’re dividing resources, whether it’s literal resources like money for your salary, or whether it’s time and effort. So for example, if you’re an academic on a joint project, to get it done, you have to somehow divide the work necessary to be done, and that’s a bargain. And it takes a type of coordination too. And so that’s another set of games that I’ve been really interested in.
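The simplest bargaining model of this kind is the Nash demand game, which is also the ice cream game from the introduction: each side demands a share, compatible demands are honored, incompatible demands leave everyone with nothing. A toy sketch (the numbers are illustrative):

```python
def demand_game(d1, d2, total=1.0, disagreement=0.0):
    """Nash demand game: each player demands a share of `total`.

    If the demands are jointly feasible, each player gets what they
    asked for; otherwise both get the disagreement payoff.
    """
    if d1 + d2 <= total:
        return d1, d2
    return disagreement, disagreement

print(demand_game(0.5, 0.5))    # (0.5, 0.5): the fair split
print(demand_game(0.25, 0.75))  # (0.25, 0.75): unequal but compatible
print(demand_game(0.5, 0.75))   # (0.0, 0.0): incompatible, nobody gets anything
```

Any pair of demands that exactly exhausts the resource is a Nash equilibrium, including very unequal ones: against a player committed to demanding 0.75, your best reply is to demand only 0.25. That multiplicity of stable splits is one reason these games are used to study how unfair conventions can persist.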

0:23:04 SC: Is there some kind of classification scheme for at least all two-person games?

0:23:10 CO: There are names to different games, and games get a name based on the kind of ordering of the payoffs and what that means for the equilibria of the game. So the Prisoner’s dilemma is always gonna have the structure that we’ve described, where there’s a strategy that’s rational to take, it always benefits you, but will get lower payoffs for both actors than the other strategy. A coordination game will have at least two strategies, maybe more, and a kind of situation where if actors manage to match their strategies in the right way, they get better payoffs than if they don’t manage to match their strategies in the right way.

0:23:47 SC: So maybe an explicit example would be helpful. Is the stag hunt game an example?

0:23:54 CO: It’s a kind of coordination game so…

0:23:56 SC: I should let you pick rather than suggesting ones.

[chuckle]

0:24:00 CO: I’ll just give you a totally… Here’s the basic coordination game. You have two strategies, maybe it’s A and B. And let’s say these represent driving on the right side of the road and the left side of the road. And you get payoffs if you both pick A, which is both driving on the right side. You get good payoffs if you both pick B, which is both driving on the left side. But if one of you picks A and one picks B, you don’t get good [chuckle] payoffs because you crashed into each other. So there’s a super simple example.
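That driving game can be checked directly: both-left and both-right are each stable, and the mismatched profiles are not (the payoff numbers here are illustrative, just encoding match versus crash):

```python
# Coordination game: payoff 1 if the two drivers match sides, 0 if they crash.
payoffs = {
    ("left", "left"):   (1, 1),
    ("left", "right"):  (0, 0),
    ("right", "left"):  (0, 0),
    ("right", "right"): (1, 1),
}
sides = ("left", "right")

def is_nash(s1, s2):
    """True if neither driver gains by unilaterally switching sides."""
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(alt, s2)][0] <= u1 for alt in sides)
            and all(payoffs[(s1, alt)][1] <= u2 for alt in sides))

print([(a, b) for a in sides for b in sides if is_nash(a, b)])
# [('left', 'left'), ('right', 'right')]
```

Two equally good equilibria and nothing in the payoffs to choose between them: which one a society lands on is exactly the convention question being discussed.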

0:24:28 SC: And a physicist would say it’s an example of spontaneous symmetry breaking because it didn’t matter whether we picked left or right, right? But picking the same one for everybody is definitely advantageous.

0:24:39 CO: Right, exactly. And so you can ask why in certain societies, or how do we break the symmetry and come to different conventions where we all do something that helps us coordinate our behaviour. But ultimately, it could have been something else. There are lots of other things we can do in some situations. In this case, there’s just two options.

0:24:58 SC: Yeah.

0:24:58 CO: But if we look at, say, what time we start work in the morning? There’s actually a lot of options we might have had, and societies have to somehow come to a convention about when they’re gonna do that. If they’re gonna coordinate things like when is daycare open, when are restaurants open for lunch… Well, back when daycare and restaurants were open.

[chuckle]

0:25:18 SC: Yeah, we imagine. So look, they’re gonna be listening to this podcast 100 years from now. So we gotta at least speak in universal terms.

0:25:24 CO: Oh yeah. I probably… Yeah. [chuckle] Yeah.

0:25:30 SC: There might not be restaurants any more, but for different reasons. But you mentioned, when you talk about the times that we start working, or the times that we have dinner, there’s clearly coordination going on and different societies answer that question differently. This is probably leaping ahead a little bit, but how much do we actually compare to data in this field? We build a model based on imagining that certain dynamics arise from certain games and then we test it empirically?

0:26:00 CO: It really depends what you’re trying to do with the model. And so this is a big picture thing that I’m always trying to convince people about models. A lot of people will look at simplified models like games or game-theoretic models and say, “Well, this is so simple, what can you even do with this? What can it tell us about the real world?” I think the answer totally has to do with, what are you doing with it? What are you trying to find out about? How are you using it? What inferences are you making on the basis of it? And then you also see, that depending on what kinds of scientific inferences people are trying to make, what they’re investigating, they use games very differently. So sometimes people will build models and compare them to data sets of what real humans do. Sometimes people will look at a human behaviour, like the fact that, say, everyone in India drives on the left side of the road, and everyone in Canada on the right side, and say like, “Okay, well, can we explain this cross-cultural diversity but inter-country regularity? Well, maybe we can use coordination games to improve our understanding of patterns.” Like that. And in that case, you would not be comparing specifically to a very detailed data set, but essentially looking at some empirical phenomena and then making some comparison between what you see from the model and that.

0:27:19 CO: And then sometimes people use games to do almost kind of proof of possibility modeling. Someone will say, “It’s impossible that X… ” And someone else will say, “Well, no. I can build a game theoretic model that shows that is perfectly possible.” So to give an example from philosophy, there is this tradition of natural language skepticism. Quine was someone in this tradition who argued you can’t evolve language naturally. Basically, the idea being, if you wanna have a convention for language, you need language to establish it, to tell each other, “Okay, well, ‘tree’ is gonna mean tree.” But modeling work by David Lewis and then Brian Skyrms showed that, well no, you can. You can have extremely simple agents learn how to coordinate on a signal to mean something without needing any sort of prior language. So that’s modeling where you’re just showing a proof of possibility when someone has made an impossibility claim. So, yeah. Does that make sense? It’s used in very different ways and has very different connections to data across different cases.
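The Lewis–Skyrms result can be reproduced with a tiny simulation: two urn-style learners, no prior language, just reinforcement of whatever happened to work. This is a toy sketch in the spirit of those models, not the published ones; the parameters and structure are my own:

```python
import random

random.seed(1)

# Lewis signaling game: 2 world states, 2 signals, 2 acts.
# The sender sees the state and picks a signal; the receiver sees the
# signal and picks an act. Both get paid only if act == state.
# Learning is simple urn-style reinforcement: a successful choice
# gets an extra "ball" in the relevant urn.
sender = {state: {0: 1.0, 1: 1.0} for state in (0, 1)}    # state -> signal weights
receiver = {sig: {0: 1.0, 1: 1.0} for sig in (0, 1)}      # signal -> act weights

def draw(weights):
    """Sample a key with probability proportional to its weight."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

for _ in range(20000):
    state = random.randint(0, 1)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                       # coordination succeeded: reinforce
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

# After learning, each state should map to a signal that the receiver
# reliably translates back into the matching act.
success = sum(
    draw(receiver[draw(sender[s])]) == s for s in (0, 1) for _ in range(500)
) / 1000
print(f"coordination rate after learning: {success:.2f}")
```

The agents start with no shared meanings at all, yet they end up coordinating on one of the two possible signaling conventions, which is the proof-of-possibility point against the skeptical argument.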

0:28:28 SC: Yeah, actually, I think it makes perfect sense. In some ways, there’s a sort of the model building aspect for some particular empirical phenomenon you’re trying to understand. But there’s also just the theoretical aspect of understanding what it means to be rational. You might say like, “Well, there’s no rational way to do this,” but then when you dig into the details of people playing by the rules of this game, you see some emergent phenomenon that you might not have guessed.

0:28:52 CO: Yeah, and I… Also, here’s another one, another role that I think often game theory can play. You’re looking at certain cultural patterns and you’re trying to reason about them. And then sometimes it plays a role as an aid to reasoning. It’s not that you’re doing something much different than you would be doing just thinking about, “Okay, what do I see in the data empirically, that humans are doing?” But it kind of helps you organise your thoughts. So I think of Cristina Bicchieri’s work on norms, which uses a lot of game theory as falling under that kind of category. It’s not that she’s doing these super complicated analyses, but she’s saying like, “Okay, well, what if we used a game to think about norms this way, and what if we thought of norms as changing the payoffs of the game in that way?” So that’s kind of another way you can use games.

0:29:41 SC: Right. And in particular… There’s just so much to talk about here, I’m sort of tongue tied about it, but…

0:29:47 CO: [laughter] I know.

0:29:47 SC: You did say we’re gonna talk about evolution, so what is the particular way that an evolutionary biologist would think about the evolution of different traits using this kind of game theory technology?

0:30:00 CO: Yeah, good. And so this really starts to get into the area that I work in more, which is called evolutionary game theory. And it’s related to other kinds of evolutionary modeling. So this area really started taking off in biology, in large part due to the work of John Maynard Smith. And the idea was something like this, “Okay, we have these games. They’re representing strategic scenarios.” Mostly the way people had been analysing games up to that point, was assuming, “Okay, the actors involved are rational or semi-rational. They’re gonna sit down, they’re going to engage in some kind of deliberative process and then decide what strategies they’re gonna play or what actions they’re gonna take.”

0:30:45 CO: And biologists thought, “Well, we have animals, organisms of all sorts, engaged in strategic interactions all across the biological world. They’re bargaining with each other, they’re coordinating with each other, they’re signalling to each other, they’re behaving altruistically, pro-socially, non-pro-socially, but most of them, we know, aren’t sitting down and engaging in a rational deliberation.” Instead, a lot of their behaviours have evolved. And so the idea behind evolutionary game theory is, you instead take a population of agents, you assume they have certain behaviours that they might play in a game. So maybe some of them are altruists and some are non-altruists or whatever it is. And then you ask, “How does this group evolve, and can that explain things about strategic behaviour in the biological world?”

0:31:33 SC: And so I know that in individual games of incomplete information, like poker or even just like a sport out there on the field, where you don’t know what the other players are gonna do, mixed strategies are often the best ones to use where you have a certain percentage of the time you do one thing, a certain percentage you do the other. But I presume that in a complete information game, there’s something that you should be doing. But then you… When you introduce the population idea, does it become again, something where the equilibria are more subtle, where a certain fraction of the population does one thing and a certain other fraction does another thing, even though they’re both… It’s clear that one is overall better, but it’s also better that there’s a dispersal of strategies.

0:32:15 CO: It can depend on the situation. So first of all, I’ll point out that sometimes when you have games with complete information, it’s still best to mix up your strategies. So if you look at rock paper scissors, even though everyone knows exactly everything about how that game works, your best thing is to randomly mix among your strategies.
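That claim about rock paper scissors is easy to verify directly: against a uniform mix, every pure reply earns the same expected payoff, so no deviation helps, which is the defining condition of a mixed-strategy Nash equilibrium. A quick sketch:

```python
# Rock-paper-scissors payoffs for the row player: win = 1, lose = -1, tie = 0.
# Rows/columns: 0 = rock, 1 = paper, 2 = scissors.
RPS = [[0, -1, 1],
       [1, 0, -1],
       [-1, 1, 0]]

def expected_payoffs(matrix, opponent_mix):
    """Expected payoff of each pure strategy against a mixed opponent."""
    return [sum(p * matrix[i][j] for j, p in enumerate(opponent_mix))
            for i in range(len(matrix))]

uniform = [1/3, 1/3, 1/3]
payoffs = expected_payoffs(RPS, uniform)
# Every pure reply earns the same (zero) expected payoff against the uniform
# mix, so nothing beats it: random mixing is the equilibrium strategy even
# though the game has complete information.
```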

0:32:31 SC: Fair enough. Yes. That’s true.

0:32:33 CO: So I’ll just point that out and then… Oh sorry. What was the…

0:32:37 SC: I’m just asking, is there analogy between that incomplete information and the population dynamics, so that you get the equivalent of a mixed strategy in the sense of different members of the population doing different things.

0:32:48 CO: Alright. So we’re sort of like diving… It’s not the general introduction that we’re diving into, like really…

0:32:54 SC: Yeah. That’s why we’re here.

0:32:55 CO: More theoretical stuff. Right. So when you’re looking at individual behaviours, you have these mixed strategies where sometimes the best thing to do, is sometimes do one thing and sometimes do the other thing. And that can kind of confuse your opponent, or make it impossible for them to predict what you’re gonna do, and then force them to behave in certain ways. Now, in biological populations, you can have an equivalent of that, which can work in different ways. So one thing is that you might have part of the population that behaves one way all the time, and then another part that behaves another way, but all the time. And so that’s a kind of thing where everyone has a set behaviour or strategy, but there’s a mixture or variation, if you look across the whole population. Then you could have another situation, which is that everyone kind of learns to have some mixed types of behaviour, or many individuals end up having some mixed or, sorry, evolved to have some kind of mixed behaviour. So for example, escape behaviour is often quite stochastic. The bunny doesn’t run right all the time, or left all the time. They mix, right?

0:34:06 SC: Yeah.

0:34:06 CO: And so it’s not that there are some right bunnies and some left bunnies. [chuckle] Instead all of the bunnies have some randomness in their strategy.

0:34:14 SC: I guess what I have in mind is something like… You know about Richard Lenski’s long-term evolution experiment?

0:34:21 CO: I don’t know. Is this the one where they keep doing the bacteria…

0:34:26 SC: Yeah, they have these bacteria, and they just breed them in generations and generations and generations and generations, and throw away most of them at every step, but keep a certain fraction. And what they found, and it’s in exactly the same culture, whatever it is, food source that they have, and some small fraction of the bacteria evolved away to eat a different kind of sugar than the other bacteria did. And what was interesting was that this new strain, the sugar that they were eating was less efficient. It was less good as a food source, but of course, there was also less competition for it, since all the other bacteria were eating something else. And I’m wondering if there’s a game theoretic explanation for that.

0:35:09 CO: Well, so I don’t think that would probably be best handled using a game, and the reason… Well, usually when you’re thinking about a game, you’re thinking about, right, an interaction between two organisms where they have different choices and they care what the other one does. This probably, you’d wanna use a model that was less game theoretic and more about utilizing underused resources if you wanted to think about this kind of scenario.

0:35:38 SC: Okay. So I guess, yes, there’s limitations on how far the game theoretic paradigm works here. Maybe you could sort of force it in by saying that there’s two strategies eat this molecule or eat that molecule or something like that, but that they…

0:35:51 CO: Yeah, and that the whole population would wanna have a mixture, but… Yeah, I think it’s not a sort of paradigm case where you’d wanna… Where game theory would be the best way to go to represent that population, I don’t think.

0:36:02 SC: If it’s like the rest of science, I’m sure that the game theorists just think that they should just use game theory for everything, right? Like when I…

[laughter]

0:36:07 CO: We hammer an awful lot of nails with the game theory hammer, whether or not it’s the best way, for sure.

0:36:15 SC: So the other terminology I want to get on the table was that of an evolutionarily stable strategy. So this is related to, but not the same as the Nash equilibrium for these games.

0:36:26 CO: Yeah. There are formal relationships. So this is kind of a solution concept for an evolving population. And the idea is to ask, “Alright. If we have… What strategies where… ” Sorry. “What are the strategies, where if the whole population is playing that strategy, it’s stable to invasion by other strategies?” So if we introduced a little mutant who was doing something else, would this population still be stable? Would that mutant die-off? And so there’s a set of conditions that you can ask about payoffs that will tell you whether a certain strategy is evolutionarily stable. So things like, does it do better against itself than other things do against it? If so, it’s gonna be stable because other variants or mutants are gonna die off when they enter the population, ’cause they just won’t do as well against this kind of dominant strategy that exists. Or, if mutants do kind of equally well against the existing strategy, how do the mutants do against themselves versus the kind of dominant strategy? And so you can analyse whether our strategy is evolutionarily stable by asking questions about these relationships between the payoffs that different strategies get.
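The conditions just described can be written down directly. A minimal sketch of Maynard Smith's ESS test; the payoff numbers for the prisoner's dilemma and hawk-dove games below are illustrative choices, not taken from the conversation:

```python
def is_ess(payoff, s):
    """Maynard Smith's conditions: strategy s is an ESS if, for every mutant
    t != s, either E(s,s) > E(t,s), or E(s,s) == E(t,s) and E(s,t) > E(t,t)."""
    for t in range(len(payoff)):
        if t == s:
            continue
        if payoff[s][s] > payoff[t][s]:
            continue  # strict condition: the mutant does worse against the incumbent
        if payoff[s][s] == payoff[t][s] and payoff[s][t] > payoff[t][t]:
            continue  # tie against the incumbent, but s beats t in mutant-vs-mutant play
        return False
    return True

# Prisoner's dilemma (rows/cols: 0 = cooperate, 1 = defect): defect is an ESS.
PD = [[3, 0],
      [5, 1]]

# Hawk-Dove with V=2, C=4 (rows/cols: 0 = hawk, 1 = dove): neither pure
# strategy is an ESS, so a stable population must mix hawks and doves.
HD = [[-1, 2],
      [0, 1]]
```

The hawk-dove case illustrates the earlier point about mixtures: since neither pure behaviour is stable against invasion, evolution pushes the population toward a blend of the two.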

0:37:37 SC: Is it… This might be asking too much, but is it possible to give an example of where the Nash equilibria are or are not evolutionarily stable?

0:37:47 CO: Yes. So… Oh God, I don’t wanna… Okay, there are conditions that one could accurately state about the relationship between these, but I don’t wanna say it and mess it up, it’s the kind of thing I always look up on Wikipedia every time. But I’ll give you an example where the Nash equilibrium is the evolutionarily stable strategy: just the prisoner’s dilemma. In a totally basic population, the Nash equilibrium, defect, is also the evolutionarily stable strategy. So if you have a population without adding extra bells and whistles, it will just evolve to all defectors.

0:38:27 SC: And because we can just easily see that because, if somehow they manage to coordinate and all become co-operators, they’re doing better off but one defector wanders into the room and individually does better off, and then everyone goes, “Ha, defecting might be better after all”, or something like that.

0:38:42 CO: Right. Or the defectors in the kind of biological situation would start to have more offspring.

0:38:48 SC: Yeah, okay. Well, I guess good. Maybe we should be more specific about that. How do we reconcile or marry the usual way of thinking about evolution, where we have fitness and replication rates and survival rates, to the game theory lingo in this way?

0:39:07 CO: So I’m gonna pull this apart into two things. When we’re looking at evolutionary game theory, some models are really thinking about evolution by natural selection and representing that, and so the idea is you have these actors engaged in strategic scenarios, and then the payoff they get is some kind of benefit that allows them to have more offspring on average. And so you say, “Okay, next generation, the ones who did well in these strategic interactions are gonna have more offspring, there’s gonna be more of that kind of behaviour in the population.” And so that will determine an evolutionary trajectory towards increasing the prevalence of strategies that do well.
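The natural-selection version of this is usually captured by the replicator dynamics: a strategy's share of the population grows when it earns more than the population average. A minimal discrete-time sketch, with illustrative prisoner's dilemma payoffs of my own choosing:

```python
def replicator_step(freqs, payoff, dt=0.1):
    """One discrete step of the replicator dynamics: each strategy's share
    changes in proportion to how its average payoff compares to the mean."""
    n = len(freqs)
    fitness = [sum(payoff[i][j] * freqs[j] for j in range(n)) for i in range(n)]
    mean = sum(f * w for f, w in zip(freqs, fitness))
    return [f + dt * f * (w - mean) for f, w in zip(freqs, fitness)]

# Prisoner's dilemma payoffs (rows/cols: 0 = cooperate, 1 = defect).
PD = [[3, 0],
      [5, 1]]

freqs = [0.99, 0.01]   # start with a 1% defector "mutation"
for _ in range(2000):
    freqs = replicator_step(freqs, PD)
# Defectors always earn more than the population average here, so their
# share grows every generation and they sweep the population.
```

This is the population-level picture of the point made a moment ago: the mutant defector has more offspring each round, so the cooperative state cannot persist.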

0:39:49 CO: Now, people also use evolutionary models to think about cultural evolution and even individual learning within a lifetime, and there you’re making slightly different assumptions, but your assumption might be something like this. There are different possible driving behaviours, some people might try out the right side or the left side, and we’d wanna be thinking about way back when cars, or probably actually earlier, wagons or horses were first being used on roads. So people might be trying different directions and people might learn that, “Okay, when I go to the right, that tends to do better for me and so I’m gonna stick with that behaviour more often.” So I might learn, based on the environment that I’m in, to kind of go with one strategy, so I’m not literally having offspring who go on the right side of the road, I’m learning. Then I might be passing that learning on to, say, my offspring or my friends, so there might be some kind of cultural transmission or imitation of successful behaviours.

0:40:47 SC: Well, yeah, so this is important because in the real world, at least in the cultural world, we can not only sort of see the situation we’re in and try to optimize for whatever reward you want, but we can also talk to people, [chuckle] we can signal, right? And so part of the fun of this approach to evolution is talking about the emergence of signalling and cooperation and things like that.

0:41:12 CO: Yeah, yeah. Exactly right.

0:41:16 SC: Do we talk about the robustness of how certain signals develop and it’s useful? And then you can start saying, “Well, what if someone starts lying or being deceitful somehow?”

0:41:29 CO: Yeah. So this is sort of in the book that prompted this discussion, Games in the Philosophy of Biology. Basically the first half of it covers signalling behaviours of different sorts and especially the evolution of signalling behaviours as they’ve been analysed by philosophers of biology and different biologists. Now, sometimes you can apply these models to human language, though you have to be very careful, because they’re really pretty simplified models to think about the complexities of language, so you wanna apply them to think about certain particular aspects of language. More often, people are applying them to think about signalling in the biological world, like the peacock tail or the large jumps of the antelope when they see a predator and things like that.

0:42:26 SC: Right. And so, does it work? Do we think that the understanding of why… There are a lot of goofy things out there in the natural world, in terms of things that look like tremendous wastes of energy and resources, but that we think of as serving the purpose of sending signals. Is that something that game theory has explicated for us, or is it sort of an aspiration, are we skeptical?

0:42:50 CO: Yeah, it has. Well, so usually you wanna break up this literature on game theory and signalling approximately into two big camps, and one is common interest signalling: signals where the two individuals involved want the same thing. So this might be like, my husband is at the beach and he wants me to come and bring some stuff to the beach, and we’re talking on the phone and I say, “Well, how’s the weather?” And he says, “It’s really sunny,” and I’m like, “Okay,” and then I pack a big sun umbrella. So we both want the same thing, we want the right items there at the beach. So that would be a common interest type of scenario.

0:43:29 CO: A conflict of interest type scenario is the other area that you wanna separate off, where the sender and the receiver of information may or may not have the exact same interest. So a classic case from economics has to do with hiring in the job market where a company is trying to hire the best candidates, but every candidate wants to be hired. So some candidates have interests that line up with the company, they’d both like for that person to be hired, but some candidates don’t. They might not be a very good candidate, but they still wanna get hired, and so we break signalling up into approximately these two different camps. Then we can explain and understand an awful lot of things in human social world and the biological world using models in these two areas.

0:44:18 CO: So if we think about the common interest models, they really make clear why signalling is going on basically all the time. I mean, every… I shouldn’t say every organism, I don’t know, but stuff all the way down to little tiny bacteria and yeasts and plants, and of course insects and all the little wiggly stuff in the sea are signalling to each other, and the reason is that there are all these benefits to being able to coordinate your action and often the only way to do it is to somehow send information between organisms. And then there’s lots more detail on how we can use these models to think about the different ways that we all engage in common interest signalling. And then this other area, the conflict of interest signalling, helps explain some of the stuff that you were just talking about. Why do we see these tremendously weird seeming adaptations like really long tails, or if you ever look at the face of a turkey, it’s just a mess, like this weird blue goo thing all over its face and neck. Why? Or why do mantis shrimp have 16 different kinds of cones to see a bajillion colors presumably? We can help explain a lot of those things using conflict of interest signalling theory.

0:45:49 SC: And this does seem like a field that is potentially ripe for Just So Stories. So you can come up with an explanation based on game theory for this or that weird behaviour, but then how do you know if it’s the right one? How do we test these ideas?

0:46:07 CO: Right. Well, game theory is in part the answer to how we know. In many ways, evolutionary modeling, including evolutionary game theory, helped throw out a lot of Just So Stories in biology and provide at least some sort of way to constrain what sorts of stories or narratives people could tell about the evolution of certain traits. So going back to group selection, before this work on group selection and altruism, a lot of people would say, “Oh well… ” They’d say things like, “The reason we have senescence, death of organisms earlier than strictly necessary, is so that the younger birds can live their lives and have more food,” or whatever. So that’s a very speculative narrative. If you actually take a model and put numbers on what are the benefits to birds from dying young and dying old and how might that translate into offspring, you have a way to think more rigorously about evolutionary processes.

0:47:12 SC: Is it also possible then to sort of discuss… I don’t know what the word is, some sort of fragmentation of populations into different groups, where some groups say, “Well, we’re gonna have one strategy and we’re gonna cooperate with each other, and we’re gonna signal to each other so we know whether or not you’re in the group.”

0:47:32 CO: Yeah, so we do see that kind of thing happening, especially I would say in human groups. So fragmentation or speciation can happen for different reasons. So one can be that you’re just in different locations, right? That these organisms are over on an island, and then they might evolve to have different behaviours than these ones over on the mainland as a kind of accident of history. Or as you’re pointing out, sometimes you can have situations where you can have some kind of signal of in-group membership that identifies members of a population and gets them to treat their own group differently than another group. And people sometimes think that’s a way to think about in-group signalling in human groups, you know, that different cultures or different tribes or whatever might dress in different ways, engage in different sorts of behaviours, and that would be used as really relevant information about how you can interact strategically with someone who displays in a certain way.

0:48:34 SC: Yeah. Let’s just boldly hop right from biology to human beings and how they deal with each other. I’ve heard it claimed that signalling plays a much more important role in human behaviour even than we might guess. Like the only reason people go to college is to signal some achievement, or the only reason people wear the right clothes or go to the right movies or whatever. Do you think… Is that something we can analyse game-theoretically?

0:49:00 CO: Yeah, well, so I’m guessing these claims really relate to this literature I was mentioning on conflict of interest signalling. So there’s this famous paper looking at job seekers, where they say, let’s suppose there are high quality candidates and low quality candidates for jobs and the company wants to identify the high quality candidates. But how can they do that, given that everyone’s gonna portray themselves as high quality? Well, they say, “What if it costs more for the low quality candidates to go to college, it might be more work for them,” is the idea in this original paper. Then maybe we can have a situation where only high quality candidates will be willing to put in the work to go to college, and so the diploma isn’t valuable to them because of the knowledge they gain, but rather as a signal of their ability to pass classes and things like that. And so people have applied that kind of thinking to various things in the human realm, where people do things that are very hard to do and then it can become kind of a signal of their abilities to others.
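The separating logic of that classic job-market signalling setup can be shown with toy numbers; the wage premium and cost figures below are invented purely for illustration:

```python
# Hypothetical numbers for a separating equilibrium in a costly-signalling
# model: the wage premium for being identified as high quality is 10, while
# college "costs" a high-quality candidate 6 units of effort and a
# low-quality candidate 14.
WAGE_PREMIUM = 10
COST = {"high": 6, "low": 14}

def goes_to_college(quality):
    """A candidate signals (attends college) only if the premium from being
    read as high quality outweighs their personal cost of the signal."""
    return WAGE_PREMIUM - COST[quality] > 0

# Only the high-quality type finds the signal worth its cost, so the diploma
# separates the types even though, in this model, it teaches nothing at all.
```

The crucial feature is the cost asymmetry: if college cost both types the same, everyone would attend and the signal would carry no information.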

0:50:08 SC: I guess in some sense it’s bringing into question what it means to be rational, like the… You’re rational in some sense, you’re playing a game, and you’re getting the reward that you want, but it leads to behaviour that might from some remove be perceived as less than completely what we would like to aspire to as rational beings.

0:50:29 CO: Yeah, so that’s the really interesting thing about this whole literature on conflict of interest signalling and then using costs to guarantee that signalling can happen. And this really relates also to biology, because people bring up the same thing. Like, a peacock tail maybe is reasonable in some way, if it can get the peacock to mate, but it costs so much, right? It’s a huge expenditure of protein, it’s hard to drag this thing around, can it really be a reasonable adaptation? And I think it depends how you’re thinking about these things. I mean, if you count your social environment as a legitimate part of your environment, then it’s totally reasonable to expend a lot of costs to signal certain things about you that are gonna get you social benefits, or in the case of the peacock, reproductive benefits.

0:51:24 SC: And that’s clearly true that it is part of your environment, right? I guess it’s just, “Well, this is why I’m a physicist”. ‘Cause it’s too hard to disentangle all the different factors that are gonna enter an equation like this.

0:51:35 CO: Yeah. [chuckle]

0:51:36 SC: But you wanna go even further. I mean, I don’t know how much of this is a sort of standard lore, but in your other book, you talk about the origins of inequality, the origins of unfairness, and you point out that we can use game theory to explain why it might seem rational to do something that ends up with a deeply unfair division of labour in our society.

0:52:01 CO: Yeah. I mean, I think this is important to understand because I’ve seen analyses before where people will say something like this, you know, if we look at bargaining games and bargaining in a whole society, we should always expect to end up at a fair outcome because nobody’s gonna accept that they get an unfair amount, and so they’re just going to refuse to play and we’ll end up with fairness.

0:52:28 SC: Yeah.

0:52:29 CO: Now, I think that analysis ends up being a terrible [chuckle] kind of analysis because people are rational given the social situation they’re in or they make choices given the social situation they’re in. And this by the way really relates to previous work by people like Ann Cudd and Susan Okin. So you can have these situations where, say, you have two identity groups, these could be men and women, or maybe identity groups of different races, and they’re engaged in maybe some kind of bargaining scenario. Who has to pay more or less, who gets the really prime jobs or not, who can sit at the front of the bus or has to sit at the back of the bus? So there are these kind of bargaining situations they engage in. And you can use evolutionary models to show that you’ll end up at inequitable conventions a lot of the time where one group’s getting more and one group’s getting less. But if you look at the group getting less, at every stage of the way, they were changing, like learning and adapting in ways that made completely perfect sense given the situation they were in. So they’re taking the best behaviours they can to get the best payoffs and outcomes they can given the pressures they’re under from another group.

0:53:44 SC: And so is it just because, again, from a sort of physics perspective, I think about these in terms of energy landscapes and you sort of fall to the valley of a minimum and then you’re metastable there, even though it’s not the lowest point, where there could be some even better vacuum that you just can’t find. Is it an accident of history that you end up in this inequitable division, or was this sort of more globally rational, it’s just sort of an inevitable thing that some people are gonna get screwed by the system?

0:54:17 CO: Well, so here’s the kind of dialectic I go through in this book. I show that if you take cultural evolutionary models, suppose you have a group of people and they’re all bargaining with each other, in one of these models, most of the time, fair outcomes emerge, unless you split the group into different identity groups. And as soon as you do that, you get inequality emerging, and that basically has to do with this way of breaking symmetry. As soon as you have two groups, you have these identity markers where you can say, “Well, I’m type A. You’re type B and type As treat type Bs in this kind of way.” And so that allows inequality to emerge. Now in those models, it’s just an accident of history, there’s nothing about either group that makes one more or less likely to get better outcomes. So in that case, it really is just this kind of symmetry breaking type of argument: by random chance, these people were being more aggressive in their bargaining early on and these people less aggressive. And then they all learn to end up at this kind of pattern where one group is getting more.

0:55:21 CO: So I kind of start showing, all you have to do is break into identity groups, and you already have this happening. But then of course, if we wanna think about real cases or slightly more realistic cases, there are often going to be certain factors that can help explain why one group or another is gonna end up with more in bargains. So if we’re thinking about gender, facts about physical sex differences are sort of an important symmetry-breaker in determining who’s gonna end up with better outcomes. If we think about cultural groups or racial groups or groups like this, there are gonna be facts having to do with power, economic advantage, maybe with minority status, all of which might matter to which group ends up with more or less when we come to conventional bargaining outcomes.

0:56:21 SC: Yeah, no, I can imagine that… Again, the messiness of the history is really crucial here. But maybe it’s worth digging in a little bit into the technicalities of this claim. ‘Cause I remember I was struck by it in your book that just the existence of different identification markers that say you’re in one group versus another changes what the rational strategy is basically because there’s a new possibility opens up, I will have this strategy against members of my group and this other strategy against other people.

0:56:53 CO: Exactly. And so I think that the kind of really core… One of the most core interesting observations to communicate here is this thing where if you think about bargaining, if you and I are engaged in a bargain, there are many different kinds of outcomes we can coordinate on, ones where I get less and you get more, ones where I get more and you get less, or ones where we equally split our resources. So in models, this last type, the equal split outcomes, are special, because you and I can do the exact same thing and perfectly divide our resource. So it’s a kind of special equilibrium where you’re being fair to each other. And other philosophers like Jason Alexander and Brian Skyrms and economists like Peyton Young have used this to try to explain why we have norms for justice and fairness, right? So you have the symmetric equilibria that emerge when you don’t have groups, and then when you just add identity markers, so that I can say, “Well, I’m this type and you’re that type,” a whole group of new equilibria become possible, which are equilibria where members of one group get some amount more and members of the other group some amount less.

0:58:05 CO: You literally can’t evolve these in the other kinds of models but in models with groups, not only can you evolve that sort of equilibria but they just kind of pop out of the dynamics all the time. And this is, I mean you’d wanna say it’s a super robust finding, it sort of doesn’t matter how you do the model, you still get this happening because of this new asymmetry where you’re able to identify who’s of what sort of group and then treat them differently.

0:58:29 SC: Is it worth actually sort of giving it details about an example game in which this would be the case?

0:58:36 CO: Yeah, so the bargaining game is a perfect example game. So say we have a bargaining game where you and I are dividing something, like people often will say a pie. And let’s say here are our strategies in the game: we could each ask for a third, a half or two-thirds of the pie. So that’s a specific game. We have our strategies, we have players. If we imagine a population playing this game, if we have just a population where people don’t have any special identity, you almost always get everyone demanding one half.

0:59:14 SC: So is there a stipulation that if we both demand two thirds, then we get nothing or something like that?

0:59:18 CO: Oh sorry, yes, I should have been more specific. So in this game, the payoffs are that you get what you demanded, as long as you don’t demand too much. So if we both demand two thirds, we’re sort of pushing too hard and then we get a poor payoff for pushing too hard. And so the equilibria of the game are that we can either both demand half, or you get a third and I get two thirds, or you get two thirds and I get a third.

0:59:43 SC: Yeah. Which is an interesting point for those who are not professional game theorists in the audience: all three of those possibilities are, technically speaking, equilibria. One is fair, 50-50, but if one person always asks for two thirds and the other one always asks for one third, that also satisfies the criteria for being a stable set of strategies for the game.

1:00:05 CO: Right. And if one person asked for 99% and one person for 1%, that also satisfies the criteria for being a stable equilibrium of the game.

[chuckle]
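The game described here, a version of the Nash demand game with demands of a third, a half, and two thirds, is small enough to check its pure equilibria by brute force. A minimal sketch, with the payoff rule taken from the setup above:

```python
from itertools import product

DEMANDS = [1/3, 1/2, 2/3]

def payoff(my_demand, their_demand):
    """Nash demand game: you get what you asked for, unless the two demands
    together overshoot the whole pie, in which case you get nothing."""
    return my_demand if my_demand + their_demand <= 1 + 1e-9 else 0.0

def is_nash(a, b):
    """Neither player can gain by unilaterally changing their demand."""
    best_a = max(payoff(d, b) for d in DEMANDS)
    best_b = max(payoff(d, a) for d in DEMANDS)
    return payoff(a, b) >= best_a and payoff(b, a) >= best_b

equilibria = [(a, b) for a, b in product(DEMANDS, repeat=2) if is_nash(a, b)]
# Three pure equilibria: (1/3, 2/3), (1/2, 1/2), and (2/3, 1/3). The fair
# split is an equilibrium, but so are both lopsided conventions: if you
# always demand a third, my best reply is to demand two thirds, and neither
# of us can do better by changing alone.
```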

1:00:15 SC: And then so how does this change when you get groups in there?

1:00:18 CO: And so yeah, the idea is when you get groups, the fact that I know you’re in one group and I’m in another can give me this extra information where I can say, “Well, people in that group always demand a third, so when I meet members of that group, I can demand two thirds, I can ask for more.”
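A toy simulation in the spirit of these models (this is a sketch of my own, not the specific models in the book): two groups learn their cross-group demands by simple reinforcement, with each group collapsed into a single urn as a representative-agent simplification. Different random histories settle into different conventions, usually complementary ones, and often the unequal ones.

```python
import random

DEMANDS = [1/3, 1/2, 2/3]

def payoff(mine, theirs):
    """You get your demand unless the two demands overshoot the pie."""
    return mine if mine + theirs <= 1 + 1e-9 else 0.0

def bargain_between_groups(rounds=30000, seed=None):
    """Cross-group play of the demand game with Roth-Erev reinforcement.

    Each group keeps one urn of weights over the three demands; a demand
    is reinforced by the payoff it earned against the other group's draw.
    """
    rng = random.Random(seed)
    weights = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]  # one urn per group
    for _ in range(rounds):
        i = rng.choices(range(3), weights=weights[0])[0]
        j = rng.choices(range(3), weights=weights[1])[0]
        weights[0][i] += payoff(DEMANDS[i], DEMANDS[j])
        weights[1][j] += payoff(DEMANDS[j], DEMANDS[i])
    # the convention each group entrenches is its highest-weight demand
    return tuple(DEMANDS[max(range(3), key=w.__getitem__)] for w in weights)

# Run many populations: most settle into a complementary convention
# (demands summing to the whole pie), and a share of those are unequal,
# purely as an accident of early random history.
results = [bargain_between_groups(seed=s) for s in range(30)]
```

Nothing distinguishes the two groups except the label, which is the symmetry-breaking point: the unequal conventions arise without any difference in the groups themselves.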

1:00:34 SC: Right. And the idea is that this kind of reasoning can actually help us understand, for example, how we divide up household chores. Like you, you’re very careful, you said, “Look, there are differences physiologically between men and women on average.” So some part of the explanatory role is being played by that, but there’s also just this symmetry breaking that screws some people over.

1:00:58 CO: Yeah. Well so if we look culture to culture, there are cultures that have pretty gender egalitarian norms and conventions where there isn’t a big power imbalance between men and women, where there maybe isn’t a big difference in how much labour they do altogether and then there are cultures that are much less gender egalitarian. And so part of the point here is that these are conventions that emerge in part due to chance or also random historical factors or accidents of history where we end up with groups that are much more or much less fair once we have a kind of division into two genders.

1:01:45 SC: Yeah. And you make the point that, I’m not quite sure how to put this, but they get baked in: a tiny little variation in some initial stage can lead to huge imbalances down the road.

1:01:57 CO: Yeah. Part of the point I make is something like this: The first most important role that gender plays in many societies is as a locus for dividing labour. And division of labour is not as important in modern societies as it used to be, though we still divide labour. If I say who in that household is cleaning the windows versus taking out the trash, you’ll make certain guesses based on the genders of the people. But if we look at more traditional societies, it was completely necessary to divide labour, ’cause there were so many skilled tasks that required doing that were very hard to learn to do, and so having a rule where it’s something like, “Well, women will always be the ones who learn to make rope, men will be the ones who learn to fire ritual pottery, women will be the ones to dig tubers, men will be the ones to build houses,” really benefitted groups.

1:02:52 CO: And so you see these kinds of conventions emerging where gender becomes this really important locus for division of labour, and once gender becomes really important in determining who’s gonna do what, it also then becomes important in determining who’s gonna get how much and who’s gonna deserve what. That’s in part because people get different things based on what they’re doing. The person who controls food production might be in a special position within a family or a society to have more for themselves, demand more for themselves. And it’s also just because now you’ve created this meaningful social difference, where people can look at that difference and be like, “Well, we think that these kinds of people are different than those kinds of people; these ones are the rope makers, these are the ritual pottery makers. The rope makers, they’re the kind that deserve less than the pottery makers.” And so you can have these kinds of meanings and conventions attaching to categories that emerged for other purposes.

1:03:57 SC: Is it a slightly depressing conclusion in the sense that it was not some evil or unfairness in the hearts of people that made this happen but it was just the dynamics of evolution under the constraints of rational game players?

1:04:13 CO: Well, I try to be really careful about that, ’cause there is evil in people’s hearts sometimes, and there’s a lot of discrimination, and a lot of really morally wrong discrimination, and these results shouldn’t be taken as indicating that any of that is less morally wrong than we thought before, for sure. But I think the important thing to realise is that inequity has a life of its own, a tendency to arise via very innocuous things. If you have groups of people, if you have social identity groups within them, and if everyone tries to learn to do what’s best for them, inequity can just emerge from those preconditions. That helps explain why inequity based on social identity is so common cross-culturally, the rule rather than the exception, but it also tells us that it’s quite likely to happen and should require active responses rather than passive ones.

1:05:23 CO: I think there’s an idea in thinking about social inequality like, “We’ll keep trying to fix it and then we’ll get to a point where it’s fixed and then we’re done.” And I think this analysis says that’s not really the right way to think about it because these really basic forces having to do with learning and strategic interactions are gonna push us towards inequitable patterns almost on their own. And so it’s gonna be the kind of thing that you’ll wanna… You’ll have to always be thinking about, always be actively addressing if you don’t wanna have high levels of social inequity.

1:05:54 SC: Yeah. So there is some… But that is the depressing part of it.

1:05:58 CO: Yeah.

[chuckle]

1:06:00 SC: Well there’s this… It’s almost saying, or maybe it is saying, that an unequal division of resources and rights and privileges in society is the natural thing. It’s not saying who is supposed to be the winners and the losers necessarily, but it’s a rich-get-richer kind of phenomenon.

1:06:21 CO: Yeah, well let’s specify that when we say “natural”, we’d say something like, “It is the thing that emerges naturally,” but we certainly wouldn’t wanna give it the moral weight. Sometimes people think natural is the right…

1:06:34 SC: No, no, no, exactly. It’s not the good thing.

1:06:36 CO: Oh, that’s the good thing, right? Or, “We can’t help that, it’s natural.” And we wanna say one thing about it but not the other, right?

1:06:43 SC: The flipside is you can make an argument that people shouldn’t be as touchy or defensive about fixing it. You can just say, “Look, I’m not saying that people were evil. I’m just saying that there were evolutionary game-theoretic dynamics going on, and we need to actively take a role to counterbalance that.”

1:07:04 CO: I think it should give us some reason to think, and I don’t wanna say this about every discriminatory behaviour, but for some sorts of discriminatory behaviour, some reason to think, “If I were in that social position, I’d potentially be behaving the same way.” So it’s weird to say counterfactually that if I had been a man, a lot of things would be different.

1:07:32 SC: You wouldn’t be you, right. Yeah, that’s hard to say.

1:07:33 CO: If I had been raised as a man, I think I very reasonably would have learned… I would have been socialized to expect to get more than I’m socialized to expect to get as a woman and that would have been pretty natural for me. And so it maybe gives some locus for extending understanding to people who are behaving in discriminatory ways on that kind of grounds, that might be something you could take away from this.

1:08:04 SC: Well and also just to go back to the prisoner’s dilemma and the idea that if the two prisoners were allowed to talk to each other and work out a mutual cooperation strategy, then they would… Neither one of them would defect and they would get the best possible outcome. In the real world, we like to think that we can take this meta step back and be rational, and say like, “Okay, well there was this natural dynamics that ended us up in this equilibrium but it’s not in some sense what we want or what we value, so let’s take action against it.”

1:08:36 CO: Yeah, and actually I think that’s much more plausible when it comes to bargaining than when it comes to the prisoner’s dilemma. In studies of the prisoner’s dilemma, letting people talk and work out something beneficial for them does not guarantee that people won’t defect.

1:08:52 SC: Yeah.

1:08:52 CO: With bargaining, it’s different from the prisoner’s dilemma in that there are all these different equilibria, right? In the prisoner’s dilemma, the only equilibrium is the kind of bad one. In the bargaining game, the fair outcome is an equilibrium. Now just thinking like we can all sit down and talk about what we want and get something better might not quite get us there. Because if we think about an equilibrium say where those in one social group are getting more and the others are getting less, well the people getting more, it’s really good for them to get more.
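This contrast can be checked directly. Below is a minimal Python sketch using textbook prisoner’s-dilemma payoffs and a three-demand simplification of the bargaining game; both games are illustrative assumptions, not models taken from the conversation. The dilemma’s only pure-strategy equilibrium is mutual defection, while the bargaining game’s equilibria include the fair split alongside two unequal ones.

```python
from itertools import product

def pure_nash_equilibria(payoff, strategies):
    """Pure-strategy Nash equilibria of a symmetric two-player game.
    payoff(s, t) is the row player's payoff for playing s against t."""
    return [(s, t) for s, t in product(strategies, repeat=2)
            if payoff(s, t) >= max(payoff(x, t) for x in strategies)
            and payoff(t, s) >= max(payoff(x, s) for x in strategies)]

# Prisoner's dilemma with conventional payoffs (temptation 5, reward 3,
# punishment 1, sucker 0): only mutual defection is an equilibrium.
pd = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
print(pure_nash_equilibria(lambda s, t: pd[(s, t)], ['C', 'D']))
# -> [('D', 'D')]

# Mini bargaining (Nash demand) game: demand 1/3, 1/2, or 2/3 of the pie,
# and get it only if the two demands are compatible (sum to at most 1).
def demand_payoff(mine, theirs):
    return mine if mine + theirs <= 1 + 1e-9 else 0.0

print(pure_nash_equilibria(demand_payoff, [1/3, 1/2, 2/3]))
# -> the fair split (1/2, 1/2) plus the two unequal splits
```

The fair outcome being one equilibrium among several is what distinguishes bargaining from the dilemma, where no amount of coordination changes the unique equilibrium.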

[chuckle]

1:09:27 SC: Right. Right.

1:09:29 CO: And so just a conversation like, “Hey, what’s happening isn’t fair, let’s fix it up,” may or may not be persuasive. And if you look at what happens in the real world, outside this model world, often people engage in all sorts of special thinking or reasoning to justify why they ought to get more in a discriminatory situation, right? To justify the status quo that benefits them, and to have some reason why that should be okay. So a discussion alone won’t necessarily…

1:09:58 SC: I’ve no idea what you’re talking about. I’ve never met people who did that before…

1:10:01 CO: Yeah, right.

1:10:01 SC: This is some weird philosopher’s concoction that I’m unfamiliar with.

1:10:05 CO: Yeah.

1:10:07 SC: Let me make sure that I actually… that I understand the point of the bargaining game. So we’re dividing the pie, and roughly speaking, if there’s a known group of people who, for whatever reason, we know ahead of time are gonna demand two thirds of the pie, and I meet one of these people, then even though the equitable thing would be for me to ask for half the pie, if I know they’re gonna ask for two thirds, I actually come out better just by asking for one third. Is that a simplified but fair account of the origin of this bifurcation?

1:10:42 CO: Yeah, that’s the right kind of thing. And also socialization over a lifetime plays an important role. You come to expect, “These people are always gonna demand a lot from me, and if I want to get anything, I’m gonna have to make a concession, right? I’m gonna have to be accommodating.” And so you learn to behave accommodatingly towards some people, or in a more demanding, aggressive way towards other people. Yeah.
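The payoff comparison behind this can be written out explicitly. In this minimal sketch (the specific menu of demands is an assumed simplification, not anyone’s actual model), a bargain succeeds only when the two demands fit in one pie, and against someone known to demand two thirds, asking for one third is the only demand that earns anything:

```python
from fractions import Fraction

# You get your demand only if the two demands together fit in one pie;
# otherwise the bargain breaks down and both sides get nothing.
def payoff(my_demand, their_demand):
    return my_demand if my_demand + their_demand <= 1 else Fraction(0)

third, half, two_thirds = Fraction(1, 3), Fraction(1, 2), Fraction(2, 3)

# Compare my options against an opponent who always demands two thirds:
for d in (third, half, two_thirds):
    print(d, payoff(d, two_thirds))
# Demanding 1/3 earns 1/3, while 1/2 and 2/3 both earn 0; against a known
# two-thirds demander, the accommodating demand is the best response.
```

This is exactly why learning to concede can be individually rational even though the resulting split is inequitable.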

1:11:10 SC: And the hope is that the real world case is a little bit less idealized than that, and there are people who all their lives have been getting two thirds of the pie, but if you ask them, they will say they’re big believers in equitability and fairness, and maybe you can get them to switch their strategy by hook or by crook.

1:11:30 CO: Yeah, and we certainly do see this happen sometimes. It’s not the easiest process, because I think there is some resistance; if you’re getting all the good stuff, there’s gonna be some resistance to going down to a half. But we do see these processes happening, where there are educational processes that make it clearer to men, or clearer to White people, that you’re getting too much, and you are also a person who holds to fairness norms. Most of us in our society think we ought to be fair, that that’s the right thing. And so if you can really convince people, “You are getting too much, you’re behaving in a discriminatory way,” sometimes it can change people’s behaviours.

1:12:13 SC: There’s a very common strategy that says, “Look, just being fair to everyone helps everyone, right? Discriminating against women or minorities or whatever is really hurting society as a whole.” But at least in principle that’s not necessarily true, right? It might just be that there are finite resources and they’re being distributed inequitably, and we’re making the big ask of asking people, or groups of people, to give up some of what they have in the name of fairness.

1:12:43 CO: I think that’s very insightful. And so I understand arguments like “feminism is for everyone”, and I think to some degree there is truth to that, which has to do with things like the way people are constrained into certain roles based on their gender, or even perhaps their race. Like if you’re a man, traditionally you’re not supposed to love caring for an infant or spend a lot of time with your infant, right? And so that’s a kind of harm to you, that you’re possibly being taken away from a role you might really like to play. And so in that sense, a movement like the feminist movement is for everyone. But I think it’s really too simple to say fairness benefits all of us, and it ignores a lot of the reasons why movements towards fairness, like the civil rights movement or the Black Lives Matter movement or feminist movements, meet so much resistance. Because people aren’t dumb, you know?

[laughter]

1:13:43 SC: I’ve been winning this game for years.

1:13:43 CO: They know… They know what that leads to for them.

[chuckle]

1:13:47 SC: Well, I mean… Okay, maybe you’ve already answered this one, but just to wrap things up: Does this kind of analysis suggest strategies for making the world a more equitable place? I don’t wanna end on saying, “Well, it’s just a law of nature, we’re stuck with it.” We can appeal to people’s senses of fairness, but can we in some sense change the game that we’re playing, so that it just becomes more rationally self-interested for everyone to be more fair to each other?

1:14:13 CO: Good. Even though there’s some kind of depressingness to the lessons from this book, I certainly also would like to emphasize that we see cultures do better, become more fair, and we see, across cultures, some cultures that are much more fair than other ones. So it’s not like it’s just hopeless, right? There are definitely ways that groups of people can be better with respect to equity. So thinking about the modeling in this book and some of the things it tells us about doing that, one thing that I think is important to realise is that norms can be eroded before you ever see change happening. Norms and conventions tend to be very stable, because if everyone in society is adhering to them, there are these kinds of forces having to do with payoff that mean everyone else is pushed towards adhering to them. So if everyone in one group is demanding two thirds, there’s always this force pushing everyone in the other group to demand one third, right?

1:15:14 CO: But you can have situations where people stop liking a norm before they stop adhering to it. Everyone’s still doing it, but you can have people becoming dissatisfied with it, learning more about it, coming to believe that it’s unjust. A related lesson is that you don’t see change happening until people actually change their behaviours. You can talk a lot, but at the end of the day, you’ve got to do something different for a norm to change, and this can mean people who are disadvantaged making higher demands and just messing things up for everyone for a while ’til everyone else adjusts, or it can mean people in an advantaged situation conceding more, changing what they’re demanding, saying, “I’m gonna give more,” until we end up at a new norm. You usually see the first one more than the second one. So I think that’s an important thing to think about: we can have people changing their minds about things, but the behaviour can still look stable for a long time. And then if we want real change to happen, someone has to actually make that change, change their strategy, do something different.
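The self-reinforcing pull described above, where one group demanding two thirds pushes the other towards one third, can be sketched with simple best-response dynamics. Everything here is an illustrative assumption rather than O’Connor’s actual modeling: three possible demands, a deterministic update rule, and starting mixes in which one group already leans aggressive.

```python
DEMANDS = [1/3, 1/2, 2/3]

def payoff(mine, theirs):
    # A demand pays off only if the two demands fit in one pie.
    return mine if mine + theirs <= 1 + 1e-9 else 0.0

def best_response(other_dist):
    # Index of the demand with the highest expected payoff against
    # the other group's current mix of demands.
    scores = [sum(q * payoff(d, o) for o, q in zip(DEMANDS, other_dist))
              for d in DEMANDS]
    return scores.index(max(scores))

def step(dist, other_dist, rate=0.1):
    # Each round, a fraction of the group switches to the best response.
    new = [(1 - rate) * p for p in dist]
    new[best_response(other_dist)] += rate
    return new

# Group A leans towards demanding 2/3; group B towards conceding 1/3.
a, b = [0.2, 0.1, 0.7], [0.7, 0.1, 0.2]
for _ in range(200):
    a, b = step(a, b), step(b, a)  # simultaneous update of both groups
print([round(p, 2) for p in a])  # -> [0.0, 0.0, 1.0]
print([round(p, 2) for p in b])  # -> [1.0, 0.0, 0.0]
```

Once the groups lock in, each group’s behaviour is the best response to the other’s, which is why the inequitable convention stays stable until someone actually plays a different strategy.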

1:16:28 SC: Okay, Karl Marx. I thought that philosophers were just trying to describe the world, and here you are suggesting that we actually try to change it as well. I mean, that’s a pretty radical move.

[chuckle]

1:16:36 CO: Yeah, right.

1:16:38 SC: But I am hopeful. I’ll tell you how much of an optimist I am. I like to think that these kinds of analyses, philosophical or biological or whatever, actually do help change people’s minds. It’s very hard to tell; the effects of new ways of thinking are so diffuse and so delayed in their impacts that you might never know, but I actually do think it matters.

1:17:02 CO: I hope you’re right.

[laughter]

1:17:07 SC: We’ll do our part to spread the word. So Cailin O’Connor, thanks so much for being on the Mindscape Podcast.

1:17:12 CO: Alright. Thanks for having me, Sean.

[music][/accordion-item][/accordion]
