176 | Joshua Greene on Morality, Psychology, and Trolley Problems

We all know you can’t derive “ought” from “is.” But it’s equally clear that “is” — how the world actually works — is going to matter for “ought” — our moral choices in the world. And an important part of “is” is who we are as human beings. As products of a messy evolutionary history, we all have moral intuitions. What parts of the brain light up when we’re being consequentialist, or when we’re following rules? What is the relationship, if any, between those intuitions and a good moral philosophy? Joshua Greene is both a philosopher and a psychologist who studies what our intuitions are, and uses that knowledge to help illuminate what morality should be. He gives one of the best defenses of utilitarianism I’ve heard.

Bonus! Joshua is a co-founder of Giving Multiplier, an effective-altruism program that lets you donate to your personal favorite causes while also directing matching donations to charities that have been judged to be especially effective. He was kind enough to set up a special URL for Mindscape listeners, where donations will be matched at a higher rate of up to 100%. Check out https://givingmultiplier.org/mindscape.

Support Mindscape on Patreon.

Joshua Greene received his Ph.D. in philosophy from Princeton University. He is currently Professor of Psychology and a member of the Center for Brain Science faculty at Harvard University. He is an originator of the dual-process model of moral reasoning. Among his awards are the Stanton Prize from the Society for Philosophy and Psychology and Harvard’s Roslyn Abramson Award for teaching. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.


0:00:01.4 Sean Carroll: Hello everyone, and welcome to The Mindscape Podcast. I’m your host, Sean Carroll. So the trolley problem. I swear, just a few years ago the trolley problem was not of widespread familiarity with people. When I wrote The Big Picture, I explained the trolley problem and I thought that some experts knew about it, but it wasn’t a widespread cultural phenomenon, which it appears to be today, maybe because of The Good Place, that TV show, but I’m not sure. Anyway, you know, probably by now, the basic setup: you have a choice between letting a runaway trolley kill five people on the tracks or doing something that saves the five people, but one other person dies because of what you did. And there are different versions of the trolley problem where what you do is just flip a switch and then one person dies on the tracks, or you actually have to shove somebody in front of the train, and when you do psychological quizzes and you ask people about this, they will give different answers on what the right thing to do is depending on the specific action that is called for, even if the outcomes are the same, which is very interesting.

0:01:07.6 SC: So the point of the trolley problem, which was first proposed by philosopher Philippa Foot and then dubbed the trolley problem by Judith Jarvis Thomson, is to heighten our clarity about the competition between two different ways of moral reasoning. There is, roughly speaking, a deontological way. Deontological morality is based on rules, certain things are right and certain things are wrong, so killing someone is wrong, that’s a deontological maxim right there, versus a sort of consequentialist version of morality where it’s not about the rules, it’s about the outcomes of your actions, whatever they may be. And again, very roughly speaking, deontology would warn you against taking one of those actions that switches the trolley from doing one thing to another, whereas consequentialism would say, save the most people, whatever that takes.

0:02:00.1 SC: So what’s interesting is we all have both of these intuitions and they do compete against each other, so one question you can ask is, forget about what the right answer is. The trolley problem is never about saying what the right answer is, it’s about how we reason morally, and therefore you can ask “What is literally going on in our brains?” and that is a question that has been tackled by today’s guest, Joshua Greene, and his collaborators in his lab. Josh is a psychologist, neuroscientist and philosopher, and he is interested in this sort of deontology versus consequentialism debate, but also interested in what is happening in our brain when we are thinking deontologically versus when we’re thinking consequentially, and it’s different parts of the brain that sort of light up, and when you get into this, it’s just intrinsically interesting, but then it has enormous consequences for further questions, not just about individual personal morality, but about tribal cooperation or competition, what Josh calls global meta-morality, and so forth.

0:03:03.3 SC: So this is a fascinating conversation, and there’s a bonus that is coming here. Josh thought about what he was doing and realized there were implications for charitable giving. He’s one of these people, there are many, including myself, who think that there are more and less effective charities to give to, but then there are also charities that tug on our heartstrings directly. So he helped found a program called Giving Multiplier. Giving Multiplier will let you have it both ways: they will let you both give to the charity that really is personal and important to you, while also spreading some wealth to those effective-altruist kinds of charities that just have a huge impact as far as the consequences are concerned. And here’s the bonus, Mindscape listeners get a special benefit here. Joshua and his friends have set up a special code; if you go to givingmultiplier.org/mindscape, they will match your donations at a higher rate than usual. So there’s a good thing to do no matter what, but as a Mindscape listener, you get to do just a little bit more good for the world than usual, so that’s a nice thing.

0:04:15.0 SC: I like to think that listening to Mindscape helps do a little bit of good for the world, ’cause it helps us think about how the world works and make it a better place, but here you can put a cash value on those ideas by giving to the right place, that’s an exciting little thing. So with that, let’s go.

[music]

0:04:48.3 SC: Joshua Greene, welcome to the Mindscape Podcast.

0:04:50.7 Joshua Greene: Great to be here, thanks.

0:04:52.4 SC: I thought that we would start the conversation by sort of setting our philosophical convictions out on the table here; we’ll get more empirical later on. But a lot of your empirical work seems to be driven by interest in philosophical questions, so when it comes to morality and, for that matter, meta-ethics, why we choose our ethical systems, what are your convictions? Where do you come down on utilitarianism, deontology, virtue ethics, all of those big questions?

0:05:25.0 JG: Right. So I’ll start with… And in honor of you, I’ll start in a big-picture kind of way. I think of morality first and foremost as a natural phenomenon, so I don’t think of morality as something that descends from on high, from outside the observable universe in a theological way or in some other way. Instead I think of morality as something that comes up from below. A useful contrast here is with mathematics, where it is plausible to think that basic mathematical truths or complex mathematical truths in some way are completely independent of human minds, that two plus two would equal four regardless of whether or not any humans ever existed; certain basic facts about the nature of entities and their relations to each other just are that way, and humans may accurately or inaccurately perceive them.

0:06:27.1 JG: When it comes to morality, I think of it as a natural phenomenon, as I said, which arises out of evolutionary processes, and the fundamental evolutionary development is the development of cooperation. Cooperation is really the thing that makes life on earth, and if it exists anywhere else in the universe, interesting. You can have a bunch of organic molecules like RNA and they could just sit there in the primordial soup forever and never do anything, but what makes things get interesting is when molecules start coming together to form larger molecules, and those form even larger ones that can make copies of themselves, which form cells and so on and so forth, all the way up to multi-cellular creatures and animals and social animals and the United Nations. And morality, as I see it, is a suite of psychological devices that enables creatures like us to reap the benefits of cooperation, and to a large extent it’s governed by our emotional responses. We have feelings that make us care about other people, empathy or compassion, and so we can engage in a kind of teamwork that is helpful for survival. We have emotions that make us react negatively to people who are not so cooperative, we can have anger and disgust, we can be afraid of how other people might react to us, negative feelings about our own behavior, and we can have positive feelings about other people’s behavior.

0:08:12.8 JG: So we have positive and negative feelings that act like carrots and sticks for ourselves and others, and that’s the basic implementation of morality. It enables teamwork, it enables us to survive in ways that we couldn’t if we were all just out for ourselves like sharks.

0:08:31.4 SC: All this sounds very much like what a scientist, a psychologist, would say, and part of my job here, my role as the podcast host, is to be the gadfly and to push back a little bit. So all of that is telling me that humans behave in a certain way and why they behave that way from an evolutionary biology perspective, but it doesn’t quite yet tell me that they should behave that way. Is that a reasonable distinction?

0:09:00.8 JG: Yes, great. Yes, absolutely. And I like to start with the is because it sets up the ought. Now, what you’re gonna end up saying about the ought, I think, depends a lot on your broader picture of the universe, and so we’re on the same page there, I think. So when it comes to… Let’s skip meta-ethics for a second and…

0:09:21.9 SC: Good, we’ll come back to it.

0:09:22.4 JG: Let’s talk about ethics. So the fundamental moral problem for humans in general is the basic problem of selfishness versus caring about others, and that’s basic morality; those emotional responses that I have described, and the structures built around them, solve the problem of me versus us. The modern moral problem is a problem of us versus them. It’s a problem not of otherwise selfish individuals trying to get along in a group, but of groups with their own interests and their own values trying to get along with other groups. And so if morality is the solution to the basic moral problem of me versus you, or me versus us, we need something to solve the problem at a higher level, and I think of that as a meta-morality. Just as morality enables individuals to live together as a group and cooperate, a meta-morality is something that enables groups to live together productively in a larger, more complex and more cooperative world.

0:10:26.2 JG: So the way I think of modern ethics is: what should our meta-morality be? And my view is that the 19th century utilitarians got this right. I really don’t like the term utilitarian, ’cause I think it makes you think of parking garages, and when you try to explain it in terms of maximizing happiness, it makes you think of smiley faces and things like that. I prefer to describe myself as a deep pragmatist, and I think that that flows naturally from this conception of, “Okay, let’s start with the practical problem of different groups with different values and different interests; how do we resolve that problem?” Now, one way to resolve that problem would be to appeal to some universal objective moral truth. And my view is that there probably isn’t such a thing, but even if there is, we have no reliable access to it, right. So instead, we have to be pragmatic about this, so for practical purposes, I am not a moral realist.

0:11:27.1 SC: Good. Yeah.

0:11:28.1 JG: That is, maybe there are some abstract moral truths that are akin to the truths of mathematics, or maybe there’s a theological version of it, and we just haven’t figured out which of the human versions of theological morality is right, or if any of them are, but for practical purposes, we’re left with what we can observe. And so then the question is, well, to resolve our differences we need to make trade-offs, right. Well, making trade-offs requires having some kind of common currency. If we’re going to look at different groups and their values, and one says, “We want abortion to be legal,” and the other one says, “We think it should be illegal,” right, to make an all-things-considered decision about how that ought to play out means putting all of those considerations onto a common metric. Now, you might say that’s unfair, and that’s unreasonable, that’s not how my people think about this, but that is what decision-making requires. You either have to go one way [chuckle] or the other way, and that requires some kind of common currency. And I think that the best common currency that we have is the common currency of experience. And so the fundamental insight of consequentialism, and utilitarianism more specifically, is…

0:12:42.5 JG: Well, it’s really putting two things together. One is that when you think about the things that you care about, or that other people care about, and keep asking, “Why do you care about that?” until you run out of answers, it ultimately, almost always, comes down to the quality of somebody’s experience. So I say, “Sean, you’re here at work today. Why did you go to work?” And you would say, “Well, I enjoy what I do, but I also need to pay the rent.” And I’m like, “Well, why do you need to pay the rent? Why don’t you just wander around?” You’re like, “Well, I’m here in Boston, it gets cold outside.” And I say, “Well, what’s wrong with being cold?” “Well, it’s painful.” And you say, “Well, what’s wrong with pain?” And then at that point you say, “It’s just bad,” right, you run out of answers. And so the thought is that when you keep asking those “why do you care?” questions, it ultimately comes down to the quality of somebody’s conscious experience, whether it’s yourself or somebody else, or a human or an animal, and we don’t worry about rocks because rocks, as far as we know, are not conscious and nothing is really good or bad for them. So that’s insight number one, that the quality of experience provides this common currency of value. But then you say, “Well, okay, but who matters?”

0:13:52.5 JG: Okay, so that’s what matters to you, but who really matters? And utilitarianism’s insight, which sounds benign now but really was quite a dramatic advance in the 18th and 19th centuries, is that everyone matters equally, and not even just humans mattering equally, but even animals. So Jeremy Bentham, the original utilitarian, famously said we’re asking the wrong question about animals: it’s not a question of whether they’re as smart as us, the question is whether they can suffer, and their suffering is just as much suffering as our suffering. And as a result of this, people like Bentham and Mill were way ahead of the curve on things like animal rights, and they were early opponents of slavery. Jeremy Bentham even penned one of the first defenses of what we would now call gay rights, and he said, “I know that everywhere around here, people think this is the most terrible thing in the world, but when I try to break it down in terms of the quality of human life, I just don’t see why this is so bad, so maybe it’s not.” And with that thought, in my view, he leapt ahead two centuries in moral thinking. So you take this idea that the quality of experience is what matters and is our best common currency, and the idea that everybody’s experience counts the same, not that everybody has the same experiences, not that everybody has the same culture, not that everybody values the same things, but nobody is inherently more important because they’re them. Right.

0:15:27.8 JG: Then you get this idea that, okay, our meta-morality, the most sensible meta-morality, is to try to make the world have as little suffering as possible and as much happiness or well-being as possible, taking into account the suffering and well-being of all sentient beings equally, and that is the core of it. And when you try to live that, I think what you end up being is what I call a deep pragmatist. That is, you are mostly focused on evidence, on trying to figure out what’s the best way to do this, working with human nature as it is, not as it ought to be, and aiming for this higher ideal, but very much engaged with the practical details of the world. So that’s my ethics.

0:16:16.8 SC: Yeah. I wish you luck in the rebranding program, trying to convert utilitarianism to deep pragmatism. And I very much am on board with both the focus on suffering and conscious experience, and the fact that other people and other kinds of experiences matter. I do wanna push just a little bit, before we get into the empirical stuff, on utilitarianism as even the version of consequentialism that you might need, or even on whether you need consequentialism. There are some standard worries about utilitarianism, and I wonder how you address them. The basic flavor of the standard worry is, it sounds like you could destroy one person’s life if you make 100 people’s lives better, and I can imagine a completely cosmopolitan global meta-ethics that nevertheless said, “Every individual has some autonomy and some rights, and I cannot sacrifice the good of that one person, even if it would make 100 other people better off.” Does utilitarianism emerge as a sort of unique conclusion in your mind from hoping that we can find a fair global meta-ethics?

0:17:28.7 JG: Yeah, so I think that it’s important to look at this on multiple levels, and I think this actually goes all the way back at least to John Stuart Mill; it has been a recurring theme in other utilitarian thinkers, most notably R. M. Hare. That is, we don’t want people going around thinking that it’s their prerogative to sacrifice some people for the greater good of others. Why? ’Cause that would be bad, because the world would end up worse off if that was our first-level basic disposition. And this is part of what I think is the pragmatic element here: we have to understand the biases and limitations in human nature, that people are very naturally going to say, “Oh yeah, the greater good. That’s what works well for me. What’s good for America is good for Ford Motor Company,” if you’re the CEO of Ford. We’re likely to be biased, and we’re also likely to overestimate our ability to predict things and our ability to control things.

0:18:40.8 JG: Well, it is a very bad general idea to have fallible, biased, limited humans going around trying to do the sort of engineering where they’re sacrificing some people for the benefit of others; that’s just not, in general, an okay mode of operation. But now you might say, “Well, aren’t you being a little squirrelly here? You just said you have to promote the greater good, and then you just said we shouldn’t go around thinking that we’re gonna promote the greater good.” But you have to ask yourself, do you think the world would actually be better if people thought like that? And surely you’re thinking the answer is no, right?

0:19:24.7 SC: Right.

0:19:25.4 JG: Well, if you’re entitled to think in a sophisticated way about the limits of human nature, why shouldn’t a utilitarian, right? So we wanna get off the table the idea that, in general, this is a good way for people to think. Now, with that said, in our more reflective and sober moments, do we sometimes have to make trade-offs? Suppose that there really is an enemy threatening our nation. This is World War II, and Hitler’s regime is working towards the nuclear bomb. Is it then acceptable to conscript some people to fight in a war to keep Hitler from taking over the world? That’s sacrificing the interests of some for the greater good, right? And there are many examples of things like this, although they don’t usually involve such a direct sacrifice. So in situations where you really have to choose, where it really is clear that the interests of some must be sacrificed for the interests of a larger number, then it would be acceptable to do that. But it’s very dangerous to have people going around thinking that way in general.

0:20:49.1 SC: Yeah.

0:20:49.2 JG: And in practice, thinking in a more utilitarian way is much more likely to be about giving of yourself for the benefit of other people than it is about sacrificing some people for the benefit of others.

0:21:03.8 SC: [chuckle] Okay, no, actually that’s a very good point, and I think that I appreciate something that I didn’t appreciate before, even having talked to you before about this stuff, so thank you. The pragmatist aspect here softens, or at least ameliorates, some of the more worrisome thought experiments about utilitarianism gone wrong. And you’re right that it would be contradictory to say that utilitarianism leads to a result that is actually making people worse off, because that’s the whole point: if you do it right, it’s not supposed to do that. But let’s contrast it here. I think in my book The Big Picture, for which I interviewed you a while ago, I do the conventional contrast between consequentialist ethics and deontological ethics, which are based on rules rather than outcomes; subsequently, several people have convinced me that I should also put virtue ethics as a third alternative on the table there. How do you think about those two other options?

0:22:01.8 JG: Well, let me start with virtue ethics, ’cause I think it’s clearer, or my answer is clearer. I don’t think virtue ethics is really an answer. That is…

0:22:11.5 SC: Maybe you should say what it is to the people who don’t even know.

0:22:14.1 JG: Okay, so virtue ethics, in the western tradition, is most closely associated with Aristotle, but really I think that virtue ethics is kind of the default ethics of human cultures throughout history. The Aristotelian idea is, he’s not giving you a general theory that says, “Here’s what’s right and here’s what’s wrong and why”; instead he is describing, and in some ways endorsing, certain kinds of moral practices and encouraging people to engage in them. Right? So he’d say, “Look, it’s all about finding the right balance. You don’t wanna be a coward in your life, unwilling to stand up to the things that threaten you, but you don’t wanna be rash, rushing into danger without protecting yourself. You have to find the golden mean in the middle.” And with all of these things, with all matters of human life, it’s a matter of balancing these competing considerations and having the virtues which achieve that balance.

0:23:20.3 JG: How do you achieve those virtues? Well, you kinda look at the people who seem to be doing a good job and you try to emulate them and you try to practice. You don’t just get this from theory, you get this from building habits and making mistakes and learning from those mistakes and so on. And I think all of that is right in a sense, but it doesn’t tell you whether or not abortion should be illegal. [chuckle]

0:23:42.6 SC: Right.

0:23:45.4 JG: It doesn’t tell you when it’s justified or not to invade a foreign nation. It doesn’t tell you how much we should be doing to protect future generations from climate change. It’s a nice approach for life in a tribe where everybody kind of agrees on what’s right or wrong, and it’s a matter of trying to live up to those standards. But in a multi-tribal world, it doesn’t really tell you very much, because whose virtues are they? Is it Vladimir Putin’s virtues as a powerful strongman leader? Is it Barack Obama’s virtues as a thoughtful progressive universalist? You just end up having a debate about which virtues you would want. So it’s very natural and comfortable, there’s not much to object to there, and there aren’t devastating counter-examples to it. But that comes at the cost of not really giving you a way to answer the question.

0:24:40.3 SC: Right.

0:24:41.9 JG: So that’s my general take on virtue ethics. When it comes to deontology, this is what has really motivated a lot of my scientific research, and we could go on for hours and hours here, but I’ll try to give the very short version. The idea of deontology is that there are certain actions, or types of actions, that are inherently right or inherently wrong: that which must be done, duties, and that which must not be done because it’s wrong or violates people’s rights, to put it in more modern terms. And my view of deontology, as a philosophy, is that it is actually a rationalization of our moral emotions. And this comes out most clearly in the moral dilemmas that I’ve spent a lot of time discussing, which fall under the heading of the trolley problem. These days the trolley problem has become a meme, where it’s more of a platform for saying, “Well, what would you say if you’re a high school teacher, if it meant killing five dogs or something…” And to me, that’s not the original interest of the trolley problem, just as a platform for trade-offs.

0:26:00.5 JG: The interesting thing is contrasting different cases. So if you ask people, “Is it okay to hit a switch to turn the trolley away from five people and onto one?” most people say that that’s okay. If you ask people, “Is it okay to push somebody off of a footbridge so that they land on the tracks and get killed, but it will stop the trolley from killing five people?” most people say no. And what we’ve learned from a lot of research over the last 20 or so years is that when people say no there, it’s largely being driven by an emotional response, and it’s an emotional response that depends a lot on the physics and mechanics of that action. When you push somebody off the footbridge, it’s active rather than passive, and that’s an important part; you’re harming the person as a means, it’s part of your goal, as opposed to as a side effect, where you just turn the trolley and it happens that this other person is killed as collateral damage. That’s important. And it’s direct.

0:27:00.6 JG: If you push somebody, that actually feels worse than if you hit a switch that would drop them through a trap door onto the tracks. And these different features combine and make us have a negative emotional response, and there’s now good evidence that this comes from the basic mechanisms of how we learn, in particular reinforcement learning of the kind that is used in deep reinforcement learning in neural networks, and of the kind that we see operating in animals such as rats and monkeys and humans. And basically, what’s going on there is we learn, probably in toddlerhood, that it’s not okay to do things like pushing people, [chuckle] and so we have negative reactions attached to those things. So that’s empirically what’s going on in the footbridge case.

0:27:44.2 JG: Now, philosophically, what’s going on there? Well, a lot of philosophers look at that and say, “A-ha, this explains exactly what’s wrong with utilitarianism. Sure, you could save more lives, but you’re violating somebody’s rights when you push them off the footbridge. You’re using them as a means, and so on and so forth.” Right? Well, why aren’t you violating somebody’s rights when you turn the trolley away from the five and onto the one? I think that what’s going on with deontology, and particularly in Kant, is that it is an attempt to put a rational gloss on our feelings. And it’s not that those feelings are bad, but those feelings are not infallible. Those feelings are adapted to certain kinds of environments and questions, but don’t necessarily generalize well to other environments and questions. And this is most clear when we look at the kinds of effects on our feelings that are very hard to endorse, like the idea that it matters morally whether you end up sacrificing the one by pushing that person rather than by hitting a switch, right? No one thinks that’s morally important, but our feelings think, so to speak, that it’s morally important.

0:29:04.2 JG: And this isn’t just about trolley cases. So someone recently asked me about what’s going on at Facebook, and their failure to rein in misinformation and the planning of violent activities leading up to January 6th. And that’s very much a real-world thing. Why did Facebook not act on that? Well, I would suggest, and I should say a lot of this really important research was done by my colleague Fiery Cushman, so credit him with that. If Mark Zuckerberg had to storm the Capitol himself, or stand there in front of a group of right-wing militants and hand them guns, or ties to tie up Nancy Pelosi and Mike Pence, he wouldn’t do that, I don’t think. Whatever limitations he may have as a moral person, I don’t think he would do that, right? That is active, pretty direct, and intentional. But instead, the problem that Facebook faced was preventing harm where their role is passive and…

0:30:14.4 JG: And it’s not very direct, and they’re not trying to cause these problems; it’s a side effect of running their business as profitably as possible. So in the footbridge case, I think we’re over-reacting. That is, we have more of an emotional response because we’ve taken this paradigmatically bad action, pushing somebody, something you learned not to do as a toddler, and artificially attached it to the greater good, and now it becomes an objection to utilitarianism.

0:30:50.5 JG: In the case of Facebook, it’s the opposite. We have this enormously consequential action, or set of behaviors or choices, that’s eroding American democracy, but it doesn’t feel like pushing somebody off of a footbridge, because it doesn’t have those basic physical qualities. So you asked me about deontology. I think that as a philosophy, what it’s essentially doing is just trying to justify whatever we feel. And that’s not bad for everyday life if it stops people from lying and stealing and cheating, but if we’ve got difficult moral problems to solve, just rationalizing our intuitions isn’t gonna help, because people have different intuitions about what’s right or what’s wrong, and different groups have different interests. Now, look, if the deontologists ever manage to hit the bull’s eye, if they could deduce from self-evident first principles what’s right and what’s wrong, then I would pack up and I would say, “Okay, you’re right. You’ve proved it.”

0:31:53.1 JG: But that hasn’t happened. And I don’t think that’s really what it’s about. I think it’s about trying to tell a rational story about feelings, and that we’re better off taking our feelings for what they’re worth, but being able to transcend them when necessary. And I think that that’s what deep pragmatism does.

0:32:09.1 SC: Well, this leads beautifully into the more empirical side of things, ’cause you’ve already alluded to it. I think probably most people in their informal morality have deontological aspects and consequentialist aspects, and probably some virtue-ethics aspects in there, but you and your collaborators have been able to say a little bit more about literally what is going on inside the brain when we have these competing impulses. So say more about that.

0:32:39.1 JG: Yeah, so take our responses to cases like the footbridge case. In one experiment, this was work done with Amitai Shenhav, who’s now a professor at Brown, we gave people cases like that and we said, “Don’t tell us what’s right or wrong here, just tell us how bad you feel about this action,” and the ratings of how bad they feel about it correlate with activity in a part of the brain called the amygdala. And the amygdala is part of a general evaluation system that is especially active when it comes to making a sort of rapid assessment of something. So if you’re walking along and you see a stick but it sort of looks like a snake, and you kind of have that feeling where you shudder before you even realize what you’re looking at, that is your amygdala responding to the pattern that’s been detected and saying, “Hey, pay attention to this. This could be bad.”

0:33:40.2 JG: And your amygdala plays a role in reinforcement learning. If you are learning to associate an electric shock with a blue square, when the blue square comes on, your amygdala is gonna respond before the shock comes. So this is part of our basic learning mechanisms. And we have good reason to think, and this is again Fiery Cushman’s work, that it’s that basic learning mechanism that’s responsible for that emotional response. Some really lovely studies done by Fiery and Indrajeet Patil had people do two different kinds of tasks. One is a task where people are just playing a game where they make choices that take them, essentially, to different rooms, and one room can lead to another room, and you can get to a reward. And the levers that you have that take you from room to room have different probabilities of taking you to different places.

0:34:39.3 JG: So one way you can end up getting a reward is by getting lucky, that is, if you hit a couple of switches that, with low probability, get you to what you want. And if you understand the map, you might say, “Oh, there’s a reward there,” but the more reliable way to get there is by a different route. Right?

0:34:56.9 SC: Right.

0:34:57.6 JG: So in one case, you’re using a map, you’re using a model. This is called model-based reasoning. In another case, you’re just relying on your habits. You’re saying, “Well, when I hit this lever and then hit this other lever, I got the cheese, I got something good. So I’m gonna keep doing that.” And you can look at the extent to which people in this game, which has nothing to do with morality, rely on their trained habits versus rely on their knowledge of the map. And it turns out that if you’re more map-like when you play this multi-stage navigation game, you are more likely to have a utilitarian pattern in your moral judgment. So it really is getting at this basic mechanism, and you can see these mechanisms operating in humans navigating mazes and in rats navigating mazes, and it’s the same neurobiology; it’s a really beautifully tight connection there. Now, where does the feeling come from, and where do we see it in the brain? If I give you the footbridge case and I say, “Okay, don’t tell me what’s right or wrong, just tell me which action will produce better results,” then you’ll see more activity in a part of the brain called the dorsolateral prefrontal cortex, which is responsible for conscious explicit reasoning and also, in different ways, cognitive control: implementing a rule or a policy and overriding competing influences.

0:36:20.9 JG: If I say, “Okay, now I want you to make an all-things-considered judgment, taking into account everything about the footbridge case,” you’ll see more activity in a part of the brain called the ventromedial prefrontal cortex, which is the part that was famously damaged in Phineas Gage, if that’s a familiar reference point. This was the 19th century railroad foreman who got a spike through his forehead and, as a result, had his moral character dramatically changed, even though his ability to use language and reason was preserved. And what the VMPFC seems to do is serve as a hub for signals that bear on decision-making; that’s where all of those signals come together. And so you’ve got the amygdala saying, “Ah, don’t do that,” and you’ve got your dorsolateral prefrontal cortex saying, “Well, but isn’t saving five lives better than one?” and they kind of duke it out there, right? And that’s the basic picture of what’s going on. There have been nice studies showing how you can move these judgments around pharmacologically, and other studies showing how different kinds of brain damage lead to different patterns of judgment.

0:37:26.2 JG: So if you have damage to the ventromedial prefrontal cortex, you’re far more likely to say that it’s okay to push the person off of the footbridge because the amygdala signal can’t get out into decision-making, but if you have other kinds of damage, then you can have other effects. So we can relate these things pretty clearly to different circuits in the brain that are not specific to morality and that play similar roles in other domains for humans and in those same other domains for other mammals, at least.

0:37:58.7 SC: Yeah, there’s an obvious analogy, connection, or even equivalence here to Daniel Kahneman’s famous thinking-fast-and-slow dichotomy, where you have System One underneath the surface. Is that right, System One is the unconscious, underneath-the-surface stuff, did I get it right?

0:38:13.2 JG: Yes, yes. Yeah, yeah.

0:38:14.6 SC: And System Two is the more cognitive rational deliberative part. So there’s morality fast and slow, right?

0:38:21.2 JG: That is, yes, and that is… In fact, that is the title of one of the subdivisions of my book, “Morality, fast and slow.”

0:38:26.1 SC: I didn’t make it up. Yeah.

0:38:29.5 JG: So, sorry, I should have mentioned that as a reference point, but absolutely. And what Fiery has argued is that this distinction between model-free learning and decision-making and model-based learning and decision-making, which has been really important in computer science and artificial intelligence, and has also been important in dissecting neural circuits in humans and other animals, is really the essence of System One and System Two, or as I’ve called it, dual-process psychology. It’s about two different ways of attaching values to actions: one by attaching values directly to actions, and the other by attaching values to actions via an understanding of their causal connections with consequences that you care about. A critical point here is that this applies not only to actions out in the world, but to mental actions; when you engage in reasoning or thought, the patterns that you follow are patterns that have been reinforced. You have habits of thought in the same way that you have habits of action, and that explains things like why we get certain tricky math problems wrong. Here I’m thinking of the cognitive reflection test, if that’s familiar: we have certain patterns of thinking reinforced that get us to miss the boat.
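To make that distinction concrete, here is a minimal sketch in Python, not from the episode or from the studies mentioned, of the two ways of attaching values to actions on a toy version of the rooms-and-levers task described earlier. The rooms, transition probabilities, reward, and learning rate are all illustrative assumptions.

```python
import random

# A tiny "map" of the task: each (room, lever) pair leads probabilistically
# to a next room, and the reward (the "cheese") lives in the rooms themselves.
TRANSITIONS = {
    ("start", "left"):  [("room_a", 0.7), ("room_b", 0.3)],
    ("start", "right"): [("room_a", 0.3), ("room_b", 0.7)],
}
REWARD = {"room_a": 0.0, "room_b": 1.0}

def model_based_value(state, lever):
    # Map-like (model-based, System Two) valuation: consult the model and
    # compute the expected reward of pulling this lever.
    return sum(p * REWARD[room] for room, p in TRANSITIONS[(state, lever)])

def sample_next(state, lever):
    rooms = [room for room, _ in TRANSITIONS[(state, lever)]]
    probs = [p for _, p in TRANSITIONS[(state, lever)]]
    return random.choices(rooms, weights=probs)[0]

# Habit-like (model-free, System One) learner: value attaches directly to the
# lever itself, updated by trial and error, without ever consulting the map.
q = {"left": 0.0, "right": 0.0}
alpha = 0.1  # learning rate
for _ in range(1000):
    lever = random.choice(["left", "right"])       # explore
    reward = REWARD[sample_next("start", lever)]   # observe what happened
    q[lever] += alpha * (reward - q[lever])        # reinforce the habit

print("model-based:", {a: model_based_value("start", a) for a in ("left", "right")})
print("model-free: ", q)  # similar numbers, but reached only through repetition
```

The two agents end up valuing the levers similarly, but by different routes: one computes values from its knowledge of the causal structure, the other attaches them directly to the actions through repetition, which is the difference that predicted utilitarian judgment patterns in the work described above.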

0:39:52.8 SC: And let me just push back a little bit, not even push back, but ask about how clean this distinction is in the morality-fast-and-slow case, because our buddy David Hume explained to us that reason is the slave of the passions. And even in this more deliberative, cognitive way of being moral, which you say sort of affiliates with utilitarian impulses, there still is some judgment about what the greatest good is. Reason can be instrumental, but you need to get a goal from somewhere. So is it more like teamwork, or is it really two alternatives?

0:40:32.7 JG: There are two different algorithms, but both algorithms involve some kind of evaluation, as you said. So you can do things in a more model-based or model-free way, a more System One or System Two way, but there is some evaluation either way, and I think you hit the nail on the head: in model-based, System Two reasoning you have to be attaching some values to the outcomes. And so that is the fundamental distinction. It’s not, is emotion involved or not, but rather, are you reacting emotionally to a particular kind of action in a particular context, or a particular category of action, or are you reacting to the value of the ultimate goal or goals that are to be gained or sacrificed? And I think that this was appreciated early on by the utilitarians. Henry Sidgwick, who is the least popularly well known but the most systematic of the three originals, Bentham, Mill and Sidgwick, really laid this out nicely. He distinguished between three different levels of intuition, what he called perceptual, dogmatic, and philosophical.

0:41:54.9 JG: A perceptual intuition is something like: you see somebody do something and you just have a sense that it’s wrong, or you imagine some particular action and you have a sense that it is wrong. Dogmatic, or what you could in a less loaded way call categorical, is a reaction to a certain category of action: “Oh, that’s a lie, therefore it’s wrong.” Whereas if you categorize it as strategic misdirection or something like that, then you say, “Oh, that’s not so bad.”

0:42:25.3 SC: Okay, branding.

0:42:26.0 JG: And then the philosophical intuition is really about attaching value to consequences. So the philosophical intuition is: pain is bad, suffering is bad, happiness is good. And that is not about a particular case, it’s not about a particular action, it’s a judgment about ultimate value. So I agree with what you said, that it’s not emotion here and reason there; it’s about different relationships between emotion and reason. Are you using your reasoning in the service of an end that you have ultimately, on some affective basis, decided is good or bad, or are you relying on your emotion to pass judgment on a specific action independent of its consequences? And that can be useful and efficient in certain ways. There are certain things we don’t want you thinking about, like whether it’s on balance better or worse if you shoplift: “Well, does CVS really need the money more than me?” In some ways you want to have feelings that just say, “No, you can’t do that. That’s the kind of thing we don’t do.” And the utilitarian agrees with that, because the world is better off if we have those feelings, but when it comes to our ultimate values, I think it makes more sense to think about consequences.

0:43:46.1 SC: Okay. But yeah, the CVS example is a really good one. It clarifies something that you said earlier about the pragmatic aspect of deep pragmatism here, because it’s almost like you’re saying that if you’re a super-conscientious utilitarian, you become deontological in some ways, because the way to get the greatest good for everybody is if everyone acts well and obeys some sort of social rules. Is there some future reconciliation here?

0:44:16.2 JG: Right. So the reconciliation is this. When it’s life within the tribe, when it’s basic everyday interactions, when what’s mostly at stake is my benefit or yours, do I get the money or do you? Do I get the money or does CVS? If it’s me versus us, that’s when you mostly think fast; you wanna trust your gut reactions when it’s about serving yourself versus following the rules or serving others. But the meta-moral problem is when we don’t wanna trust our gut reactions, because my tribe has its feelings about what’s right and what’s wrong, but so does your tribe. Right, and if we all just rely on our feelings about what’s right or what’s wrong, then it’s nuclear war, we will all perish. So our feelings are pretty good for basic morality, but they’re really bad for complicated, high-level, inter-tribal morality, and that’s when we need to think slow. When you ask me to nutshell my book Moral Tribes, that’s the nutshell, for practical purposes: in everyday life, when it comes to basic matters of right and wrong, being a moral person, follow your intuitions and don’t overthink it. But when it comes to the moral issues that divide us, we need to think more, and we need to step back from our moral intuitions, because it’s our moral intuitions that have us at each other’s throats in the first place.

0:45:41.8 SC: I think maybe what I’m saying is something a little bit different, because you very plausibly started by drawing a connection between deontology and our feelings, as you put it, our quick moral reactions. But what I’m asking you is, and I’m literally just making this up right now, so I’m sure it’s nonsense, but what I’m asking is: given that we want to be pragmatic utilitarians, if that’s what we want to be, isn’t it plausible, or at least conceivable, that the way that plays out in the real world, by thinking really, really cognitively about how to make things best for everybody, is to make up a set of rules that people should follow, even if those rules don’t map directly onto our feelings? Rather than telling everyone to do a calculus to maximize utility, just say, “Here are the rules for right and wrong.”

0:46:31.5 JG: Yeah, that’s right. And this has actually been a persistent, and persistently ignored, theme from John Stuart Mill on. Mill says, “Look, our virtues have value and we should cultivate them,” and it’s only at these sort of higher levels that we need to be in the explicitly utilitarian mode, and it would be un-utilitarian to do otherwise, because things go worse when we ignore the basic everyday rules. But then there are times when those rules maybe need to be reconsidered. So you can have everyday rules about gender roles; if you’ve seen Fiddler on the Roof: tradition! Who does this, and who does that? And you don’t need a musical to tell you about it. Those are aspects of everyday life that maybe we wanna question and reconsider. And there are other domains as well. What we choose to eat, for example, is something where if we just rely on our intuitions, of course it’s fine to eat a hamburger, even if the meat comes from animals that were tortured on their way to production and that are worsening our climate crisis. We need to be willing to rethink some everyday things as well.

0:47:58.8 SC: Yeah, this is extremely helpful. You’re re-invigorating my youthful enthusiasm for utilitarianism, which I was backsliding from for a little while. And it also leads directly into what I wanted to talk about next, and again, which you have already alluded to, which is this sort of switch of moral thinking when we start thinking about these global problems, these inter-tribal problems. Maybe the place to start is with the tragedy of the commons, ’cause I know that you’ve talked about that a lot and how it plays into our moral ways of thinking.

0:48:32.6 JG: Yeah, so this comes back to something I said earlier. The tragedy of the commons, the story, comes from Garrett Hardin, who in the late ’60s was worried about over-population, and he gave the analogy of herders with too many animals on their fields: all the grass is gone and all the animals end up dying, so we need to restrict the size of our herds or else everybody’s animals are gonna die. And he was imagining that humans were going to over-populate and destroy their resources, but it’s a nice metaphor for the general me-versus-us problem. And as I said, our behavior on the commons is governed by basic feelings. That is, if I care about my fellow herders, then I’m not going to sneak in some extra sheep and take more of the grass, and I’ll be angry at other herders if they do that, and I’ll be grateful to other herders who restrict their herds for the common good, and I’ll feel bad about myself if I even contemplate doing this. So we have these positive and negative feelings that we apply to ourselves and to others that enable us to live sustainably on the commons.

0:49:49.6 JG: Now, we face these global commons problems related to things like climate change, and it’s complicated because it’s not just a bunch of individuals with personal feelings for each other; instead it’s groups with norms and philosophies and hierarchies. And some individuals in the hierarchy, maybe they can benefit by saying, “To hell with the planet,” because if you make your livelihood selling products that produce a lot of carbon, you might just say, “You know, I can move to wherever if things get a little too hot where I am, but I’m sure raking it in now, selling petroleum products.” And then those people can try to influence people and make them think that it’s a terrible violation of your freedom and your dignity and your autonomy if you’re asked to make any kind of sacrifice for the sake of the planet, or even to vote for legislation that would rein some of these things in, not at the level of consumers, but at the level of efficiency standards for cars and investing in green energy and stuff like that.

0:51:00.3 JG: So the basic feelings that we have as sort of herders on the common pasture in everyday life, so to speak, they get… They don’t transfer up to the larger problem, and that’s where we need to understand the problem, and make choices that are gonna produce good consequences instead of good feelings.

0:51:21.2 SC: Right, I think the way that you put it, which was very vivid and helpful, was that we climb the evolutionary ladder and then we kick it away, right?

0:51:27.8 JG: Yeah. Yeah, and that’s…

0:51:29.8 SC: We start from some point that is ingrained in us in a very basic way, and we can sort of take our values from that process without necessarily giving in to the first impulses that that process leaves us with.

0:51:45.2 JG: That’s right. Yeah, well said. I’ll leave it at that.

0:51:47.5 SC: [chuckle] Well, I’m quoting you. But what does it mean when push comes to shove? You had this nice data from experiments where you’ve asked people to play the public goods game, right, and they’re asked to sort of give money away. I was surprised both by the average results and by the differences in results under time pressure and not.

0:52:14.3 JG: Yeah, so in that original study, this was work done with Dave Rand and Martin Nowak, we found that putting people under time pressure made them more likely to do the us-ish thing, to contribute to a public good, rather than doing the selfish thing. It turns out it’s complicated; there’s been a lot of research on this using a lot of different methods, and the long and short of it, from my understanding, and I haven’t followed the whole almost-decade of literature that’s followed this, is that it really depends a lot on your cultural experience. So if your dominant response is to be cooperative, because you’ve lived in a world that rewards cooperativeness with strangers, not just with your friends and family, then that’s going to be your dominant response. But if you’ve lived in a world in which you really can’t trust strangers very much, and where your response…

0:53:14.4 JG: When you’re put in this weird situation of cooperating with a bunch of people you don’t know, in this sort of sterilized anonymous context, then it can even go the opposite way. And I think this was nicely foreshadowed by an amazing study done by Benedikt Herrmann, published back in 2008, where they had people play public goods games in different cities around the world. And in very cosmopolitan places like Denmark and Switzerland and Boston, you saw people cooperating right out of the gate, and then they were given the opportunity to punish people who didn’t cooperate, or to punish other people generally. And when I say punish, I mean like you pay a dollar to take $3 away from somebody else.

0:54:03.1 SC: Not corporal punishment, they’re not whipping each other.

0:54:06.5 JG: Yeah. Cooperation was sustained. In other places, cooperation started out in the middle, and then the cooperators punished the non-cooperators and cooperation went up. But then in other places, where the social world tends to be smaller, so this would be places like Athens, at least at the time, or Oman or Riyadh, people who didn’t cooperate punished the cooperators. This is known as antisocial punishment, and it was a kind of resentment of coercion: “Look, I don’t know you, I don’t know who you are. I don’t like this whole game where I’m supposed to put my money on the line and trust you, and I don’t like the way that you’re trying to make me do this.” Right. And so I think that what that reflects is a kind of smaller world. So we have social heuristics.

0:55:00.1 JG: We have emotional tendencies that we’ve learned, about when it’s safe to trust people and when it’s not, and when we should resist social structures that try to get us to trust more than we should. And then of course the question is, well, if we have people who are more globally trusting and people who aren’t, all trying to live in a world that requires some kind of global cooperation, how do we fix that? And I don’t think that the answer is just to say, “Hey, Riyadh, Athens, be more trusting.” Trust has to be earned. And so what we really need to do is build the kinds of social structures that make trust and global cooperation feel like the natural thing to do, and this is a lot of what I’m working on and thinking about these days.
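For concreteness, here is a minimal sketch in Python of one round of a public goods game with costly punishment of the pay-a-dollar-to-take-three kind described above. The endowment, multiplier, and group size are illustrative assumptions, not the parameters of the actual studies.

```python
# One round of a public goods game with costly punishment (illustrative numbers).
ENDOWMENT = 20
MULTIPLIER = 1.6   # pooled contributions are multiplied, then shared equally
PUNISH_RATIO = 3   # paying 1 unit docks the target 3 units

def play_round(contributions, punishments):
    """contributions[i]: what player i puts into the common pot.
    punishments[i][j]: units player i spends to punish player j."""
    n = len(contributions)
    pot = sum(contributions) * MULTIPLIER
    payoffs = [ENDOWMENT - c + pot / n for c in contributions]
    for i in range(n):
        for j in range(n):
            cost = punishments[i][j]
            payoffs[i] -= cost                  # punishing is costly to the punisher...
            payoffs[j] -= PUNISH_RATIO * cost   # ...and three times as costly to the target
    return payoffs

# Players 0-2 cooperate fully; player 3 free-rides and is punished by player 0.
contributions = [20, 20, 20, 0]
punishments = [[0] * 4 for _ in range(4)]
punishments[0][3] = 2   # pay 2 to dock the free-rider 6
print(play_round(contributions, punishments))
# Antisocial punishment is the reverse pattern: a free-rider punishing cooperators.
```

Even with punishment, free-riding can pay in a single round (player 3 still comes out ahead here); the pattern Greene describes is about how repeated punishment, and who wields it, shapes cooperation over many rounds.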

0:56:02.3 SC: Yeah, this is a huge set of issues that I do wanna get into, but there’s one thing I wanna go back to very quickly, because in the public goods game, where very roughly you’re just giving people the opportunity to share their money a little bit, it’s not a very complicated game. And there is this result that when you ask them to hurry, they’re more generous, and when you ask them to think about it, they’re like, “Oh, I’m gonna keep the money.” Now, that seems to be a little bit in tension with the idea that we’re going to become a more successful global inter-tribal society by engaging our rational thinking faculties rather than just acting in the moment. How do you reconcile those two things?

0:56:44.0 JG: So the public goods game is a me-versus-us problem, it’s a basic problem of the tribe, and that is exactly when I think we need to be, at least more often, relying on our gut reactions. We need to be filled with a sense of trust and generosity towards the people who we interact with in daily life, assuming that they will reply in kind. When it comes to national and global politics and the things that divide us, that’s where we can’t rely on our intuitions, because it’s us versus them. So I think the intuitive success in the prisoner’s dilemma and in the public goods game illustrates the first part of the overall larger equation, so to speak, in Moral Tribes, which is: when it’s me versus us, think fast, cultivate those pro-social intuitions. But when it’s us versus them…

0:57:42.7 SC: Got it.

0:57:44.1 JG: With different versions of pro-sociality competing against each other, that’s when we have to step back and think in a more reasoned kind of way.

0:57:53.0 SC: And maybe you can say a little bit about the origin of this conceptualization of the world in terms of us versus them, this is a strong legacy of our evolutionary history, that there is a group that we’re a member of and we fight for the group, this is like one of our deepest impulses as far as I can tell.

0:58:15.0 JG: Yeah, yeah, and we share this with our primate ancestors. If you look at chimpanzees, they’ll patrol their border, and if a group of chimps on patrol find a chimp from another group, a male chimp from another group that is defenseless, they’ll just kill that chimp. And with humans, it seems to be a very basic response. Katherine Kinzler did some really nice research where she showed that very young infants recognize a native versus an unfamiliar accent, even in someone who’s speaking a language that they understand, and they’re more likely to accept a gift from someone who speaks, say, English in their native English accent versus someone who speaks English with a French accent, and you can go the other way; it’s not just English and French, there are other languages, of course. We come into the world, it seems, ready to divide the world into us and them. But there’s some flexibility there; we don’t just detect who shares our genes or something like that, and only trust those people who are in-group in that way. We rely on cultural markers to tell us, “Okay, you’re a Christian, I can trust you,” or “You speak my language, so I can trust you.” And because we have cultural flexibility, there’s no reason in principle why we all can’t belong to the same tribe.

0:59:49.9 JG: There are always going to be forces pushing against that, because someone can say, “You know, rather than being third in command for the big tribe, I would be better off being king of a smaller tribe.” And so the possibility of human hierarchy is always creating incentives to defect at the cultural level, and I think that that’s really what’s going on with ethnic nationalism in the United States and other countries. The world has been moving towards more and more global values, but there are winners and losers in that process, and someone like Trump, or Le Pen in France, or Nigel Farage in England, and so on and so forth, they’re saying, “No, America First, France is for the French, we don’t wanna be part of this. Just us, just those of us who really got that us-sy feeling.” And they play to that, and then they say, “And those other people, they’re coming to kill you and rape you or your sisters and daughters, and they’re stealing your jobs, those people in China are stealing your jobs. And why would they do that?” Well, it’s hard to be king of the world, but maybe you can be president if you push those buttons hard enough, and so the great challenge is to…

1:01:33.8 JG: We have this open-ended flexibility to see us as very small or very large, and I think the challenge we’re facing right now, geopolitically, is between forming a truly larger us or breaking off into our smaller, more comfortable us-es.

1:02:00.8 SC: It’s a challenge. I just read a column in the New York Times by Tom Hiddleston very recently, quoting some academic paper that claimed that one of the reasons why authoritarianism can be so attractive is just ’cause it gives meaning to people’s lives, it gives them a purpose. And in a world that is so big, where you don’t meet everybody, and it can seem like there are invisible forces keeping you down, it is a very attractive position to just circle the wagons a little bit. Right.

1:02:33.6 JG: Yeah, absolutely. And I think that’s especially true if people feel like they’re sliding. There’s been this big debate in the social sciences: is the rise of Trump and ethnic nationalism about economics, or is it about tribalism, about hierarchy and us versus them? And I think it has to be both. The data point more strongly towards tribalism and us versus them, because individual variation in economic situation is not a very good predictor. It’s not about my pocketbook, and people don’t attribute their personal gain or loss of income to what’s going on geopolitically. But over the last 40 years, inequality in places like the United States has gone up and up and up, and the prospects of someone who doesn’t have a college degree have slowly slid down, and that is Trump’s base; it’s basically people without college degrees. And that sense of being left behind, I think, doesn’t require you to go down that nationalist route; you can go in a more Bernie kind of direction and try to push for a more egalitarian social structure. But for many people, the most comfortable and appealing route is to say, enough of this, I want a country that’s just for people like me.

1:04:05.4 JG: People who look and sound like me. And then that can really be reinforced by these wolf stories about the outsiders who are coming to take your jobs and physically harm you, and the elites who are in cahoots with them and who are just profiting off of your demise. Those stories really reinforce that sense of victimization, and they make that “let’s just be us” feeling very powerful.

1:04:44.5 SC: Well, it goes back in my mind to one of the things you said about utilitarianism right from the start: our intuitions, what we grow up thinking in terms of morality and correct behavior, very naturally give a lot more weight to people close to us and like us than to people far away. And actually, one of my worries about utilitarianism is that it has this unrealistic bent sometimes, where it says every person’s experience counts precisely equally, and even if that’s true in a sort of God’s-eye-view sense, it’s almost impossible to imagine real human beings acting that way. Right?

1:05:22.8 JG: Right. Well, so this is the pragmatist part coming back again: if you’re a good utilitarian, you start with humans as they are and don’t demand that they live up to some impossible ideal. So my view is, we’ve made a lot of progress towards that. People care a lot more about people on the other side of the world than they ever used to, and people’s moral circles are much wider than they used to be. The idea, at least in the West, of not eating meat, which is delicious, just because you care about animal suffering, would have seemed absurd for most of human history. So many of the things that we consider perfectly normal and reasonable now would have just seemed absurdly idealistic. But you’re absolutely right that there’s no prospect any time soon of people caring more about strangers than they care about their own friends and family, and maybe in the end, we don’t need to go that far in order to have a well-functioning global society. We just need to not burn ourselves up with carbon in the atmosphere, or nuclear weapons, or a supercharged pandemic. So the way I see it is, you don’t have to give up on loving your children, [chuckle] just give up 10% of your income or 1% of your income and devote it to good causes, and support political movements that are more egalitarian rather than less egalitarian, and that promote global cooperation rather than undermining it.

1:07:04.5 JG: My friend Charlie Bresler, who runs an organization called The Life You Can Save, which was started by Peter Singer and is ultimately about alleviating global poverty, has a great phrase that he brings into this, which is “personal best.” That is, it’s not helpful to think of this in terms of trying to be perfect, in terms of valuing all lives perfectly equally, of caring about strangers as much as you care about yourself or your friends and family. Instead, just ask yourself, “Can I be a little less selfish? Can I be a little less tribalistic?” And if that becomes easy, then you dial it up. As with anything else, whether you’re on a diet or you’re trying to get in shape or learning to meditate or whatever it is, you don’t start off by saying, “Okay, I’m going to be an Olympic athlete tomorrow, or I will be very disappointed if I’m not.” Instead you say, “Okay, can I run one mile without stopping,” and then once that becomes easy, you build up.

1:08:15.2 JG: And so I think Aristotle got one thing right, which is that these things don’t just happen because we’ve heard some theory. You need practice, you need to build up, and I think as individuals and as cultures, we can scaffold and we can build up and make ourselves more globally pro-social without breaking the moral bank, so to speak.

1:08:34.5 SC: Let me just throw out a thought that I have no idea whether it’s relevant here, but I recently did a podcast with Christopher Mims, who is a journalist at the Wall Street Journal, about industrial ecology and how stuff gets to us in the modern world. The fun anecdote is, if you catch a fish off the coast of Scotland and you buy it from the supermarket in Scotland, chances are that after being caught, it was sent to China to be filleted and then sent back, ’cause it’s just so much easier and cheaper to do that than to fillet it in Scotland. And so my point is that even though our circles of caring have grown as we have become a more interconnected world, we’re also so differentiated in what we do that the things that are right next to us, or the ways of living that are right next to us, can become a little bit invisible. I wasn’t aware, until talking with Chris and reading the book, of the truck drivers who are bringing me this stuff, and I could go through my life not really connecting in some deep way to the people who serve me food, which maybe 200 years ago would have been impossible. Is this a new challenge for global morality, that we’re so complex and interconnected but also differentiated?

1:09:54.3 JG: Yeah, as technology has become more complicated and the economy has become more complicated, the effects of our choices as agents within the global economy are much more opaque, and sometimes it’s just interesting to think of how many different nations participated in making it possible for me to have an iPhone in my pocket or something like that. And sometimes it really matters. To come back to a familiar example, I think very few people would eat factory-farmed meat if they spent much time in a factory farm, but we’re just separated from it, and that’s why the companies that do factory farming work very, very, very, very hard to keep footage of what’s going on in there out of view. To us it just tastes like a chicken sandwich. And of course, that’s one example, but the same goes for labor practices and for reinforcing certain kinds of corporate structures. Every time you buy something from Amazon, you’re reinforcing a kind of economic structure, and I still buy things from Amazon, but I have concerns about that.

1:11:26.6 SC: I feel bad about it.

[chuckle]

1:11:27.6 JG: Sometimes I just want my big box of toilet paper, and I don’t wanna have to organize my life around this, right? So yes, I agree that the world is so complicated that it’s almost impossible for us to keep track of all the pieces, and yet it would be a terrible mistake to just throw up our hands and say, “Oh, it’s inscrutable, so I’m just gonna do what I feel like.” We have to find some middle ground where we pick our battles, learn what we need to learn most, and make good enough choices.

1:12:04.0 SC: Well, on the more optimistic side, you do… I forget whether it was your research or you were quoting somebody else, but you do have research where when you make people who are nominally in different tribes work together on some task and cooperate, they become much more friendly and trusting of each other. Is that an over-generalization or is that okay?

1:12:22.7 JG: Yeah, this is actually work that’s ongoing. First of all, this is an old idea, going back to Gordon Allport in his book, The Nature of Prejudice, where he emphasized that contact between groups in a cooperative context (he was mostly focused on race) can really bring people together. And he gave an example of something that was very salient at the time, the integration of the US military. We can say there are a lot of things wrong, historically and today, with the US military and other militaries, and the integration of the US military didn’t just happen for noble reasons; many people were happy to have Black people risking their lives for the United States instead of themselves or their children. But whatever complex and questionable motives might have been behind creating a more integrated military, it did create a more integrated military and a more integrated culture. Now, I’m doing research on this with a brilliant student named Evandee Philippus, who’s in the organizational behavior program at Harvard Business School and in the psychology department. And we have Republicans and Democrats working together as partners in an online quiz game that we’ve created, and I can’t say anything about the results here, other than I’ll say that I’m very excited about this research.

1:13:52.3 SC: Okay, good.

1:13:52.9 JG: And I think that the principles behind it could really be helpful.

1:13:58.2 SC: Well, good, and maybe that’s the right segue into wrapping up with more programmatic pointers for ourselves and our listeners, because you’ve actually, as far as I can tell, taken these thoughts about being a good person and a global meta-morality, etcetera, and thought about how to make them make a difference in our choices, in our everyday lives, in our giving to charities, and things like that.

1:14:26.2 JG: Yeah. So since publishing Moral Tribes and doing a lot of this work on very abstract and theoretical issues, I’ve wanted, as a moral psychologist and student of social behavior, to work on things that apply my philosophy rather than defend it, and to understand the mechanics behind it, although I’m interested in that as well. And one of the things that’s been most important to me personally and philosophically is living utilitarianism, living deep pragmatism, and doing what you can to do as much good as possible. The most straightforward way that individuals who are lucky enough to have some resources can do good is by supporting charities that are highly effective. Most people don’t know this, but the most effective charities are orders of magnitude more effective than typical charities. To give an example that comes from Toby Ord: you can help a blind person in the United States by paying for the training of a guide dog, and that costs about $50,000. And that’s a really wonderful, valuable thing to do; if you could spend $50,000 on a car for yourself or an addition to your house or something like that, you do much more good by helping a blind person get around in the world. But in other nations, you can fund a surgery that prevents blindness from a trachoma infection, and it can be done for about $100. You can prevent somebody from going blind for about $100.
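As a back-of-the-envelope check on those figures (these are the round numbers quoted in the conversation, not precise estimates, which vary by source):

```python
# Rough cost-effectiveness comparison using the round figures quoted above.
guide_dog_cost = 50_000      # approx. cost to train one guide dog (helps 1 person)
trachoma_surgery_cost = 100  # approx. cost of one sight-saving trachoma surgery

surgeries_per_guide_dog_budget = guide_dog_cost / trachoma_surgery_cost
print(surgeries_per_guide_dog_budget)  # -> 500.0: one guide-dog budget funds ~500 surgeries
```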

1:16:08.0 JG: So we’re talking about something like a 500-fold difference in effectiveness. And it’s not to say that the blind person in the United States isn’t important or that we shouldn’t help them, but if you have to choose between helping 500 people not go blind or one person manage their blindness, to me, there’s no question about what we should do. And there are many things that are as effective as trachoma surgery: things like distributing insecticidal malaria nets, which for a very low cost can prevent people from dying of malaria, or deworming treatments, where a simple pill can destroy parasitic infections that are really debilitating and prevent children from going to school and getting an education. Those deworming treatments can cost less than $1 and make a huge difference in someone’s life. So the charities that support these things are enormously impactful. It’s kind of mind-blowing how much good they can do for so little money, right? And so then… And I try over the years…

1:17:05.5 SC: Sorry, do you know… Do you know, by the way, GiveWell?

1:17:08.9 JG: Yes, so actually… So GiveWell plays an important role…

1:17:12.0 SC: I just wanna mention very very quickly, GiveWell is a sponsor of the Mindscape Podcast.

1:17:17.6 JG: Oh great, okay. Yeah.

1:17:18.8 SC: I suspect… I’m recording this weeks ahead of time, of course, but there’s a high probability that I will be giving an ad for GiveWell during… I’ve probably already given it by now, so I’m glad we’re on the right wavelength.

1:17:27.5 JG: Oh good, good, yes. So GiveWell was the organization that really pioneered doing this effectiveness research, and the thing I’m about to talk about, Giving Multiplier, makes use of their research. So what do you do? I’ve been studying the psychology of this for a while: how do you get people to give more, and how do you get people to give more effectively? And I initially tried, as a psychologist, experimenting with convincing people the way I was convinced, which is with philosophical arguments of the kind that Peter Singer gave: “There’s a child who’s drowning in the pond, and you could save that child, but you’re gonna muddy up your clothes. It’ll cost you some money, but you save someone’s life. Is it worth doing that?” And I said, “Of course.” And so I present people with this and say, “So now, are you willing to commit to supporting the most effective charities?” and they’re like, “No.” A very small number of people will go that direction, but most people, not so much.

1:18:24.3 JG: And then with Lucius Caviola, who’s been a postdoc in my lab, we hit on a fundamentally different strategy, which in retrospect takes a much more deep pragmatist approach. Instead of saying, “Don’t use your money on yourself or on the charities that you love that really speak to your heart; instead do this other thing, supporting deworming treatments in Africa and Asia where you have no personal connection,” we just said, “What if you do both?” And so we started doing these experiments where we just said, “Hey, you have a certain amount of money. You could give it all to this charity that you chose, or all to this charity that experts recommend, or you can do a 50/50 split.” And we found that people love the idea of doing a 50/50 split, and that more money ends up going to the highly effective charity if you give people the 50/50 split option than if you don’t, even though only half of the money is going to that charity.

1:19:20.2 JG: And so we did some other experiments where we tried to understand why this is the case, and it seems like it’s a kind of heart-head balance: people wanna give something to the charity that they love, but then once they’ve given something, they also really like the idea of being highly effective and competent and using the scientific research to do as much good as possible. And we found that people really like this. If we say, “Hey, if you make a 50/50 split where you pick one charity and we pick one of ours, we’ll add 25% on top to both donations,” people go, “Oh wow, that’s great.” And not only that, we asked people, “Hey, would you be willing to take some of what you gave and use it to pay matching funds for somebody else?”

1:20:11.7 JG: A lot of people were willing to do that. Not everybody, but a lot. And when we did that, when people put a dollar into matching, it ends up moving more than a dollar towards highly effective charity. So we said, “My gosh, we should try this in the real world.” So we created this website… When I say we, I mean Lucius Caviola, my collaborator, and his awesome web design friends created this website called Giving Multiplier, which works just the way I described. So if you go to givingmultiplier.org, you’ll see an option first to pick any charity that you like, any registered charity in the United States, and you enter that in, and then it says, “Okay, so that’s the charity that you chose, your personal favorite.” And then we have a list of currently nine charities that cover a lot of different things. Some of it is global health and poverty: fighting malaria, deworming treatments, vaccines, trachoma surgeries, and things like that.

1:21:12.2 JG: We have a climate change charity that seems to be extremely effective. We also have a charity called GiveDirectly, which just gives money to people in poor nations, and research shows that they do an excellent job of using that money to build businesses and take care of basic necessities, and that it ends up helping the whole community. Some people like this because you might see it as less paternalistic: you’re not saying, “Here, have a malaria net,” even if that’s not what you would ask for. Instead it’s just saying, “Here are some resources; I’m gonna trust you to do as much good for yourself as you can with them.” So we have a lot of different things, but all of these things have either been rigorously tested with experiments and shown to be highly effective, or they’re more long-term things, like preventing the next pandemic, where it’s reasonable to assume that a relatively modest investment in research can have a huge impact.

1:22:07.8 JG: So you pick your charity, you pick one of these charities that experts recommend, and then you say, “Okay, how much do I wanna give total?” And then we have this little slider where you can decide how much you wanna allocate to your personal favorite charity versus the charity from our list, and the only constraint is that you have to give at least 10% to one of the charities that we recommend. And then Giving Multiplier adds a match to your donation. I think right now it’s a 50% match if you give everything to the highly effective charity, and we do a 25% match if you split 50/50, and you can do things in between.
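As a sketch of the allocation and matching arithmetic just described: the 10% minimum and the 25% and 50% match rates are the numbers quoted above, but the rule for splits in between is not specified in the conversation, so the linear rule below (match rate equal to half the effective-charity share, which happens to pass through both quoted points) is purely an assumption:

```python
# Toy model of a Giving Multiplier donation, per the description above.
# Quoted: >=10% to a recommended charity; 25% match at a 50/50 split;
# 50% match if everything goes to the recommended charity.
# Assumed: in-between splits interpolate linearly (match = share / 2).

def split_donation(total: float, effective_share: float) -> dict:
    assert 0.10 <= effective_share <= 1.0, "at least 10% to a recommended charity"
    match_rate = effective_share / 2  # assumption: linear interpolation
    return {
        "favorite charity": total * (1 - effective_share),
        "effective charity": total * effective_share,
        "added match": total * match_rate,
    }

print(split_donation(100, 0.5))  # 50/50 split -> $25 match, as quoted
print(split_donation(100, 1.0))  # all in     -> $50 match, as quoted
```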

1:22:57.7 JG: And this has been great so far. We launched this less than a year ago, and it’s raised over half a million dollars, most of that going to some of the world’s most effective charities. And we have created a special code that unlocks a higher matching rate for your listeners. So if you put in the code MINDSCAPE, all caps, all one word, then you get a higher matching rate than you otherwise would. So we encourage people to give this a try. And the idea here is that you don’t have to be a perfect utilitarian. It’s okay to care, and I do this myself. My wife and I give to our local schools and the Greater Boston Food Bank and things like that; we don’t give up on having these more personal connections. So you support something that’s personally meaningful to you, but you also give something to something that the research says is super effective, and then we help out. And we also have a system where, if you donate, you have the option to donate part of your donation to the matching fund, so you can pay it forward for somebody else, and a lot of our donors do this.

1:24:04.4 JG: And the whole cycle has just been self-sustaining. So it’s been this wonderful, virtuous circle of effective giving where people support things they personally care about, but also go with the research and it’s just been running on its own, and so we’re excited for our second holiday season coming up, and hopefully we’re gonna get over a million dollars and then some. So I hope your listeners will find it useful and gratifying.

1:24:32.6 SC: Yeah, no, it’s a great pitch, and I’m a big believer in both the head and the heart, so I think that’s a very good way of saying it. I’m always gonna give some money to the local pet shelter where we got our rescue kittens, even though I know that it’s not quite as impactful on the world as giving money to fight disease or poverty elsewhere in the world, but I also try to do that. And likewise, you and I both like it when rich people give money to universities to help us do research and support students and things like that. Also probably not the most impactful, but I think the idea that you do a little bit of both and everything gets better is a very clever one.

1:25:10.2 JG: Yeah, yeah. So thanks. I’m excited about this and looking forward to seeing where it goes.

1:25:17.3 SC: And it works whether or not you are a utilitarian, which is the best thing about it, because opinions differ on that.

1:25:26.3 JG: We view this as a way to expand the circle of effective giving, and to create a way for people to do a lot more good than they otherwise would, while still feeling connected to what they’re doing, and feeling connected to the other people who are part of this; when you support the matching fund, you participate in this cycle.

1:25:55.5 SC: Good. Lots for the listeners to think about, feel and do also. So Joshua Greene, thanks so much for being on the Mindscape Podcast.

1:26:02.4 JG: Yeah. Thanks so much for having me, this has been great.

14 thoughts on “176 | Joshua Greene on Morality, Psychology, and Trolley Problems”

  1. The advantage of deontology is that you don’t have to think ahead. Utilitarianism is uncertain: you have to foresee consequences, and how far into the future? So they aren’t really direct choices of value; the comparison depends on how well one can predict the future. JG sort of addresses this by calling it deep pragmatism, but that just seems to be a name and isn’t really filled out.
    Human experience sounds like the right “common currency”. But it’s not lack of suffering or well-being; it’s whatever a person values. Yes, in general people don’t like suffering, but they may accept a lot of suffering for some abstract cause, and it’s no one’s place to tell them they’re wrong. Having introduced it, though, there’s not really any discussion of how to use it in negotiating public policy. Maybe in his book?

  2. Fascinating topic and interview.

    There are many versions of the so-called “Trolley Problem”, a series of thought experiments in ethics and psychology involving stylized dilemmas about whether to sacrifice one person to save a larger group. Most of them are pretty far-fetched and unrealistic, but there have been real-life situations in which those kinds of decisions actually had to be made. A prime example was in World War 2, when the decision had to be made on whether or not to drop 2 atomic bombs on Japan, with the goal of ending the war with as few casualties as possible. The article posted below, ‘Can nuclear war ever be morally justified?’, examines that decision, asking the same kinds of questions that come up in the Trolley Problem.

    https://www.bbc.com/future/article/20200804-can-nuclear-war-ever-be-morally-justified

  3. Pingback: Sean Carroll's Mindscape Podcast: Joshua Greene on Morality, Psychology, and Trolley Problems - 3 Quarks Daily

  4. Joshua Greene gets a good deal right in his discussion of the origins of moral feelings. Morality arises from the need to survive in groups. Most everything we get in life comes from other people, and if we don’t cooperate with others, and go around breaking social rules everywhere by hitting people and stealing their stuff, our social group will very quickly ostracize, imprison, or otherwise eliminate us from participating in group benefits. So that’s how behavioral/moral rules arise. And if your group shares social or religious rules and rituals, you had better cooperate and participate in those too, or you may lose many friends and allies. Try living in an Islamic fundamentalist society without going to the mosque and paying lip service to the dominant religion. You won’t last long. So cooperation is a matter of survival and self-interest.

    This much Greene gets right. But his approach to pragmatic utilitarianism doesn’t work at all and doesn’t solve basic conflicts in values and interests. And this is where Greene goes off the trolley tracks. If Putin wants to invade Ukraine, it isn’t going to help to tell him and his Russian followers to just “be a little less tribal”. No, they want to take back Ukraine as a vindication of the tribal moral glory of Russia, and they are going to do it by invasion or coup regardless of whether most Ukrainians, NATO members, or other Europeans or Americans think it is right or wrong. Nationalist conflicts and most other conflicts cannot begin to be solved by telling the parties to just “reflect a bit more on being a little less tribal”. These are groups with opposing national goals and values, and they want to subjugate their opponents to their own values. They don’t agree on any greater good.

    And that’s the whole problem with all forms of utilitarianism, which is just as hopeless as deontology in solving moral problems. There is no universal greater good, because everyone disagrees on what it would be. Putin doesn’t agree that avoiding suffering for his opponents is a greater good. In fact, he thinks their suffering is a great benefit to his own conception of how things can be. Eliminating suffering and making people have positive conscious experiences cannot by definition be a universal moral goal, because that goal can never be universally shared or achieved. Just as people who like steak are not going to outlaw cattle farming because cows suffer, so Republicans don’t care that electing ideologues like Trump upsets Democrats. They have diametrically opposed tribal interests, values, and goals, and utilitarianism can do nothing to solve those conflicts.

  5. Brent, unless you believe that values are real objects, the difference between using values and experience is not much. If you are going to be a value realist, you’ve just about back-doored moral realism, haven’t you?

    Experience incorporates values. If you are running a marathon your body will hurt but if you “value” competing – or completing, winning, etc – you are experiencing fulfilment of your desires/preferences, or even experiencing the anticipation of fulfilment. That makes the total experience positive so you keep running, despite the pain.

  6. It’s hard to listen: I set my teeth hearing a Darwinian base for morality.
    Darwin is not life. Natural selection is not living. Only scientists see through the reductive prism of a simple ‘survival of the fittest’ to define culture, morality, kindness, and reason. Every empirically based scientist, and that includes physicists who believe in unprovable Everettian multiverses, plays up what they believe to be a skeletal framework for all they see.
    This theory is akin to Freud’s biological approach to libidinal drives, thoroughly discredited as naive over a century later. Darwin is a tool, as all statistical approaches to culture and characterization of species are tools.
    It completely negates choice. Compassion. Our ability not to go extinct. Our chance at living an authentic life.
    I couldn’t listen beyond the first three minutes. Perhaps there is an abridgment further on?

  7. I got to wondering whether the ideas rebranding utilitarianism as deep pragmatism are inherent to utilitarian thought or are his own interpretation of utilitarianism; somehow it doesn’t quite feel like utilitarianism. If they’re already in utilitarianism, then the label “utilitarian” is unfortunate: it clearly suggests there is an abstract utility that has to be maximized, which is where all the distortions of the theory come from (maybe this comes later, imported from economics? I’ll have to check). Greene’s dismissal of extreme utilitarian consequences using his pragmatic approach seems intuitionistic and could be an Achilles heel that deontologically minded philosophers may stab at. I got the feeling that his arguments in favor of deep pragmatism can be formalized beyond this intuitionistic point, but being a scientist, I assume he doesn’t want to take that step, which is debatable. Made me think of coining the term (don’t know if this has been proposed before) minimax morality: instead of maximizing compound utility, you try to maximize minimum well-being.

    On trolley problems, Greene argues against putting too much stock in our intuitions and gives an example where they fail. But I actually do think that a passive outcome by collateral damage is more acceptable than an active one that uses someone as a means to an end, even though the outcome may be exactly the same, and it may be enlightening to speculate why. If natural selection has conditioned us to react differently to these scenarios, there has to be a rationale, which I speculate to be this: in any situation, real or fictitious, trolley problems included, I am usually exposed to partial information; the hidden information is just as relevant to the correct decision, and even though I don’t know it, I can characterize it probabilistically. I would wager that using someone as a means to an end, like pushing him onto the tracks, interferes with correct choices he made, such as not standing on the tracks, and is likely to sacrifice a fitter individual than the ones on the track. But changing the direction of a train to affect fewer people as collateral damage is a choice, all else being equal, between interchangeable individuals who have put themselves in similar peril. This is not airtight, of course, and doesn’t respond to all trolley problems, but that’s the general idea.

  8. The video posted below, “The Science of Tribalism | Why We Hate”, examines how genetics, family, friends, companions, political leaders, and the media all contribute to how we judge ourselves and others. It also takes a close look at how social media like Facebook makes you seek comfort and security in your partisan identity, which means you’re more susceptible to the idea that other parties’ followers are not just bad people but a threat to you personally, which will justify extreme action and violence to keep them out of power.

    https://www.youtube.com/watch?v=elw0I9CIIPo

  9. In the video posted below, ‘Rebecca Saxe: The Brain vs. the Mind’, cognitive neuroscientist Rebecca Saxe explains the difference between the brain and the mind, and why this matters in our understanding of human nature. It’s also a nice tie-in with the main topics of this podcast: morality, psychology, and trolley problems.

    https://www.youtube.com/watch?v=vLcdKXE4R0s

  10. Great discussion!

    The point that utilitarianism is fair because every perspective is seen as equally worthy is something that I see complexities to. I’m remembering this cartoon about the difference between equality and equity: https://static.diffen.com/uploadz/3/37/Equality-equity-justice-lores.png I could imagine a society that was truly and effectively organized around maximizing the total, combined happiness (and only the total, combined happiness) of all participants, and I could imagine that society being experienced as reasonably close to paradise by 60% of participants, & experienced as anywhere from ‘just ok’ to horribly lonely and isolating by the other 40%. I worry that there’s a danger for utilitarianism to be used by bad actors as a way to make bullying, scapegoating, and ostracization look like a respectable and ‘natural’ pursuit of ‘the greater good’ because it’s ‘just’ an individual or a minority group who’s suffering (even without going so far as to insist that the targets should literally die for the greater good). Maybe the more robust and thoughtful kind of utilitarianism discussed here cares more about that kind of thing as an outcome that isn’t actually good, & is less interested in deciding on principles and following them down any elevator to hell that opens up in the process, ha ha.

    I don’t know if there’s an academic term for this, but it makes sense to me to treat the total happiness of everybody as only one of several possible things that could be maximized. Like how in statistics, you might care about the mean, the median, or the mode depending on what kind of understanding you wanted from the data. I could imagine deciding it’s important to maximize the total happiness of everybody, and also deciding it’s important to try to maximize the number of individuals whose level of happiness is above some threshold (and coming up with various other metrics to control for other situations). Or is there a danger there that you’d just end up with the society from “The Ones Who Walk Away from Omelas” by Ursula K. Le Guin, where society is a paradise for 99% of people and then there’s one profoundly unhappy person in the basement, whose needs managed to slip past every imperfect, well-intentioned metric? Morality is hard!
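    To make that concrete, here is a toy comparison (all numbers invented, and “happiness” scores are of course a cartoon of real well-being) of three such aggregation rules: the utilitarian total, the level of the worst-off person (which is what catches the Omelas basement case), and the headcount above a threshold:

    ```python
    # Toy "societies" scored under three aggregation rules. Numbers invented.
    omelas = [9, 9, 9, 9, 9, 9, 9, 9, 9, 0]   # paradise for 90%, misery for one
    modest = [6, 6, 6, 6, 6, 6, 6, 6, 6, 6]   # uniformly okay for everyone

    for society in (omelas, modest):
        total = sum(society)                  # utilitarian sum
        floor = min(society)                  # worst-off person
        above = sum(h >= 5 for h in society)  # headcount above threshold 5
        print(total, floor, above)
    # omelas -> 81 0 9   (highest total, but the floor exposes the basement)
    # modest -> 60 6 10
    ```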

  11. Jackie, thanks for posting the great cartoon ‘EQUALITY VERSUS EQUITY’. By definition, utilitarianism is the doctrine that an action is right insofar as it promotes happiness, and that the greatest happiness of the greatest number should be the guiding principle of conduct. Whereas by definition, equality is the quality or state of being equal: having the same rights, social status, etc., and equity is fairness or justice in the way people are treated.

    As far as which is more important, promoting happiness for the greatest number of people, or making sure that all people are treated equally, to me is a no-brainer. The goal of any nation, culture, society, should be to make sure all its members are treated equally, and to do what it can to make sure its members treat others not in their group with respect and dignity. Utilitarianism concerns about the greatest happiness of the greatest number should be, at best, of secondary importance.

  12. Unfortunately the Giving Multiplier website wouldn’t take me to the page where I could complete my donation. I hope others have had better luck using the website.

  13. Alan Grinnell Jones

    “The meta-morality that I favor has historically been known as “utilitarianism,” but that’s a very bad name for it. I prefer to call it “deep pragmatism,” a name that gives a clearer sense of what it’s really about. Deep pragmatism boils down to this: Maximize happiness impartially. Try to make life as happy as possible overall, giving equal weight to everyone’s happiness.” 11/7/2013 interview with Greater Good Magazine

    Having read “Moral Tribes” and listened to this interview, I still don’t know whether Joshua Greene’s “deep pragmatism” is an intentional conflation with pragmatic ethics or just inadvertent. Also, as he sees morality as a natural phenomenon, why does he ignore John Dewey’s cultural naturalism (Dewey’s pragmatism)? For a good summary of pragmatic ethics see: https://www.hughlafollette.com/papers/pragmati.htm

