227 | Molly Crockett on the Psychology of Morality

Most of us strive to be good, moral people. When we are doing that striving, what is happening in our brains? Some of our moral inclinations seem pretty automatic and subconscious. Other times we have to sit down and deploy our full cognitive faculties to reason through a tricky moral dilemma. I talk with psychologist Molly Crockett about where our moral intuitions come from, how they can sometimes serve as cover for bad behaviors, and how morality shapes our self-image.

Support Mindscape on Patreon.

Molly J. Crockett received her Ph.D. in Experimental Psychology from the University of Cambridge. She is currently Associate Professor of Psychology and the University Center for Human Values at Princeton University. She is a Fellow of the Association for Psychological Science and the Society for Experimental Social Psychology.

0:00:00.0 Sean Carroll: Hello, everyone, welcome to the Mindscape podcast. I'm your host, Sean Carroll.

0:00:03.7 SC: And I don't know about you, but most people I know think of themselves as good people, as the good guys. In the cosmic struggle between good and evil, most of us think that we're trying to be good. This is a flaw in many movie villain scenarios: they're Dr. Evil types, they're trying to be evil, whereas most people don't think of themselves as evil. People think of themselves as good, and yet they end up disagreeing about what it means to be good and how one should act if one is good.

0:00:35.3 SC: So there's a philosophy question here, obviously, what does it mean to be good, to be moral, to do the right thing. And we've had discussions about moral objectivity versus constructivism, or what have you. There's also a psychological and neuroscientific question about why people want to be good and what is happening in them psychologically when they think they are being good. That's where we are today. In today's conversation, we're talking to Molly Crockett, who is a psychologist and neuroscientist who studies morality as it is actually practiced in human beings, so not necessarily trying to decide what you should do, but understanding why people are trying to do the things that they should do or how they come up with the idea of what it is to do the right thing.

0:01:25.4 SC: Is it ingrained in us? Do we learn it? Is it something where there are evolutionary explanations for this? And very importantly, how can we change our personal views on morality? We do learn things as we grow up, maybe there are things that when we were kids we thought were perfectly moral behaviors, and now we've changed our minds. And as Molly points out, there's a very crucial modern question about how technology and communication in the digital age is changing the way that we think about morality. We're faced with a set of circumstances and an environment that we were not evolved to understand, where things are very, very rapid in how they appear to us and how we can respond to them, and they are manipulated by politicians and algorithms and so forth ginning up outrage or, for that matter, bringing us together in different ways.

0:02:19.7 SC: So there's a mixture of not just morality, but empathy, ethics, politics, and so forth. So again, it's a very, very Mindscapey kind of set of questions that come together when you say: What does it mean to say that we are moral, why are we doing that, how does it actually play out? And Molly is not just a psychologist who collects data in a lot of different ways, psychology experiments, watching people in the field and so forth, but also has a bit of a philosophical background. In fact, we met at a philosophy conference, even though neither one of us has a degree in philosophy, because philosophy brings everything together, so there's going to be some talk about that as well. So let's go.

[music]

0:03:18.4 SC: Molly Crockett, welcome to the Mindscape podcast.

0:03:19.7 Molly Crockett: Thank you, it's a pleasure to be here.

0:03:21.8 SC: So you think about morality, but rather than the philosophers we sometimes talk to, you're mostly a psychologist. I know that you hang with philosophers, that's okay, that's good. So therefore, I would presume that you're more interested in how we reason morally as a matter of fact, rather than how we should reason, like what is the ultimate true theory of what is right and wrong.

0:03:45.4 MC: I mean, I'm interested in both, and I think that ideally the data can inform the theory and vice versa, but yeah, I think that mostly I'm in the business of description as opposed to prescription.

0:04:02.6 SC: And so how does it work? Boots on the ground, as it were, are you experimenting, are you going into people's brains, are you giving them psychology tests?

0:04:11.4 MC: We use a lot of different methods in our research, ranging from asking people to read about hypothetical scenarios and make judgments about them, to experiments where people are making real decisions that have material consequences for themselves and other people, to naturalistic observation studies on the internet, like how people express moral outrage on Twitter, to going into the field and looking at how people's moral cognition changes at festivals like Burning Man and other types of transformative experiences. So really the whole range.

0:04:48.7 SC: Okay, you have to give us a little bit of a hint about how people's moral judgments change when they're at Burning Man versus when they're at home.

0:04:56.0 MC: Well, in that line of work, we have been interested in a phenomenon we call moral expansion, which is about the orientation of morality: Is it directed towards your near and dear, or is it expanded to encompass all of humanity? And what we find is that with every passing day that people spend at an event like Burning Man, the more transformed they feel and the more their moral circle expands to encompass everyone.

0:05:28.8 SC: Well, that's actually very interesting, and is that... Do we attribute that to the sort of common feeling of being in the group, that it's some kind of festival, or is it more that there are certain substances one imbibes in these instances that change how you think?

0:05:44.2 MC: We don't have a very precise answer yet, and of course, there are any number of factors that can contribute to moral expansion at multi-day mass gatherings like Burning Man that we've studied. The sociologist Émile Durkheim coined the term "collective effervescence" to describe the feelings of a really joyful, exuberant, expansive connection that we feel when we come together in large groups for a common purpose. And we think that that has to be part of it.

0:06:15.5 MC: It is true that people do consume mind-altering substances at Burning Man and other mass gatherings like this, but not as much as you might think, in fact, and in our studies, we really carefully controlled for the substances that people reported using. And what we find is that yes, as you would expect, people are more likely to report having a transformative experience after recently using psychedelic substances. But over and above any type of substance use, we find that the more time people spend at these collective gathering events, the more transformed they feel, the more positive emotions they report, and so on.

0:06:57.0 SC: We should mention, because you're using the phrase transformative experience, that we did a podcast interview with Laurie Paul back in the day, so anyone who wants to really dig into the philosophy of that can check it out. But I like what you're saying here, even though this is not at all what I planned to get into first. We have this feeling, I guess, that there's a mob mentality that can take over, and usually that has negative connotations. So in some sense, you're giving us the positive side of that communal feeling that we can get.

0:07:27.8 MC: Yeah, and I think that what we can learn, when we take together our research and other similar research on the positive effects of mass gatherings, as well as the dark side of mass gatherings, ranging from mobs, as you say, to Nazi rallies in the Third Reich, to more recent rallies on the right that we've seen in the US, is that we are story creatures. We are really, really responsive to the narratives that we tell ourselves and others about human nature, and I think of mass gatherings as amplifiers. So whatever the story that's being told, the culture of a mass gathering, in terms of how we treat each other, what is our view of humanity, is it basically good, is it basically evil, whatever that story is being promoted, the mass gathering is going to amplify it, and that builds a culture that is going to profoundly shape people's thoughts, feelings and behaviors.

0:08:32.6 SC: That's a very interesting point. We also recently had Michael Tomasello on the podcast, and of course, he pushes this idea that the social abilities of human beings are really what set them apart, or enable all the other things that set us apart, and I guess morality is part of that. I'm guessing, what do I know, I'm a physicist, but we can feel connected to other human beings in special ways due to those social capacities.

0:09:00.3 MC: Absolutely.

0:09:01.6 SC: And okay, with that on the table, that was very interesting, I just couldn't help but ask, but let's back up and be a little bit more basic. What do we mean when we talk about morality? Do we even agree on what that's supposed to mean?

0:09:15.3 MC: Oh, no.

0:09:16.8 SC: That was a leading question.

0:09:16.9 MC: I hate that question, actually, because I don't have a good answer for it. And actually, morality is a more specific term, oftentimes. I am shifting more towards favoring the term norm psychology or normativity, which concerns how people think about what is appropriate or inappropriate, what is allowed or disallowed within a society. And of course, these are all social constructions. We know from research on the continuity between social cognition in animals and what we have as humans, and Michael Tomasello is one of the leading scholars on that, that you can trace a line between what you might call moral sentiments in humans and things like empathy, care and concern in other mammals. But aside from those really, really basic domain-general building blocks of normativity or morality, there's tremendous variation within and across cultures in terms of how we think about what is right and wrong, how we talk about it, how we enforce it, how we learn it.

0:10:35.6 MC: And so I'm really, really interested increasingly in that social construction, that cultural side of norms and where do they come from, and do we have control over them, and crucially, how is technology changing the way that we think about right and wrong, the way that we enforce moral norms on other people and internalize those to regulate our own behavior.

0:11:02.9 SC: That's good, because we're going to answer all of those questions in the next hour, so I'm glad that you have raised them. But it's a very good point to switch from sort of morality talk to norm talk, because morality talk does kind of presume we know what is moral and what is not, and people disagree about that. Is the disagreement... I guess, does it all start in development? When we're kids, are we born somewhat moral creatures, or at least with something we would recognize as norms, or are those taught to us?

0:11:33.5 MC: There's still a lot of debate about this. We are born with a capacity to learn, particularly to learn in a social sense, so to learn through observation and from social feedback. So if you ask scholars of human nature what is most distinctive about humans, it's not necessarily morality or norms, but rather our unbelievable capacity to learn socially, and not just to learn and adapt within our lifetime, but to transmit knowledge through culture from generation to generation.

0:12:19.0 MC: And I think that it's not a coincidence that we are such norm-focused creatures, because social cohesion is really important for survival; cooperation gives us an advantage over other species. Many people have written about this, and this is not directly what I work on, but I think, in particular, the fact that we care so much about what others think of us, and our well-being is, for most people, really, really tied to our sense of how am I fitting into the social world, what is my social standing.

0:13:02.1 MC: And so our ability to represent and learn about norms is a really crucial part of that calculus that we are all making all the time, like, am I doing the right thing? Am I fitting in? Where we get into trouble is where the question am I doing the right thing gets disconnected from the actual flourishing and welfare of humans and other species. And I think in particular, with the massive amounts of normative discourse on social media today, what we're seeing in real time is this really, really rapid cycling of proposals about new things we should care about, backlash to those proposals, backlash to the backlash, backlash to the backlash to the backlash, and so on ad infinitum.

0:14:00.8 MC: And so I'm really interested in this process, starting from the seeds of moral cognition that we can see not just in babies, but also in other mammals, empathy and altruism, but then the layers that get put on top of that as we learn and grow and talk to each other, and talk to each other about what we're talking about and so on. And it's fascinating and also deeply troubling, so it's a really intense time to be studying this kind of thing.

0:14:42.1 SC: There's always some time lag between when we record these and when they're released. But as we're recording this, the thing that is going crazy on Twitter is gas stove discourse. I don't know if you've listened in on gas stove discourse, but there's this idea that...

0:15:00.2 MC: I have not.

0:15:00.3 SC: That the government is going to ban gas stoves in our houses. They're not, of course, but some people have pointed out that there are health concerns, that you can have deleterious health effects from having gas stoves, and so maybe we should try to minimize them or try to control them or whatever. And so this is ginned-up outrage, because people are like, "You're never going to... You'll drag my gas stove out of my cold dead hands." I'm sure that it'll be forgotten a week from now.

0:15:28.1 MC: This is such a good example of the way that social media algorithms are designed, which is first and foremost to grab our attention in order to keep us online for longer, in order to make the most money. And by definition, the stuff that captures our attention is information that's really extreme, that provokes outrage, even if that information is super far-fetched or not representative at all of how most people think or what's likely to happen in the future. And so then you have this discourse and meta-discourse around pieces of information that may be very, very divorced from things that will actually affect our lives.

0:16:15.1 MC: And I'm really interested in understanding more about why this happens and can we do something about it. And if so, what should we do about it? Because I get the question all the time, and I'm going to preempt you asking this question, which is like, how should we redesign social media. And it's a hard question, because it is a "should" question, it's a prescriptive norm kind of question, and to answer that question, you need to not only understand the dynamics of how social media as it is currently designed is shaping discourse and behavior, which we still... This is such a new area of research, but then also what is your meta-ethical theory of what is the right way that society should be set up. And that's all really, really tricky stuff.

0:17:09.0 SC: That is a difficult one, and I will add to the difficulty the fact that these features of social media can be weaponized by political actors, right? They will exaggerate the extent to which a certain bad thing is happening because it gets their base outraged. That's what they want.

0:17:26.0 MC: Yeah.

0:17:28.1 SC: Okay, so there's a bunch of things that went by in your very nice discussion there. You're talking about morality in a way that, as a psychologist, gets down to how people act and how they talk about how they act, etcetera, not in some objective, moral realist sense. Well, let's put it this way: it makes perfect sense that norms developed over time through evolution and things like that, and we give them the label of being moral rights and wrongs. Do you feel a need to take a stance on moral realism versus relativism or constructivism or something like that?

0:18:06.1 MC: I mean, I do feel a need to take a stance, and I also feel under-educated in the ability to take a stance. I'm not trained in philosophy; I wish I was. When I was in high school, I dreamed of going to university to study philosophy, and I signed up for a philosophy class my first semester. I think it was a philosophy of mind class. I don't remember who the professor was. I went the first day and I was like, What is this? I want something real to grasp on to, where is the data? So I majored in psychology and neuroscience and have been really at home in those fields, but increasingly gravitating towards questions around morality and normativity. And as my career has progressed, I have come to increasingly identify as a philosopher, and I think I'm reading as much philosophy now as I'm reading psychology and neuroscience, and I just find it really, really rich and helpful.

0:19:14.8 MC: And so one of my to-do items that I hope to have time for at some point is to audit a moral philosophy course somewhere, 'cause I really... Like my knowledge is not top down, it's magpie-ish, I have these little bits and pieces of understanding, but I don't feel like I am educated enough to take a firm stance, but I want to be more educated to be able to move towards that.

0:19:48.3 SC: Have you heard of... And this is very unfair, given what you just said, but have you heard of what is called the evolutionary challenge to moral realism?

0:19:56.2 MC: I think so, yes.

0:20:00.0 SC: Basically, the idea that we would, through evolution, develop norms and think that there are certain right things and wrong things to do, and wouldn't it be weird if those just happen to coincide with some objective moral reality, like that seems like too much of a coincidence.

0:20:17.3 MC: Right, yeah, that's really interesting.

0:20:17.4 SC: But anyway, you personally don't need it. You don't need to care whether or not there are objective moral standards out there, if what you're interested in is how people think about morality themselves and how they justify it, I'm guessing.

0:20:29.6 MC: Well, yes and no. I used to think that, and I am increasingly becoming skeptical of this divide that exists between science on the one hand and values on the other. It's a pretty common sentiment in moral psychology that, like, "Oh, our work cannot speak to normative questions, we're doing the description, you can't get an ought from an is," and so on. And at a surface level, that's true, but also, our values infect the way that we choose to ask questions, and the questions that we ask.

0:21:12.9 MC: So just to give a concrete example, a very popular area in moral and political psychology right now is how can we get Democrats and Republicans in the US to talk to each other more, and how do we bridge these divides. Can we get people to meet in the middle and compromise? How do we get people to do that? And that is a project that has values in it, in the sense that there is a goal to get people to meet in the middle. And on the surface, that seems like a really great goal, and in many, many, many cases, it is a really important goal, but what if that goal is applied to getting Nazis and liberal democrats to meet in the middle? Like... No.

0:22:18.1 SC: If one side is just right, then meeting in the middle is not the right thing to do.

0:22:22.4 MC: I just think that, especially when you're studying questions of morality and ethics, it's really important that we understand how our own values shape the science that we do. And we are not trained to do this in science; we are told through our training that science is objective and free from values. But we are humans, and so, of course, it isn't. And so a lot of the work that I'm thinking about now is, how should we be thinking about the scientific enterprise at a time when values are so... They've always been important, but increasingly science is looking at questions that are very laden with values, and so it's a challenge, for sure.

0:23:17.4 SC: Well, no, I think it's a really good point, because a lot of the times we say, well, we're objective, we're not depending on this or that set of values, but really what we mean is, I know what the right values are and I'm going to use them, and most people agree with me, so I think I can get away with it. So I think that you've put your finger on this distinction we draw between values and objective truth and so forth. There's a bunch of distinctions out there that I wanted to ask you about. I mean, one is this idea that morality serves as a check on how we would really like to act. Like, as a psychologist, is it accurate to think of people as these bundles of terrible impulses that morality sort of sits on top of and prevents us from doing all these terrible things?

0:24:08.3 MC: No.

0:24:09.0 SC: Okay.

0:24:09.2 MC: That's an easy answer. And that's not to say that we are fundamentally angelic either. What we are fundamentally is social learners, and the beliefs that we have about other people, our group, our culture really impact the way that we comport ourselves intentionally and deliberately, which is on top of or in concert with basic instincts that we have, which include a basic instinct towards empathy and altruism. And so there's some really beautiful work by my colleagues Julia Marshall and Paul Bloom, and this goes back to universalism versus a more parochial morality.

0:25:06.6 MC: And if you look at very, very young children versus older children versus adults, and how people show favoritism in terms of thinking there's an obligation to help family and friends more than strangers, that's something that develops with age. So very, very, very young children think you are equally obligated to help a stranger versus a friend or family member in need.

0:25:34.9 SC: Interesting.

0:25:37.0 MC: This favoritism towards people who are closer to us isn't something we're born with, it's something that we learn from our culture. And so that's just one of many examples of how there is a basic goodness that people have; we learn through interacting with others and from our culture that there are other ways of being. But this myth of selfishness, this idea of homo economicus, like people are basically selfish, everything can be explained in terms of self-interest, this is a narrative that is not grounded in science, and, I will add, is very convenient for capitalism.

0:26:28.0 SC: I noticed that, yes.

0:26:29.6 MC: Because what is the point of caring about other people if no one else cares? And it's really easy to fall into these self-serving narratives about human nature, and I worry about the broader implications of those narratives. What if we're telling stories about human nature that, A, are over-simplified and not really grounded in scientific data, and B, create self-fulfilling prophecies, where it's like, oh, everyone is selfish, so there's no point in me wasting my time to join a movement or demand better treatment for people who are less well off than me, and so on.

0:27:16.6 SC: Well, I have absolutely heard and am familiar with the idea that we're not inherently selfish, that's pretty clear. We have selfish impulses, we also have altruistic impulses. But then there's your claim, or your idea, maybe, I don't know how much of it is your work and how much you were quoting other people, that our sphere of caring shrinks a little bit as we learn and age. I mean, Peter Singer talks about the expanding circle: as we get smarter, we care about more people. Nicholas Christakis was on the podcast and has an idea that social beings just naturally have an in-group bias. But you're saying that... I mean, how dramatic is this claim? Is there no in-group bias when we're born, is that all given to us by culture, or is that something we don't know?

0:28:05.9 MC: So I'm glad you asked, 'cause I think there's a nuance here that's really important. The work that I'm referencing, I think, is concerning how people feel about social obligations towards close versus distant others within a group. There is also, separately, an in-group bias that seems to be pretty universal and is detectable from a young age. And so I think the puzzle or the challenge, as we are in a global society where the challenges that we face, pandemics, climate change, etcetera, are global in nature, is how do we conceive of humanity as a giant in-group, and that is a challenge.

0:29:00.9 SC: And I guess this leads right into something I also wanted to ask, which is how much do our moral impulses... I don't even know what to call them. Moral inclinations, impulses, intuitions, not quite our formal philosophy class moral systems, but our feelings, how much do they change over time, and how much can we change them intentionally by thinking, oh, my morality was wrong, I gotta change it.

0:29:24.8 MC: Great question, and I think there are a lot of different directions we could go in answering it. So the short answer is that I think we come with a sort of basic toolkit of moral sentiments like empathy, vicarious joy, an attunement to the pleasures and pains of other people, and the learning that takes place on top of that is like how do we direct or tune or shape those basic sentiments to the social context, right. So we are not born knowing who is in the out-group, this is something that we learn, although there is some evidence for sensitivity to basic cues like language and appearance and so on, but political hatred, for example, is certainly something that is learned, and we learn to direct our empathy more towards in-group versus out-group members on the basis of ideology, for example.

0:30:39.4 MC: And I do think that there is scope for changing the cognitive mechanisms of normativity, of morality, on a cultural scale, and I think that two mechanisms through which this would work, or two channels, might be what we could call representation and reinforcement learning. So representation: we are conformists in the sense that we learn through observation. When other people are doing stuff, we also feel inclined to do stuff. And one thing I've been thinking about in our work on social media discourse is, if algorithms promote a particular type of moral frame, and that's what everyone is seeing disproportionately in their feeds, can that influence, over time, the way that we think about moral situations?

0:31:47.2 MC: So for example, if there are archetypes of villains where stories of particular types of people committing bad deeds are more likely to be elevated, and we all see that, do we then lower the threshold in our own lives for seeing villainy from those types of people because that's what we're used to seeing in our feeds. And the reinforcement learning side is like, to the extent that we participate in the discourse, and then the algorithms reward us for that form of discourse, like outrage, the more outrage we express, the more likes and retweets we get. And also, of course, the algorithms are learning to show the content that we post to the audiences that are most likely to comment on it. And so what we have now that we've never had before is algorithms intervening on social learning processes and cultural evolution in a way that I think is new.

0:32:57.4 MC: And you know, in sort of discussions of technology, there's always a question, is it really new, though? What about the printing press? What about mass media? And yes, a lot of the effects that we're seeing are more of the same, and it's really important to consider all of the scholarship in media studies that predates the internet age. But I think that what is happening with algorithmic intervention on the discourse that we are having with one another is something that's new and is something that I worry about, because if you have an algorithm designed to make money for tech companies that is rewarding certain types of expressions, that's rewarding certain types of stories that we tell about right and wrong in our culture, then it seems obvious that through reinforcement learning, which is a really basic process that even sea slugs have, we're going to change our behavior over time and from generation to generation.

0:33:56.4 MC: What is it that we are teaching new generations about how to think about right versus wrong, and what does that mean for the future of politics and our ability to cooperate and solve all of these really thorny challenges that we face. So these are the things that keep me up at night.

0:34:16.0 SC: And even, maybe this is just too obvious, but even just the speed of the interactions, right. I mean, when I put a tweet up, within half an hour, I know what the reaction is. When I write a book it's going to be months. This is very frustrating.

0:34:29.3 MC: Yeah. Yeah, and the speed is a really, really interesting aspect of it, and I think that the speed has contributed to two things that are in opposition to one another. On the one hand, you have a rapid cycling of what you might call moral learning or moral evolution, where someone identifies a harm that hasn't been identified before, maybe it's gas stoves, maybe it's something else, and then there's discourse about it. And if that were taking place over a typical pre-internet-age timescale, there would be time to sort of digest and reflect. And what we have now instead is like, whoa, what is happening?

0:35:26.9 MC: It's really important to me to not be a shitty person and to know how to avoid harming others, and there's this constant bombardment of new things that we need to know about. And I think that one reaction is just: this is illegitimate. I think there is a backlash that comes from the speed, because we are used to social or moral progress or change happening over the course of decades, and what we are seeing now is a much, much faster timescale. And I think that there are some people who are just responding to this as if it is illegitimate. And the question of whether it is legitimate or illegitimate is sort of separate, but a lot of the backlash is like, oh, this is just woke cancel culture. This is a very common refrain: this is not real, this is just woke cancel culture.

0:36:34.4 MC: Like, let's unpack that. What do you mean? I think that there is something happening that I want to understand better, where there's this nihilistic kind of attitude towards new proposals about how to treat each other better. It's like, I just learned about it yesterday, therefore this can't be real. And I think that's a really interesting question to ponder.

0:37:01.6 SC: Yeah, the speed and the algorithms, these are two things working in concert in the social media world that we did not evolve to understand and deal with very well, so it will be interesting to see where we go forward. There is a related question there... Basically, there's something that I've said many times, and I don't know whether it's right or not, so I'm going to fact-check myself with you. As someone who is not a moral objectivist, a moral realist, I identify more as a moral constructivist. I think that we have moral inclinations, and that what morality is, is just sort of a systematization of those moral intuitions and inclinations, but then I also claim that we can think about it, we can do cognition, we can rationally reflect, and we can update and adapt our moral inclinations.

0:37:52.4 SC: So even though the wellspring of our morality is something built into us or learned at such a young age that we can't think about it, that doesn't stop us from being rational about it as we mature and hopefully think more wisely.

0:38:08.0 MC: Yeah, I think that that is reasonable. I think that the project of cognitive science is really to understand what are the constraints on reasoning and learning that can align or not align with a goal of progress, and how you define that is tricky, and it's broad. But I think a lot of times we think we're reasoning, but actually it's motivated, right? So how do you separate out... The process that I think most people have, which is like, I want to learn and I want to act in a way that is compatible with my own flourishing and the flourishing of people I care about and also the world in general, how does that goal get shaped in ways that we can't see by systems of power that trace back generations and act invisibly, and act in ways that are in many ways designed for us not to see them, right?

0:39:33.6 MC: So I think a lot about how our whole moral cognitive apparatus seems to be really set up to identify victims and villains, and all of the research in moral psychology is about how we judge the rightness or wrongness of individual actions, and all of the stories that we consume in the media about wrong-doing are pretty much, like here's a bad person who did a bad thing and is bad, and they should be punished and so on, but what about systems? What about systems that constrain people's choices such that they can't help but act in ways that cause harm to themselves and others, like how do you blame a system?

0:40:27.7 MC: Our minds are not really designed, I think design is the wrong word, but we don't have a good conceptual system for doing that. And I think the pandemic is a really good example of this. There was an article by Ezra Klein that came out, I think some time in the first year of the pandemic, that really influenced me. The title of the article was There Are No Good Choices, and within psychology, we had been talking a lot about how do we get people to social distance, or how do we get people to wear masks, how do we get people to individually behave in ways that don't infect other people and prolong the pandemic. And it was this very individualistic focus: individuals are bad if they don't wear masks and if they don't socially distance. But that's totally ignoring the fact that our society was not set up to deal with a pandemic, and our governments did not give us any good choices for living our lives during this phase.

0:41:38.2 MC: And so I think a lot of the questions that we focus on disproportionately in moral psychology and in ethics are like a red herring. A lot of this is not about individuals, it's about systems, and so how can we think more about that integration.

0:41:55.8 SC: It's a fascinating point, because it's parallel to the discourse about effective altruism. We had William MacAskill on the podcast, and my initial attitude is, well, if I'm going to be an altruist, I would rather be an effective one than an ineffective one, that just seems to make sense. But I do feel the strength of a critique that says they're putting all of their emphasis on charitable donations and individual good actions and things like that, and even if you agree with it, it is distracting from or covering up the much more severe systemic problems that we might imagine dealing with.

0:42:37.7 MC: Yeah, yeah.

0:42:39.7 SC: And maybe it's even... Is there a dark side of morality? Is morality something that can be used to cover up things that we might argue are bad? You mentioned sort of retribution as something that is part of our moral calculus in some way, punishment for evil deeds. And I've read you say that it's interesting, people will say we need to punish this person to prevent them from doing something bad again or to deter others, but really, we kind of like the punishment, we kind of like the idea that these people are being punished.

0:43:11.4 MC: Yeah, yeah, there are so many fascinating questions in thinking about motives for punishment, and we've done a lot of work on this in various guises. But the broader point from the study you mentioned is that most of our behaviors are multiply determined: we do something and there are multiple motives that are guiding us towards a particular choice. And then when we explain that choice afterwards, there are social forces that incentivize us to pick the reason that's the most socially desirable to explain our behavior. And I think over time, this creates narratives that can be self-serving.

0:43:54.0 MC: So in the case of punishment, we do have a taste for revenge; it is something that I think evolved in order to, at the group level, keep the social norm machine grinding along. But for individuals, it's not clear that revenge and retribution are helpful, either for teaching people in the long term to change their behavior for the better or for individual well-being. So there's this idea that revenge is sweet and satisfying, but actually there's data showing that if you hold on to grudges and revenge, that's harmful to your well-being as well.

0:44:44.4 MC: And so we did these experiments where basically we gave people an opportunity to punish someone who would never know that they had been punished. It was sort of like the economic game equivalent of waiters spitting in a customer's food in the kitchen before they bring it out. People are willing to do that: they will pay money to reduce the payoff of somebody who's behaved unfairly towards them, even if that person never finds out. But then when you ask them to report afterwards, oh, why did you punish, a lot of people are like, oh, I wanted to teach a lesson, even though in some of these cases, there is no lesson being transmitted.

0:45:25.9 MC: And in the case of a lot of altruistic behavior, there's evidence from our lab and other labs, and just looking out in the world, that people launder ill-gotten gains to make themselves feel better. So the Sackler family donated a bunch of money that was earned in nefarious ways, and I think also, in the domain of climate, greenwashing is such a prevalent issue, where you have these fossil fuel companies who really should be redirecting their resources away from fossil fuels, but instead they're spending a lot of money on public relations campaigns, like, oh, we really care about the climate, we really care about the environment, but meanwhile, behind the scenes, are still very much trying to extract as much carbon from the earth and burn it as possible.

0:46:30.3 MC: And the science tells us that we just can't do that, like, full stop, no. So there are a lot of ways in which moral narratives are used to strategically direct people's moral judgments about narrators' reputations away from the realities of the consequences of their choices, and we're interested in how moral narratives are essentially a kind of epistemic power move, because what they do is impose the narrator's preferred structure of causation and moral responsibility onto the audience, and many times the audience doesn't even know that that is happening.

0:47:21.0 SC: Okay, I'm totally stealing the phrase epistemic power move, I like that so much. That is absolutely capturing something real. And part of what it's capturing is even at just an individual level, forget about the corporations or whatever, people like to be a little judgy of other people, and maybe that's a tawdry impulse that we have, but if you can sort of dress it up as being morally righteous, then that makes you feel better, right?

0:47:49.0 MC: Absolutely. And there's some really nice work by Jillian Jordan and colleagues showing that when you condemn other people, you are seen as more trustworthy, so there is a way in which moral judgments and punishments act as a signal to other people that you yourself would not behave that way.

0:48:09.1 SC: Okay, I can't believe that we've gotten this far into the podcast without mentioning the words deontology and consequentialism, these are the first words I always say when we talk about morality. We had Josh Greene on the podcast, and he has this idea that basically intuitively we're deontologists, there are rules that we think govern right and wrong behavior, but he says when you get smart and you become more cognitive and rational about it, you always become a consequentialist and you think through the consequences of your actions. I don't... I find that a little bit hard to believe, a little bit too pat. I'm probably over-simplifying a very complex story here, but is there some way in which these categories of moral philosophy map on to the ways that we think about it?

0:48:51.7 MC: I think that question is hard to answer, because it's sort of a question that's begged by the design of the moral psychology literature. And up until fairly recently, the vast majority of research in moral psychology has been looking at how people reason about utilitarian versus deontological approaches to moral dilemmas, largely thanks to Josh, who was a pioneer in this field, and he found this really, really tantalizing question that was very tractable to the existing methods in psychology and neuroscience.

0:49:25.8 MC: I have a slightly different view from Josh, which is that I think there's good evidence to suggest that deontological reasoning is highly rational in a social sense, in that deontological reasons and decisions are seen as more trustworthy by social partners. So if you think about the demands of social relationships, especially close relationships, and the importance of relationships for our well-being and our ability to sort of function socially, most people don't want their spouse or their best friends to treat them just the same as any other person. The entire premise of close relationships is a kind of prioritization, and so what we find in our studies is that, especially when it comes to close relationship partners, people overwhelmingly prefer deontological reasoners over utilitarians.

0:50:36.6 MC: Now, that's slightly different when it comes to leaders or people who are in a role where it's their job to treat everyone equally. So we did some studies during the pandemic looking at trust in leaders who endorsed deontological or utilitarian approaches to moral dilemmas that arose during the pandemic. And here what we find is an interesting distinction. So utilitarianism can be divided into a positive dimension and a negative dimension. So the positive dimension we call impartial beneficence, it's the idea that everyone's welfare counts equally, and the negative side is instrumental harm, the idea that it's acceptable to harm one person in order to save many other people.

0:51:26.1 MC: And what we find is that across 22 countries that we surveyed during the pandemic, people prefer leaders who endorse impartially beneficent solutions to dilemmas around resource distribution. So we contrast a leader who says we should keep medical supplies, vaccines and so on at home, with a utilitarian leader who says we should send those resources around the world where they're needed most. And across these 22 countries, people trust the utilitarian leader more, the one who says we should have this sort of universalist approach to resource distribution during the pandemic. But when it comes to instrumental harm, it flips. So in dilemmas like allocating limited ventilators towards younger people versus elderly people, or imposing lockdowns that will trade off the welfare of different groups of people, we find that people distrust utilitarian leaders who say it's okay to sacrifice the welfare of one group of people in order to save others.

0:52:49.4 MC: So there's this nuance where it's not the case that utilitarianism is broadly preferred in leaders; it depends on the nature of the dilemma. But going back to the question of whether deontological reasoning is irrational, I don't think that's necessarily the case, especially if we think about rationality in an ecological sense. I think of deontological intuitions as something that evolved as a way to strengthen the fabric of society via close relationships.

0:53:30.3 SC: That's fascinating, that we tend to trust people who are deontological thinkers or actors at the personal level, maybe not at the leadership level. There was a funny tweet that went around a little while ago from someone who said, "I'm really looking forward to a superhero movie where the hero is a consequentialist, not a deontologist," 'cause you always see in the movies, it's like, "Yes, I will save you, even though it means the universe might be destroyed," and maybe I'm over-trained, but I'm always sitting there in the audience going, "No, actually, you should let your friend die, because the universe might be destroyed." But those are not our impulses at a very basic level.

0:54:09.0 SC: But it's not just our views of other people that matter here. One of the things that you've been thinking about is how morality affects our self-image. No one thinks that they're immoral, do they? I tend to think that everyone thinks they're moral and maybe they make mistakes and then justify it after the fact, but that it has to interplay with our image of ourselves as trying to be good people in the world. How does that work?

0:54:38.0 MC: So I think that on the one hand, we know a lot about this, and on the other hand, there's a lot we still don't know. What we do know is that morality is really crucial for our sense of self. Nina Strohminger had some nice studies where you ask people about hypothetical cases where someone has lost their sense of morality or lost their memories, and are they still the same person. And what they find is that people are more likely to say, oh, that's a different person now, if their morality has dramatically changed compared to if they've lost their memory. So we know that morality is really integral to our sense of identity, and as I was saying earlier in this discussion, getting excluded from the group is deadly.

0:55:34.6 MC: In our ancestral environment, it was literally deadly, you couldn't survive without the group, and nowadays it's still deadly: depression and suicide and all sorts of health problems are associated with loneliness and social isolation. So it is deeply, deeply important to be seen as moral by other people. Here's where I think that we are really stuck with how our culture talks and thinks about morality. We have this narrative template of there are good guys and there are bad guys, and bad guys do bad stuff to good guys. And if you juxtapose that with the idea that you must be one of the good guys, like, you can't be a bad... You cannot...

0:56:29.5 SC: No one puts themselves in that category, yeah.

0:56:30.8 MC: We have so many defense mechanisms to prevent ourselves from thinking of ourselves as the bad guy, and suffering happens, like shit happens, right? So those three things are incompatible, because if I am never the bad guy, and yet suffering is happening sometimes because of me, how do we deal with that? So I think that we are epistemically under-resourced to solve conflicts in our society, both at the interpersonal level and at the societal level. And it doesn't have to be this way. I'm a Buddhist, and I have spent many years studying meditation and Buddhist philosophy, and there are alternative ways of thinking about the self and thinking about social responsibility.

0:57:30.8 MC: This goes back to what we were talking about earlier around social structures as opposed to individuals being responsible for bad outcomes. And so I am really, really early in this project, but I'm very excited to think about the implications of changing the way we view the structure of the self and moral responsibility for interpersonal conflict and well-being and so on, because I think that the nut that we need to crack, so to speak, is this cycle that we're stuck in where like, oh, something bad happened, gotta find a villain, I'm not the villain, so you must be the villain. And that just is this vicious cycle that creates these intractable conflicts both in close relationships, but then also between social groups and in politics and so on.

0:58:25.1 MC: And so we have to find a way to loosen the grip of the moral self and binary views of good guys and bad guys if we're going to solve these really, really complex problems that involve systems, and a lot of suffering that we all really want to trace to one person, so we can blame them and punish them and it will all be better. But that's not actually necessarily going to solve the problem.

0:58:52.0 SC: And blaming the president for the price of gas, like sometimes not only is it good guys and bad guys, but just the idea that there is a person to blame for things that happen is not always accurate.

0:59:04.9 MC: Exactly, exactly.

0:59:08.1 SC: And yeah, but okay, that's a big project to fix that, that sounds like something that is deeply embedded in human psychology. I was going to say the ancients could at least rely on mischievous gods and spirits to blame for bad things that happened, but...

0:59:22.8 MC: Maybe that's a better system. Maybe we should go back to that, I don't know.

0:59:27.1 SC: Well, part of it is that we're not good at dealing with randomness, either. Some bad things happen and it's nobody's fault, not even the system's fault. Is that something we can train ourselves to be better at? I mean, if you're interested in the practical side of things as a psychologist, how much do psychologists come up with ways that human beings can train themselves to be better at avoiding some of these mistakes?

0:59:53.7 MC: I mean, we should do more of it, I think. I think that the incentive structure of academia is tilted more towards coming up with your own pet theory and promoting it and getting lots of other people to believe in it than actually solving problems.

1:00:08.9 SC: That's one of the flawed systems, yes.

1:00:10.6 MC: But education, education is, I think, the best tool we have, and I recently had the mind-boggling, still can't really believe it happened, privilege to meet His Holiness the Dalai Lama at his residence in Dharamsala. And we talked about all of this stuff, the challenges that we're facing, perpetual conflict, and why is it that we are so stuck in views of us versus them, when really, we are all in this together. And one of his main agendas, it seemed, from those few days was that education is everything. With each new generation, there is an opportunity, because we are such prodigious social learners, and because culture, just as much as, maybe even more than, nature and nurture, is going to shape our behaviors and attitudes towards one another. And so with every new generation, there are opportunities to give people the tools and resources to think about themselves and others and problems in the world and morality in ways that might be better than what we have now.

1:01:20.7 MC: But the problem, of course, is that that discourse is shaped by systems that are not optimizing for well-being, they're optimizing for profit, and so how do we break free of that? That is a very, very thorny question.

1:01:41.2 SC: You'd probably need a sabbatical to really tackle that one comprehensively. But there's also a reflection here of a problem we talked about at the very beginning, of thinking of science as completely objective and so forth. I'm entirely on board with saying that education is good and there should be more of it, but there's education and there's education. What are we educating people about? It's not just the existence of education; our educational system doesn't always prioritize the kinds of things that I think you're pointing at that would help people navigate these situations.

1:02:12.4 MC: And it also doesn't get enough money in this country.

1:02:14.8 SC: Well, that's also true.

1:02:19.0 MC: It's severely under-funded, and that is outside of my expertise, but clearly, clearly a problem.

1:02:25.9 SC: Well, near the end of the podcast, we always let ourselves be a little bit more speculative and wild, so you already mentioned...

1:02:34.9 MC: Have I not been speculative and wild?

1:02:37.4 SC: Not enough, nope.

1:02:37.5 MC: This whole time? I kind of feel like I have.

1:02:39.0 SC: Well, we glanced at this and I wanted to come back to it: the idea of psychoactive substances. I mean, you mentioned that an injury, a brain injury, can change a person's memory, but also their sense of morality. So can drugs, or maybe even diet or something like that. I mean, how should we feel about that if taking a certain drug... Well, first I should ask you the question: can taking certain drugs change our views of morality?

1:03:07.9 MC: Yes, but it's not clear how long the effects last and how large the effect is. So there are many lab experiments, some of which I have done, that show you can sort of shift around people's moral judgments and behaviors if you change their neurochemistry acutely. But of course, our neurochemistry is being affected all the time by things like stress and our diet, things out in the world. So it's a moving target. I think there's an important distinction between what these substances are doing to the brain in real time versus the type of knowledge you can gain, or as Laurie Paul would say, the epistemic transformation that can occur during an experience with a mind-altering substance. Psychedelics, I think, are the prime example of this: they can dissolve your sense of self, they can open up a really profound sense of love and connectedness and boundlessness with the entire universe.

1:04:20.8 MC: And many people who have this kind of experience report that it is epistemically transformative, it changes the way that you see yourself in relation to others. And then following on from that, it's personally transformative in that it can change your values. And so those effects are lasting well after the acute effects of the substance have worn off, and that's what I think is really exciting about the potential of psychedelic substances. Unlike other pharmacological treatments in psychiatry, where if you're depressed, you'll get prescribed an SSRI and you have to keep taking that drug indefinitely to experience the effects, which arguably are maybe not even what they're cracked up to be...

1:05:13.1 MC: But with psychotherapy assisted with a psychedelic substance, there can be changes to the narrative of the self and the way that you understand reality that have potential, I think, to carry on well beyond the time that you're high.

1:05:35.4 SC: So that's extremely interesting, and I'm certainly very willing to believe that even after the neurological effects of the drugs have worn off, you report a different sense of self or whatever, but I want to question the self-reportedness of that. Is there evidence that people actually act differently long after they've done this? Maybe I'm just being overly cynical here, but I have this question about meditation and mindfulness. I know people who have been very into this, and they claim it's very, very transformative. To me, they act the same as they always did, and so maybe I am not actually that perceptive. Anyway, you're the expert here.

1:06:19.6 MC: Yeah, so you're right that quite a lot of the research that has been done thus far on psychedelics and meditation and attending mass gatherings like Burning Man, like a lot of that is relying on self-report. And a lot of the work that we are doing in the lab is bridging the self-report with behavioral measures that some people might take as more valid or convincing. So there's less data on this, but so far the data does not seem to be dramatically divergent from self-report data, although it is much easier to say that you think you've changed than to actually demonstrate change.

1:07:10.3 MC: And one new line of work where we're doing this is actually about the ability to introspect. So quite a lot of work suggests that meditation improves your ability to understand the inner workings of your mind, but the way that that has been measured is self-report questionnaires, like, how well do you understand your mind? And after people go through meditation training, they say, better. But how do you verify that? And so in my lab, there are several folks who are working on developing more objective measures of introspective accuracy: we have people make a series of decisions, and we can use computational models to extract the processes that they use to make those decisions. Then, separately, we ask people to self-report how they made those decisions, and we can see how well those match up.
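To make that pipeline concrete, here is a minimal sketch of the logic, assuming a simple logistic choice model fit by maximum likelihood; the task, the decision weights, and the self-report numbers below are hypothetical illustrations, not the lab's actual models or data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical task: 200 choices between two options, each described by two
# attributes (say, payoff for self and payoff for other). The simulated
# "participant" decides via a noisy weighted sum of the attribute differences.
n_trials = 200
features = rng.normal(size=(n_trials, 2))   # attribute differences per trial
true_weights = np.array([1.5, 0.4])         # latent decision weights
p_choose = 1 / (1 + np.exp(-(features @ true_weights)))
choices = rng.random(n_trials) < p_choose   # True = chose option A

def neg_log_likelihood(w):
    """Negative log-likelihood of the observed choices under weights w."""
    p = 1 / (1 + np.exp(-(features @ w)))
    eps = 1e-9  # guard against log(0)
    return -np.sum(choices * np.log(p + eps) + (~choices) * np.log(1 - p + eps))

# Recover the decision weights from behavior alone.
fitted_weights = minimize(neg_log_likelihood, x0=np.zeros(2)).x

# Self-report: the participant rates how much they relied on each attribute
# (hypothetical numbers on the same scale).
self_reported = np.array([1.2, 0.9])

print("fitted from behavior:", np.round(fitted_weights, 2))
print("self-reported:       ", self_reported)
```

Across many participants, the agreement between the fitted and self-reported weights (for instance, their correlation) would then serve as one behavioral measure of introspective accuracy, rather than relying on the self-report alone.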

1:08:07.7 MC: So I think that it's a really exciting time in cognitive science because there are lots of new methodological avenues for verifying the link between self-report and behavior, and I think, actually, that self-report... I'm just going to come out and say it, I think that self-report is underrated.

1:08:32.1 SC: Okay, good.

1:08:33.8 MC: And I think that it has been very popular in the behavioral sciences to not trust it, but I think that is an over-simplified view, and we have data that will be coming out that calls that view into question. I think the really interesting question is: how can we bridge the very, very rich reports that people can make about themselves and their understanding of the world with what we might call more objective measures? Understanding this relationship between subjectivity and objectivity is something that I'm really, really curious about.

1:09:20.6 SC: So at least tentatively, you are willing to believe that people who practice mindfulness or meditate are actually better at introspection than those who are not.

1:09:31.4 MC: We have not run those studies yet, but we are... We are planning...

1:09:33.8 SC: But at least it fits in with the idea that the self-reporting is somehow accurate, if that's true.

1:09:39.6 MC: Yeah, well, and also the entire premise of clinical psychology is that psychotherapy is helpful and that people are able to report on their thoughts and feelings and learn to reason about them, and in particular, learn to recognize when the patterns are divorced from experience. So yeah, I think it's a really interesting set of questions.

1:10:07.6 SC: I guess maybe I'm spoiled by... Or, you know, prejudiced maybe is a better word, by being very convinced that psychedelics and so forth can have great therapeutic effects and help with depression and PTSD. But then I also hear people claim to get insights into the fundamental nature of the universe and cosmology and physics, and I know that those are all just kind of nonsense, so I have to be a little bit skeptical of these overarching claims.

1:10:41.3 MC: Maybe what they mean by that is different from how you understand it and how your understanding of the structure of the universe and cosmology is informed by your training and research as a physicist. And maybe what people mean when they say they understand the nature of the universe is more about how do I fit into this and what is the meaning of my life and what is the point of it all? So I think there's room for both.

1:11:12.9 SC: "There's room for both." That's a very good... I can't argue with that. That's a very good place to end. So Molly Crockett, thanks very much for being on the Mindscape podcast.

1:11:20.5 MC: Thank you so much. This was a really fun conversation.

[music]

6 thoughts on “227 | Molly Crockett on the Psychology of Morality”

  1. It is great that you provide these transcripts of the episodes, but people whose eyesight is not really excellent can have trouble with them because of the brain in the background. If you really must keep it in there, could you please provide an option for the user to remove it? (That option might at the same time increase the contrast between the text and the background.) Thank you!

  2. Hello Molly & Sean
    On meditation or mindfulness being transformative and Sean’s friends being seemingly unchanged, I would ask:
    “How much did they IMMEDIATELY attempt to use their new action space, compared to others within the group, or other groups?”
    When trying to increase the range of mobility of a joint, the most important practice to effect change is perhaps to:
    Immediately use that new space
    Immediately.
    You must engage the joint while it is primed for change to see any noticeable gains.
    There is a temporal component that cannot be ignored.
    1-Reset – Alignment
    2-Restore – Open Doors to Potential Space
    3-Reprogram – Use That Space
    My assumption is that there is something like this going on with attempts to change the neurology of a brain instead of the neurology of a joint.
    How much reprogramming did Sean’s friends immediately engage in?
    Would it have mattered?
    Molly, I cannot wait to see your paper when it comes out.
    Wishes
    John

  3. On the Morality of Psychology.
    Whose psychology? Behavioral? Kantian? Cultural? I am skeptical of the use of evolution here, and the idea that morality must be fitted to species survival is not morality but its absence.
    Looking for a TOE morality, a Theory of Everything or universal morality, is equally bankrupt.
    Substitute the word “magic,” and nothing changes; it only works in context.
    Btw, Burning Man is paid for ‘more expansion’, somewhat like a yoga retreat.

  4. Science can’t tell you what objective morality is. Science can only provide data on what people report their moral values to be.

    Molly Crockett wishes she knew more about philosophy but doesn’t realize that philosophy doesn’t provide any means to discover objective morality either. Moral philosophy is just a collection of individual subjective opinions about morality. It can’t tell you what objective morality is because there is no objective morality, and the very idea that there could be is preposterous to begin with. Moral values and judgments may be individual, social or cultural, and they differ widely among individuals, groups and cultures. But they are not imposed by the universe. In conservative Muslim cultures it is deemed immoral for women to show their faces in public, while at the Esalen Institute everyone can walk around naked. In the Russian Army it is considered “good” to kill Ukrainians. In Ukraine it is considered good to kill Russian soldiers, and in Nazi Germany it was considered “moral” to exterminate Jews.

    So where does such wide variation in moral values come from? Despite what Crockett says, first and foremost, moral values come from self-interest. People spend much of their time sorting events and acts into “good” and “bad” things. “Good” things are things that benefit the individual or his family, social group or tribe, and “bad” things are things that are harmful to the people in question or that they don’t like. People’s value judgments are always self-interested because self-interest always underlies the motivation for those judgments. People may value helping others, but that value is also a self-interested value judgment. Crockett misunderstands this because she doesn’t understand what self-interest is. Self-interest includes everything you value. You may value getting rich or you may value helping others, but both values are equally self-interested because they reflect what you want to happen. So moral values are, despite what Crockett says, always self-interested. She acknowledges the role of motivated moral reasoning and that people never think they themselves are “bad,” but she doesn’t seem to understand that moral reasoning is always motivated reasoning.

    Crockett’s work involves experiments in areas such as whether serotonin levels affect moral judgments and behavior. Of course they do. Moral judgments can be changed or influenced by any number of circumstances. But such experiments tell you nothing of importance about morality except that it is subjective and changeable, and you don’t need a serotonin experiment to demonstrate that self-evident fact. Crockett also takes a tired woke stance on “systems of power” that influence moral judgments and behavior. But this just means that culture affects and enforces group morality, a point that will come as no surprise to anyone. In short, Crockett’s insights seem scattered and unsystematic, and she seems to lack the confidence to express views on where morality comes from, a subject Tomasello covers well in his books. Fortunately, Sean has a far deeper understanding of how morality evolves than his guest and was therefore able to make an interesting dialogue with the well-meaning Crockett despite her lack of depth and insight.

  5. Pingback: Sean Carroll's Mindscape Podcast: Molly Crockett on the Psychology of Morality - 3 Quarks Daily

  6. During the podcast Molly Crockett commented “So utilitarianism (the doctrine that actions are right if they are useful for the benefit of a majority) can be divided into a positive dimension and a negative dimension. So, the positive dimension we call impartial beneficence, it’s the idea that everyone’s welfare counts equally, and the negative side is instrumental harm, the idea that it’s acceptable to harm one person in order to save many other people.”
    That brings to mind the so-called “Trolley Dilemma,” an ethical thought experiment where there is a runaway trolley moving down railway tracks. In its path, there are five people tied up and unable to move, and the trolley is headed straight for them.
    People are told that they are standing some distance off in the train yard, next to a lever. If they pull this lever, the trolley will switch to a different set of tracks, but will kill one person who is standing on the side track.
    The people have the option to either do nothing and allow the trolley to kill five people on the main track, or pull the lever, diverting the trolley onto the side track where it will kill one person.
    The results showed that, overwhelmingly, people in Europe, Australia and the Americas were more willing than those in Eastern countries to switch the track, or to sacrifice the man, to save more lives.
    In Eastern countries such as China, Japan and Korea, there were far lower rates of people likely to support this ‘morally questionable’ view. …

    Ref: Trolley dilemma: when it’s acceptable to sacrifice one person to save others is informed by culture
