220 | Lara Buchak on Risk and Rationality

Life is rich with moments of uncertainty, where we're not exactly sure what's going to happen next. We often find ourselves in situations where we have to choose between different kinds of uncertainty; maybe one option is very likely to have a "pretty good" outcome, while another has some probability for "great" and some for "truly awful." In such circumstances, what's the rational way to choose? Is it rational to go to great lengths to avoid choices where the worst outcome is very bad? Lara Buchak argues that it is, thereby expanding and generalizing the usual rules of rational choice in conditions of risk.

Support Mindscape on Patreon.

Lara Buchak received a Ph.D. in philosophy from Princeton University. She is currently a professor of philosophy at Princeton. Her research interests include decision theory, social choice theory, epistemology, ethics, and the philosophy of religion. She was the inaugural winner of the Alvin Plantinga Prize of the American Philosophical Association. Her book Risk and Rationality proposes a new way of dealing with risk in rational-choice theory.

0:00:00.9 Sean Carroll: Hello everyone, welcome to the Mindscape podcast. I'm your host Sean Carroll. A couple years ago, a little more than a couple years ago, episode 86 of Mindscape, we had Lord Martin Rees on the show. Martin is a very well-known and respected theoretical astrophysicist, but he's also someone who worries about the future of humanity. So we were talking about existential risks, the prospects of post-humanity, stuff like that. And naturally we talked about one example that was in the news not long before, which was the Large Hadron Collider, the giant particle accelerator in Geneva. We don't remember the controversy now because the disaster didn't happen, but people were worried at the time about the chance that you would make a tiny black hole or a new kind of particle that would utterly destroy the Earth. And physicists of course worried about this too. They don't want to die because some particle physics experiment went terribly wrong, so they thought about it, they thought about it really hard, and they said, as far as we can tell the chances of this happening are just incredibly, incredibly tiny, we should just go ahead. There were people out there who said, what do you mean incredibly tiny?

0:01:09.3 SC: How tiny does it have to be if we're really talking about the end of all life on Earth? And Martin Rees had an interesting rejoinder to that. He said, well there's also a probability that you will find a free energy source that will completely remove our dependence on foreign oil forever. This is very, very unlikely to happen at a particle accelerator like the LHC, but it's possible. And his point was, you know, once you're going into these tails of distributions where things are super, super unlikely, it's hard to say what the probabilities are and hard to judge them. His judgment was you should just go on with something like the LHC. I thought that was an interesting argument, but it does raise some questions about how we deal with huge terrible risks, or even how we deal with minor risks. When we are facing a choice between basically a safe bet versus something that might be okay but has a chance of being really good or a chance of being terribly bad, how do we reason about that choice rationally? How do we weigh the possible risk of things going terribly wrong? So today's guest, Lara Buchak, is a philosopher at Princeton University and she is the expert in the world on exactly this problem.

0:02:31.7 SC: She has a book called Risk and Rationality where she basically addresses some lingering paradoxes in the theory of rational choice. In the theory of rational choice, of course, if you have something that you value at 100 and something else you value at 50, you're gonna prefer the 100 over the 50. But when probabilities become involved, how do you weigh a 50% chance of something you value at 100 and a 50% chance of getting zero, versus a 100% chance of getting something you value at 50? And of course there's a lot of discussion in the rational choice literature about what it means to value something. It's not just money. You certainly value the second million dollars less than the first million dollars that you might ever make. That's understood. That's okay. But there's a general feeling that there is some way to assign value to things, and then when you're faced with a probability, you just take the expectation value. You take the average. You multiply the reward times the probability and that's how you should weigh it. And Lara's point is that no, you shouldn't.

0:03:42.1 SC: There is a separate thing to keep in mind, which is how risk averse you are. So her point is that it's not irrational to either be super super averse to taking risks or to not care that much. It's not something that you can subsume into a rational choice calculation. It's a separate thing to keep in mind. People have values for different things that can happen. They also have an aversion to the worst possible thing happening. That is a risk aversion factor that it's okay to introduce into your theory about how to reason in cases of uncertainty. We're all faced with cases of uncertainty all the time. The LHC is an example, but where do you go for dinner? What job do you take? Who do you go on a date with? These are all things where you're weighing probabilities because you don't know exactly what's going to happen. So this kind of reasoning, this is philosophy that really helps you in the real world. So you'll see what some of the answers are and you may disagree, but that's the risk you'll have to take. Let's go. Lara Buchak, welcome to the Mindscape podcast.

0:05:09.3 Lara Buchak: Thank you. Thanks for having me.

0:05:11.0 SC: I have to ask, you know, when you tell people that your professional expertise is in rationality, does this put like a lot of pressure on you to never do something silly like, you know, forget your keys or things like that? Do you feel like you have to be the most rational person at all times?

0:05:29.0 LB: Yeah, exactly. In fact, when I tell people my specialty is decision theory, people who know me laugh because I'm terrible at making quick decisions. In fact, there's this joke that people study the thing they're bad at. So, you know, other people knew how to make decisions, so they didn't have to study it. I had to get a PhD and I still don't know.

0:05:50.3 SC: Yeah, I mean, famously moral and ethical philosophers are not always the most moral or ethical people, right? We hope they are, but there's no real correlation there.

0:06:00.0 LB: That's right. Yeah.

0:06:00.3 SC: So what do we mean by rationality? Like, I think that there's a technical sense that you probably have in mind that might be a little bit different than the person on the street has in mind when you say something like that.

0:06:09.9 LB: Yeah, good. So there's two things.

0:06:11.9 LB: Two different senses of the word rationality, even in philosophy, and one is just purely consistency. So you can think of it by analogy to logical consistency. If you believe P and you believe P implies Q, but you don't believe Q, you're irrational purely in the sense of being inconsistent, regardless of what the content of P and Q is. So that's the kind of rationality decision theorists study. The other kind of rationality we might call substantive rationality, and an example of that is, you know, if you believe that aliens have invaded your body, that might be perfectly consistent, but we think that's not a good belief to have.

0:07:01.1 SC: Okay.

0:07:03.3 LB: Doesn't match the evidence or something like that.

0:07:04.5 SC: Doesn't match the evidence or something like that. That last one, on the one hand, sounds more, I don't wanna say more important, but at least closer to the average person's thought about what rationality is, but at the same time maybe a little bit loosey-goosey? I mean, do we have quite as rigorous a definition of what that means?

0:07:21.1 LB: We certainly don't have a rigorous formal definition of what that means.

0:07:24.9 SC: So and you, are you studying mostly the first part, the nice logical philosophical part?

0:07:29.4 LB: Yep, studying mostly the first part. And it is, you know, maybe it seems a little bit far removed from what we initially care about, but if you sort of think about consistency and decision-making, the big question is, if you already know what you want and you know how likely various things are to happen, then what should you do? And that's really a question of being consistent with your beliefs and desires. So when described that way, I hope it sounds like something that most people are in fact interested in. They want to know, I like these things, what should I do about that? Rather than which things to like. I can't help you with which things to like, unfortunately.

0:08:14.7 SC: It almost sounds like something where, if you don't think about it too hard, you might get the idea that no one should disagree about this. I mean, is there a minority school of thought saying we should be inconsistent and irrational all the time? Or.

0:08:29.4 LB: Maybe a very small minority, but we don't actually know who they are.

0:08:32.2 LB: No, in fact, everybody thinks we should be consistent, but the real debate is about what consistency means.

0:08:39.7 SC: Right, okay. And you also mentioned decision theory, which is that the same? Is that a subset of being rational, or is that like a neighbor?

0:08:46.9 LB: Yeah, so we can have rationality in a number of different areas. There could be rationality in what you believe, rationality in what you desire. Decision theory is about rationality in what you choose or what you prefer. And really, it's about rationality in what you choose or prefer when you're in a condition of risk or uncertainty. So when you don't know what the world is like.

0:09:14.5 SC: Which is most of the time.

0:09:16.5 LB: Which is most of the time, unfortunately.

0:09:19.1 SC: Okay, so that is the tricky part. And, you know, I've talked to people like Julia Galef on the podcast, the people who are sort of, you know, professional rationality boosters, but not within academia quite. And they talk a lot about Bayesian theorizing and updating on evidence and things like that. Does that come in also to either the rationality part or the decision theory part?

0:09:41.0 LB: Yep, yeah. So one part of decision theory. So decision theory involves, at the very least, probabilities, which measure how likely you think various things are to happen, and utilities, which roughly measure how much you like various things. And so the Bayesian updating, that all falls into the probabilities side. So one thing we have to know before we make a decision is what should my probabilities be, especially given the evidence, and how should my probabilities respond to evidence. So that's part of decision theory. That's not the part where I've argued for something that's different from the mainstream.

0:10:19.8 SC: And I guess, you know, still laying the groundwork a little bit here, how concerned is a philosopher like yourself with the reality of human psychology? Like, are you trying to describe how rational people actually behave or are you content to just think like this is how you should behave if you were rational? I don't know how people actually behave.

0:10:43.1 LB: Well, I'm trying to describe how people should behave, but how people actually behave is relevant because if we've sort of, if people behave very differently from the theory and we don't have a good explanation for that, then we're gonna want to investigate whether the theory is actually correct in that place.

0:11:04.7 SC: That sounds like remarkably like science rather than philosophy.

0:11:08.6 LB: Yeah well they're maybe coextensive.

0:11:13.8 SC: I'm happy to believe that myself. But so yeah, so in particular we might imagine that the armchair thinking about rationality leads us to a certain picture and then we notice people not behaving that way and then we go, oh actually we made a mistake, we can be better at describing what it means to be rational.

0:11:28.9 LB: Yeah, or at least we need to, but how people behave isn't directly relevant to what rationality is. It's just sort of like a datum and we either have to say, oh that's a way in which people behave irrationally and here's why that's irrational, or we have to incorporate it into our theory. So we won't know which thing to do until we investigate further.

0:11:54.6 SC: Is there any guidance other than consistency to what we would eventually dub to be rational?

0:12:01.2 LB: Not in decision theory, but consistency turns out to be a pretty difficult notion to say what it means and it actually turns out to have a lot of content. So consistency really dictates what you should do pretty strongly.

0:12:15.4 SC: Okay, that's very interesting. All right, so let's get into the meat of it then a little bit. It might seem once again, if you were very naive, you would say, what's so hard? I prefer A to B, I'm gonna do A, that's my decision theory, but where it becomes interesting is when uncertainties come into the game, as you already mentioned.

0:12:34.3 LB: Exactly.

0:12:35.7 SC: So how does that happen? How do we think in conditions of uncertainty?

0:12:38.7 LB: Well, so I like A more than B and B more than C and I have the following choice. Do I want to just get B for sure or do I want to have some chance of A but some chance of C? Which of those two things is better? And just knowing how much I like A, B, and C isn't going to tell me which of those two choices I should make.

0:13:00.0 SC: So what do we do?

0:13:00.8 LB: So what do we do? Well, first of all, we have to know or estimate the probabilities of A and C. So for example, maybe I'm...

0:13:11.4 LB: Let's put some numbers on this. Maybe I like $100 more than I like $50 more than I like $0. So I want to decide between just getting $50 or taking a coin flip of a fair coin between $100 and $0. So yeah, and so what should I do? Well, the first step is to figure out what decision theorists call my utilities. That's how much I like the various amounts of money. The way that we typically do that is to reverse engineer them from your preferences. So I know lots of your preferences over a lot of different things. From those preferences, maybe I discover that your utility function is linear in money. So you think $50 is better than $0 by as much as you think $100 is better than $50. Now, according to the standard theory, the one that I argue against in my book, that's actually all the information we need. Once we have your utilities and your probabilities, we can figure out exactly what to do. So on the standard theory, you should maximize expected utility or average utility.

0:14:41.5 LB: So the value of a gamble that has a 50% chance of giving you 100 utility and a 50% chance of giving you zero utility is just going to be 50. Multiply 0.5 by 100, multiply 0.5 by zero, add them together. And the utility of just a straightforward $50 is going to be 50. So according to expected utility theory, you have to be indifferent between those two choices. At least given that your utility function is linear. Now, if you're not indifferent, the expected utility theorist is going to say, well, then your utility function isn't what we thought it was.
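
For concreteness, here is a small sketch of the expected-utility arithmetic Lara is describing, assuming a linear utility function where utility just equals dollars:

```python
# Expected utility: sum of probability times utility over the possible outcomes.
# Assumes a linear utility function u(x) = x, as in the example above.

def expected_utility(gamble, u=lambda x: x):
    """gamble is a list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in gamble)

coin_flip = [(0.5, 100), (0.5, 0)]   # 50% chance of $100, 50% chance of $0
sure_thing = [(1.0, 50)]             # $50 for sure

print(expected_utility(coin_flip))   # 50.0
print(expected_utility(sure_thing))  # 50.0 -> indifferent, on the standard theory
```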

0:15:25.4 SC: Right. And so in particular, many people would have utility function that has less utility for the second $50 than for the first.

0:15:35.1 LB: That's right. That's right. So if you prefer the sure thing $50, as most people do, then the explanation, according to the expected utility theorist, has to be exactly what you said: that I like the first $50 more than I like the second $50. Which means on average, that coin flip will have less utility.

0:15:57.3 SC: But the idea is that even if that's true, even if you'd rather have $50 than a 50 50 chance of zero or 100. That's no worries for expected utility theory, because it's just that utility is not the same as money. And we can still have something called utility.

0:16:13.9 LB: That's right. That's right.

0:16:16.4 SC: Is that true? Like, we're not even getting into your stuff yet. But I've always felt a little bit nervous at the idea that there's something called utility that is a number that we can just attach to different outcomes. I know that maybe we can reverse engineer such a thing. But how is that an innocent move? Or is that fraught with peril?

0:16:39.7 LB: Yeah, great question. Well, there may be two moves there. And one is the move that we can have a number that we reverse engineer from your preferences that describes how much you like something, and that's different from your money. The second move that people often make is something like, and utility doesn't describe anything in the head or in the world. It's simply a representation of your preferences. So it's meaningless in itself. I tend to think the first move is innocuous. It describes something pretty intuitive, right? So like, how much I like money depends on how much money I already have. That's a pretty good intuition, and we want to be able to capture it with our theory of decision making; maybe which coin flips a millionaire takes is going to be different from which coin flips someone in poverty takes.

0:17:42.6 LB: So that seems innocuous. The other move, saying, well, we don't need utility to describe anything in the head, or we don't need the numbers to sort of fit intuitively with what you think your utility function might be, maybe that's a little bit less innocuous. So that kind of move would say, okay, Sean, you prefer the $50 to the coin flip. You think you don't like additional dollars any less, the more money you have. But as it turns out, your utility function says otherwise. So you're really bad at figuring out what your utility function is.

0:18:30.0 SC: Right? I mean, maybe, maybe you just answered the question I wanted to ask. But let me ask it anyway, in case there's another angle here, which is, I worry about the circularity of it. Like, if I already had perfectly rational preferences, then I could imagine I could reverse engineer utility function to represent them. But in some sense, you're defining what it means to have perfectly rational preferences by I'm always going for the thing that has greater utility. So if real people don't have perfectly consistent preferences, what do we do? How do we even talk about this?

0:19:02.2 LB: Good. Yeah. Good. That's, I mean, that's a great question. And it's, it's a question that's received a lot of attention. An answer you don't want to give is that unless someone is perfectly rational, they don't have a utility function at all.

0:19:17.0 SC: Okay.

0:19:18.0 LB: And then our theory wouldn't be very useful, right?

0:19:21.4 SC: Some of my best friends are pretty irrational. So yeah.

0:19:23.6 LB: But they still like things, right? So we want to still say they have a utility function. So I think a better answer is that often, many of your preferences, or many of your central preferences are pretty consistent. And you do have maybe good access to your central preferences. And then we can figure out what your more complicated preferences should be from them. For example, maybe if I ask you about a bunch of coin flips between dollar amounts, maybe that's something you have pretty good access to and can tell me about and then I can figure out your utility function from that. And then I can then use your utility function when I advise you how to play poker, or how to invest in the stock market, because those are much more complicated decisions that you don't have any intuitive preferences about.

0:20:22.6 LB: So I kind of like to think of it as there's this first step of determining your utility function from a smaller subset of your preferences.

0:20:31.9 SC: Okay.

0:20:32.8 LB: Ones that you're pretty sure about, and if those are inconsistent, maybe you can figure out which ones you want to keep and which ones you want to jettison. And then, second step, tell you what to do in these more complicated actual decisions.
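
A toy sketch of the two-step procedure Lara describes, under the standard expected-utility assumptions; the certainty equivalents below are made-up illustrative answers, not real data:

```python
# Step 1: infer a utility function from a few simple preferences you're sure of.
# Suppose you say a 50-50 flip between $0 and $100 is worth exactly $40 to you.
# Normalizing u(0) = 0 and u(100) = 1, that pins down u(40) = 0.5, and so on.
calibration = {0: 0.0, 40: 0.5, 100: 1.0}   # dollars -> inferred utility

def u(x):
    """Piecewise-linear interpolation between the calibrated points."""
    pts = sorted(calibration.items())
    for (x0, u0), (x1, u1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
    raise ValueError("outside the calibrated range")

# Step 2: reuse the inferred utility function on a more complicated gamble.
complicated = [(0.2, 0), (0.5, 40), (0.3, 100)]
print(sum(p * u(x) for p, x in complicated))   # 0.55
```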

0:20:46.3 SC: Okay, that makes sense, because that's related to what I was going to worry about, like whether or not people even implicitly actually do anything like this. And maybe what you're saying is, they do something close to it for the big important straightforward decisions, and then that implies a certain way they should be acting rationally in the fringe cases, even if maybe they're not as good at that.

0:21:09.8 LB: Yeah, that's exactly right. Yeah. And I also don't want to say, I mean, I don't think people have anything as precise as the utility and probability numbers that we use in the theory. But I still think the theories are useful because they can tell us things in broad brush. So maybe we shouldn't use the theory to distinguish between how you feel about a coin flip with a 60% chance of heads and a coin flip with a 61% chance of heads. Okay, maybe you don't actually have any feelings about those two things that are different from each other. But if we can sort of, in general, draw out lessons from the theory, like here's a lesson on the classical theory. If you are an expected utility maximizer, then you're always going to want to make decisions on the basis of more information rather than less. Like that's a pretty simple principle that we have to do this complicated math to discover. But the theory is sort of useful for giving us insights like that, even in cases where the complexity of the theory outruns the actual complexity of our heads.

0:22:25.3 SC: I know that in some circumstances, in some contexts, people will make a big deal about the fact that human beings are bounded in their rationality and therefore we satisfice instead of optimize.

0:22:38.8 SC: Satisficing being some weird combination of satisfying and... what is the rest of satisfice made out of?

0:22:48.9 LB: Well, that's a good question. I don't know.

0:22:52.6 SC: It's a terrible word anyway, but like, we do good enough and then we stop because it would take a lot of...

0:22:55.5 LB: Like satisfy but mathy.

0:22:57.1 SC: Yeah, exactly. But well, yeah. But, but so is that part of the being a decision theorist or a rationality theorist game that the real people like don't do all this complicated math, but they kind of approximate it in some way?

0:23:12.2 LB: Yeah, so ultimately decision theory is about what's rational to do or what's rational to prefer, but it doesn't need to be about what's the best method to use to make decisions. So even if we think expected utility maximization is the right theory of rationality, it might be the case that you nonetheless shouldn't use it in a lot of decisions. For example, a car is coming at you, you need to swerve to the left or the right. If you bothered to calculate the expected utilities, it would take too long and you'd just get hit by the car, right? So it's not a good decision making method in all circumstances. So there's a separate question of what are the best rules or heuristics that are going to help us approximate ideal rationality?

0:24:02.5 SC: Is it analogous to the issue in political theory about ideal theory versus sort of real world stuff?

0:24:09.4 SC: Like we can imagine what the perfect society is and then move toward it, but then other people are saying it's not even worth imagining the ideal society because our minds might change along the way.

0:24:19.6 LB: Yeah, that's, that's one analogy. But decision theorists really are interested in ideal rationality first and foremost. I think maybe a different analogy is the paradox of hedonism. So if you're a utilitarian, you think that, or a hedonist, you think that the best moral actions are the ones that produce the most pleasure in the world, right? Nonetheless, that wouldn't be a very good method for making decisions about what to do. You wouldn't say, oh, hello person, let's be friends because my being friends with you would maximize the amount of pleasure in the world. That would be a method to not be friends and then therefore not have a lot of pleasure in the world. So instead, there's just going to be a separate evaluation of what is good, in this case, what brings about the most pleasure in the world, and how to actually behave or think or live your life to meet that ideal standard. And similarly in decision theory, there's one question.

0:25:32.7 LB: What do rational decision makers choose? And another question, how do they go about choosing it?

0:25:41.0 SC: Okay, fair enough. And I guess there's just a couple more minor points to get on the table here before we get into risk, which is the other fun part of what you're thinking about. We use the word utility, but this is personal decision making. It's not necessarily connected to utilitarianism, which imagines that we can sort of add up people's utilities.

0:26:03.4 LB: Yeah, that's right. So the word utility in decision theory has come to mean just whatever it is you care about. So you can still be maximizing your utility, your own personal utility, even if you're perfectly altruistic. The decision theorist would just say, Oh, Sean's utility function assigns higher utility to things that are good for other people and lower utility to things that are bad for other people.

0:26:33.7 SC: Right. So it's very flexible in that way. I mean, do people ever claim there's got to be some philosopher out there who makes every possible claim, right? Are there claims that there are situations where people are acting apparently rationally that simply cannot be modeled by attaching utility function to them?

0:26:55.1 LB: So in order to attach a utility function to someone, we're going to need the utility function to have certain properties. And if it doesn't have those properties, then we can't. So one example might be if you assign infinite utility to something, then that's not going to, we're not going to at least be able to maximize expected utility or average utility. So that's like might be an example. It's perfectly rational to care only about this one thing so that it's infinitely valuable. And yet that can't be captured by expected utility. Although I suppose that's still a case of wanting to maximize utility.

0:27:39.6 SC: Yeah. And also, as soon as you start talking about infinity, I think that probably this is not a realistic kind of thing. I can't place infinite utility on anything at all. But I guess the okay, the other footnote is, we talked about how maybe the second $50 is less valuable to you than the first $50. But there's nothing in the formalism that prevents a person from being risk-seeking, right? Like they would rather have the 50-50 chance of $100 than a sure thing of 50, right? That's perfectly okay.

0:28:11.3 LB: Yep, that's right. You can prefer anything you want as long as you're consistent.

0:28:15.4 SC: And is there psychological data about where people usually fall?

0:28:20.0 LB: Well, people tend to be risk-averse when it comes to buying insurance. So, you know, say you have a one in 100 chance of a fire in your house; you'll pay more for that insurance than one one-hundredth of the cost of your house. And people tend to be risk-seeking when buying lottery tickets. So if there's a lottery that has like a one in a million chance of winning a million dollars, you might nonetheless pay more than a dollar for that lottery ticket.
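
In expected-dollar terms, with made-up round numbers, both cases look like this:

```python
# Insurance: paying more than the expected loss (the numbers are illustrative).
house_value, fire_prob = 300_000, 0.01
print(fire_prob * house_value)   # 3000 -> yet many people pay more than $3,000 to insure

# Lottery: paying more than the expected prize.
jackpot, win_prob = 1_000_000, 1 / 1_000_000
print(win_prob * jackpot)        # 1.0 -> yet people pay, say, $2 for the ticket
```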

0:28:50.2 SC: I guess that, yeah, that's a great example. I'm kind of ashamed I didn't think of that. So playing the lottery or many other games in Vegas or whatever are examples where we purposefully take risks even though the expected payoff in dollar terms is less than what we're paying for the right to take the risk.

0:29:08.8 LB: That's right. Yeah, that's right.

0:29:10.7 SC: Do we ever have the, you know, professional rationality conference in Las Vegas? Has that ever happened? Have we seen the philosophers go play the slot machines?

0:29:20.0 LB: There was an APA once, an American Philosophical Association in Las Vegas. And, you know, as a decision theorist, you look at those slot machines, they just look like trash cans for money.

0:29:32.5 SC: Good. So there is an example where the philosophers are doing well at their particular area of expertise. Okay. So the risky business, as it were, is the important thing here, right? I think you said it once, but let's say it again. If I get it right, in expected utility theory, all you need to know to decide what is the rational choice is what are the utilities of different outcomes and what are the probabilities of those outcomes happening, right?

0:30:04.7 LB: That's right.

0:30:07.6 SC: And you would like to say that, well, actually, let me first ask, are we any good at really evaluating probabilities? The examples are always like flipping coins and going to Vegas or whatever. But if it's like real world things, should I take this job or something like that. That sounds hard.

0:30:26.3 LB: Yeah, it's pretty hard. I think some things like we're pretty good at, you know, we get weather forecasts. They're usually pretty specific, 30% chance of rain. We're pretty good in those cases where there are possible experts that can tell us things. And usually they're phenomena that behave pretty regularly, like, you know, like weather or cards or maybe the stock market doesn't behave super regularly, but a little more regularly than my job prospects behave or something like that.

0:31:02.7 SC: Okay. Okay. So then, but your big argument in the book and elsewhere is that maybe this minimal set of ingredients of knowing what all the options are, with their utilities and the probabilities of them happening, is not all that we can or should take into account. And speaking of the examples from your book, I had a friend notice that I was reading your book, and they picked it up and were like, this is very technical. Some of the examples get very complicated, but is there a simple way to ease us in to this discussion?

0:31:38.4 LB: Yeah. Yeah. So here's a simple way to ease us in. Here's something you might care about. You might put more weight on worst-case scenarios than on best-case scenarios, in the sense that whatever happens in worst-case scenarios might play a much larger role in your decision making than what happens in best-case scenarios. If you do that, then there isn't going to be a utility function and a probability function that can capture your decision making. So expected utility theory is going to say you're irrational.

0:32:17.3 SC: Okay.

0:32:18.5 LB: Do you want to talk about some examples?

0:32:21.4 SC: We got to do some examples because I would think when you say that, well, isn't it saying that we worry about the worst possible outcome just to say that we put a big negative utility on the worst possible outcome?

0:32:33.0 LB: So if we did that in one decision, then that's what expected utility could say. But if in every decision you care more about the worst case than the best case, there isn't going to be a utility function that can capture you.

0:32:49.0 SC: Okay. Okay. Good. So maybe an example would help make that clear.

0:32:52.7 LB: Okay. So there are two examples I like. One is called the Allais paradox. Maybe we could talk about both of them and see if you find either of them compelling.

0:33:03.2 SC: Great.

0:33:05.9 LB: Okay. So may I ask you, Sean, for some of your preferences?

0:33:09.2 SC: Please.

0:33:11.6 LB: Okay. So here's two gambles. You can have either one. The first gamble, a 10% chance of $5 million, and the second gamble, an 11% chance of $1 million.

0:33:24.9 SC: I'll take the 10% chance of $5 million.

0:33:29.2 LB: You'll take the 10% chance of $5 million. Okay.

0:33:29.4 SC: Wait. Sorry. Yeah. So say it again to make sure.

0:33:32.6 LB: Yeah. So a 10% chance of $5 million or an 11% chance of $1 million.

0:33:39.3 SC: Yeah. I'm just going to be the rational person and take the 10% chance of $5 million.

0:33:42.0 LB: Sounds great. Most people share your preference.

0:33:43.0 SC: Yeah. I'm not weird that way.

0:33:45.1 LB: Yeah. Okay. Here's the second choice. Forget that first choice. Now, this one is slightly more complicated, so you might have to listen hard.

0:33:54.1 SC: Okay. I might write it down. Yeah.

0:33:56.7 LB: Yeah. Write it down. Okay. So an 89% chance of $1 million and a 10% chance of $5 million and a 1% chance of nothing. So that's like an entire gamble. 1% chance of nothing, 89% chance of $1 million, 10% chance of $5 million.

0:34:15.0 SC: So probably I'm going to get $1. Maybe I'll get $5. Tiny chance I'll get nothing.

0:34:19.7 LB: Yeah. Millions of dollars. These are millions of dollars.

0:34:22.1 SC: Millions of dollars. Does it matter? Do the units matter? Oh, actually, that's an interesting question.

0:34:28.2 SC: It is. Yeah. Okay.

0:34:29.5 LB: Yeah. We can think about that but.

0:34:31.8 SC: Millions of dollars. Good. That's motivating me.

0:34:33.7 LB: Millions of dollars. Okay. So that's one option. Second option, $1 million for sure.

0:34:41.3 SC: Okay.

0:34:42.4 LB: So which of those two would you rather have? That complicated gamble that's probably going to get you $1 million, has a 1% chance of getting you nothing, and some chance of getting you $5 million? Or would you rather me just give you $1 million?

0:34:58.5 SC: So just to talk it through, I think the answer is I would like the first one better. Okay. And I think that's probably the answer you want me to give. But you don't care. This is not a normative discussion.

0:35:12.2 LB: I just want to get to know you better, Sean. That's all.

0:35:13.3 SC: Yeah. This is a weird dating site, asking people about their rational preferences under conditions of risk. So the first option says, again, most likely $1 million, but a non-negligible chance of $5 million and a tiny chance that I get nothing at all versus 100% chance of $1 million. And I think that the reasoning would be that I'm willing to risk the 1% chance of nothing to get the 10% chance of $5 million. Yes.

0:35:44.0 LB: So that's your actual preference? Yeah. You'd rather have that one?

0:35:47.6 SC: I'd rather have the first option with 89 and one. Yeah.

0:35:50.3 LB: Okay. Well, most people would rather have the second option.

0:35:55.6 SC: Really?

0:35:56.4 LB: Most people would rather just have $1 million than to risk getting nothing in exchange for maybe getting $5 million.

0:36:04.4 SC: So just to make it just a, I want to keep saying it out loud so everyone follows along. Sure, sure. So in the mixed probability one, there is a 99% chance that I get at least $1 million. And within that, there's some chance it's $5 million, and I'm risking the 1% chance of getting nothing. And I do know, as someone who's played poker, that 1% chances do happen. So my reasoning is not that 1% equals zero, because that's certainly false. But I think I'd be willing to take that much risk.

0:36:34.4 LB: Okay. Yeah. Okay. Well, that's interesting. Most people are unlike you in that respect. Okay. And so your preferences actually can be captured by expected utility maximization.

0:36:46.4 SC: Okay.

0:36:47.3 LB: But if you have the standard preference, here's sort of how the reasoning goes. If I'm choosing between a 10% chance of $5 million and an 11% chance of $1 million, that first choice I gave you, I will probably get nothing either way. The minimum value of both is nothing. So as far as worst cases go, it doesn't much matter whether I go for the $5 million or the $1 million, so go for the bigger prize. But in the second choice I gave you, one of the options gave me $1 million no matter what. The worst I can do in that gamble is get $1 million. And so if you're the kind of person that weighs worst cases heavily, you don't even have to weigh them that heavily.

0:37:35.3 SC: I see. Yeah.

0:37:35.9 LB: You just have to weigh them sort of more than you weigh best cases. Then your expected utility maximization isn't going to be able to capture you because there's no utility function that allows for that pattern of preference.

0:37:52.0 SC: Good. So maybe said in another way, there is a rational way of behaving under which the introduction of even a tiny chance of a super bad outcome weighs very, very heavily in your decision process.

0:38:06.5 LB: Yeah. Yep. That's right. That's right. And by the way, you mentioned poker, and I just want to point out that your reasoning should maybe be a little different in this case than in poker, because when I play poker, I'm going to be playing a lot of hands. So whatever losses I take are going to be made up by gains if I just pick the thing that does best on average.

0:38:31.2 SC: Sure.

0:38:33.9 LB: On the other hand, if I'm just going to make a choice once, like these gambles, I'm only going to offer them to you once, say, or the choice of which job to go for, the choice of who to marry, we can't employ the reasoning that says, look, if this were to be repeated many times, I'd do better on average by taking this thing because it's not going to be repeated many times.

0:38:58.7 SC: I think it's a very, very good point. Even within poker, there's a difference between playing just at a table versus playing in a tournament, right, because playing at a table, for exactly the reasons you just said, you in every hand want to maximize your expected return, but that strategy.

0:39:18.4 SC: Might not maximize the chance that you win a tournament because if you bust out, you can't win, right? You can't climb back in.

0:39:23.9 LB: Yeah, yeah, yeah.

0:39:25.6 SC: And I also, I appreciate the idea that if I were to be given these two options and I got unlucky and I had to come home and tell my spouse that I just won zero dollars even though I could have won a million dollars, that would be a difficult conversation to have.

0:39:39.1 LB: That would be a difficult conversation to have, yes. Which is why this is a really good dating question.

0:39:43.3 SC: Right, exactly.

0:39:44.2 LB: You've got to know if you're on the same page.

0:39:45.7 SC: Make sure you're compatible about these kinds of things.

0:39:48.1 LB: Exactly, exactly.

0:39:50.4 SC: Okay.

0:39:51.3 LB: Can I give you one more example?

0:39:51.4 SC: Please, yeah.

0:39:52.3 LB: Okay. So the Allais paradox, the example I just gave you, involved large amounts of money and small probabilities. So we might think maybe people are getting confused by the big numbers or something. So here's an example that doesn't involve values like that. And this is an example due to Matthew Rabin. So let me just ask you, I'm just going to give you a coin flip between losing $100 and gaining $110. Okay. Do you want that coin flip or not?

0:40:31.4 SC: No.

0:40:32.4 LB: No. Okay. Most people don't want that coin flip. Now let me offer you a coin flip between losing $1,000 and gaining $6 million.

0:40:45.2 SC: I'll take that one.

0:40:46.9 LB: You'll take that one. Good. Most people would. In fact, this is another example of a preference that expected utility can't handle.

0:40:57.9 SC: Really?

0:40:58.0 LB: And the basic idea, yeah, the basic idea is that if you don't like that initial coin flip between losing $100 and gaining $110, then if you're an expected utility maximizer, your utility function has to be pretty concave. It has to say that the interval between zero and $110 matters less to you than the interval between negative $100 and zero. And if your utility function is already that concave at those low amounts of money, then once you get to losing $1,000, avoiding that loss is actually going to be worth more to you than gaining any amount of money whatsoever.

0:41:51.4 SC: I see. Okay.

0:41:53.1 LB: So the idea is that in order to explain a small amount of risk aversion at low stakes, we have to imply an implausibly large amount of risk aversion at high stakes.

0:42:09.6 SC: So this sounds like a result that is a little mathy, and maybe you're hiding some of the math from us, but we'll trust you that it's a result. But the, I bet I could wriggle out of it if you gave me arbitrary freedom in my utility function. And presumably by utility function you mean assigning different amounts of utility to these gains and losses in money.

0:42:30.8 LB: The only restriction on your utility function is that it has to be, well, using a math term, everywhere differentiable. Basically, it has to be smooth.

0:42:40.1 SC: Really? That's all?

0:42:42.0 LB: Yep, yep, yep, yep. And the sort of intuitive idea is that if you don't like this initial coin flip, which has just a little bit more gain than loss, then you really don't value additional amounts of money very much.

0:43:13.2 SC: But I'm surprised that I'm not allowed to not value $110 very much, but value $6 million a lot.

0:43:24.6 LB: Right. Well, so. Oh, sorry. I guess the other assumption is that your utility function is going to be concave. So if you had a utility function that increased very slightly up to $100,000 and then shot up into the air, then you'd be allowed to do that.

0:43:46.3 SC: Okay, good. All right.

0:43:46.4 LB: But the actual result is that if you don't like this coin flip regardless of your initial wealth level, then you won't like the coin flip between losing $1,000 and gaining any amount of money whatsoever at your current wealth level.
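
A rough numerical rendering of the mechanism (this is the flavor of Rabin's calibration argument, not his exact proof): rejecting the flip at wealth w forces u(w+110) - u(w) < u(w) - u(w-100), so marginal utility roughly $210 higher up is at most 10/11 of what it is now, and if the flip is rejected at every wealth level that factor compounds.

```python
# If the 50-50 lose-$100 / gain-$110 flip is rejected at every wealth level,
# marginal utility must shrink by at least a factor of 10/11 roughly every
# $210 of extra wealth. Far-off gains then become worth almost nothing, which
# is the flavor of why, on the expected-utility picture, even a 50-50 shot at
# a huge gain can't offset a $1,000 loss.

ratio, span = 10 / 11, 210

for extra in (210, 2_100, 21_000, 210_000):
    bound = ratio ** (extra // span)
    print(f"${extra:,} richer: marginal utility at most {bound:.2e} of today's")
# $210 richer:     at most ~9.1e-01 of today's
# $2,100 richer:   at most ~3.9e-01
# $21,000 richer:  at most ~7.3e-05
# $210,000 richer: at most ~4.0e-42
```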

0:44:12.2 SC: So this seems to be even more vividly than the first one. As you said, a real challenge for expected utility theory. So just to say it once again, given the choice between losing $100 and gaining $110 on a coin flip, most people will not want the coin flip at all because there's barely a difference and you don't want to lose $100. Given the coin flip between losing $1,000 and gaining $6 million, most people would willingly take the risk of losing $1,000.

0:44:39.6 LB: That's right.

0:44:40.8 SC: To gain the $6 million at a 50% chance, but there's no way to accommodate that within expected utility theory under certain reasonable assumptions. Yeah.

0:44:48.3 LB: That's right. And can I give you one more example?

0:44:50.8 SC: Yes, of course. I love the examples.

0:44:52.4 LB: A lot of them are pretty mathematical. This is an example that is just supposed to illustrate that there are two different psychological phenomena, so maybe we want to capture them in our theory of rationality. Okay, so you really like coffee. In fact, if you sort of think about it, you think, I like the second cup just as much as the first. I like the third cup just as much as the second.

0:45:26.1 LB: The fourth cup just as much as the third. You're a coffee addict. I, on the other hand, can't tolerate that much coffee. Once I've had one cup, I don't want another cup. So you and I have different attitudes towards coffee, but we also have different attitudes towards gambles. For you, you think the possibility of doing pretty well is not going to make up for the possibility of not doing so well. So you weigh worst-case scenarios really heavily. Okay. Me, on the other hand, I weigh all scenarios the same. So let's think about each of our preferences between one cup of coffee on the one hand and a coin flip between no cups of coffee and two cups of coffee.

0:46:13.2 SC: Sure.

0:46:13.8 LB: Well, I don't want any more than one cup, so I'm going to take one cup of coffee.

0:46:17.3 SC: Guaranteed, yeah.

0:46:19.8 LB: You like each cup just as much as the last, but you don't like taking risks. You weigh the possibility of not getting anything more heavily than the possibility of getting the two cups. So you also prefer one cup of coffee. Now because we have the same preferences, expected utility theory is going to have to say we have the same utility functions.

0:46:52.0 LB: But there are two really different psychological explanations for why we chose the way we did. And you might think we'd like a theory that captures the differences between our two preferences.

0:47:00.7 SC: I would like that theory. So what is that theory?

0:47:04.7 LB: So that theory is, instead of maximizing the average utility, you maximize a weighted average, where if you give more weight to the bottom portion of your probability distribution than the top portion, I call you risk avoidant. And if you give more weight to what goes on at the top portion, I call you risk inclined.

0:47:35.9 SC: So in other words, in addition to just having utilities for different outcomes and probabilities of getting them, we also each have a risk aversion curve.

0:47:49.5 LB: That's exactly right. And it's convex if you're risk averse. So as you're thinking about what happens in worse and worse cases, you care about them more and more.

0:47:55.2 LB: And it's concave in the case of risk inclination. So as you're thinking about what happens in better and better scenarios, you care about them proportionally more and more.
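
A minimal sketch of this kind of weighted average, in what I take to be the standard rank-dependent form: order the outcomes from worst to best and weight each improvement over the next-worst outcome by r(probability of doing at least that well). With r(p) = p it reduces to ordinary expected utility; a convex r is risk avoidant, a concave r is risk inclined:

```python
def reu(gamble, u=lambda x: x, r=lambda p: p):
    """Risk-weighted expected utility of a list of (probability, outcome) pairs.
    u is the utility function, r the risk (probability-weighting) function."""
    outcomes = sorted(gamble, key=lambda pair: u(pair[1]))   # worst to best
    value = u(outcomes[0][1])                                # start from the worst case
    prob_at_least = 1.0
    for (p_prev, x_prev), (p, x) in zip(outcomes, outcomes[1:]):
        prob_at_least -= p_prev                  # chance of doing at least this well
        value += r(prob_at_least) * (u(x) - u(x_prev))
    return value

coin_flip = [(0.5, 0), (0.5, 100)]
print(reu(coin_flip))                        # 50.0  -> r(p) = p: plain expected utility
print(reu(coin_flip, r=lambda p: p ** 2))    # 25.0  -> convex r: risk avoidant
print(reu(coin_flip, r=lambda p: p ** 0.5))  # ~70.7 -> concave r: risk inclined
```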

0:48:11.1 SC: Okay. And you don't care, they're both very... You can fit them both into the theory.

0:48:16.5 LB: That's exactly right. And I think both people are behaving fully rationally.

0:48:16.6 SC: Right. I guess so that's the big thing. So there might be other people, philosophers who remain nameless, I guess, who will say, no, these people who aren't following what we think are their utility functions are just irrational. And you're saying that, no, they just have different attitudes towards risk. But under the idea that being rational is just being consistent in your preferences, they're still perfectly rational.

0:48:40.3 LB: That's exactly right.

SC: Okay, good. And so, I mean, maybe put some more meat on these bones. Can you apply that idea to the examples, or some of the examples, that we were doing?

0:48:52.8 LB: Sure, sure. So in the first example, with the million dollars and the five million dollars, the standard person, not you, because you didn't have the standard preferences, but the standard person assigns more weight to the bottom 1% of the distribution than to the top 1% of the distribution. So the fact that you might get zero dollars gets weight maybe, you know, 0.02 instead of 0.01. In the example with the coin flips, maybe most people would assign weight three-quarters to the worst option, and one-quarter to the better option, given that they both have a 50% chance of happening.

0:50:02.9 LB: So even if you had a linear utility function, if you're assigning weight three-quarters to losing $100 and weight one-quarter to gaining $110, that's going to work out to be a negative number. So less than zero, i.e., worse than doing nothing.
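
In numbers, with the linear utility function and the three-quarters / one-quarter weights Lara mentions:

```python
worst, best = -100, 110
print(0.75 * worst + 0.25 * best)   # -47.5 -> worse than zero, so decline the flip
print(0.50 * worst + 0.50 * best)   # 5.0   -> plain expected utility would accept it
```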

0:50:14.7 SC: Okay, good.

0:50:15.4 LB: And then in the coffee cup example, your utility function for coffee cups was maybe utility of X coffee cups equals X. My utility function for coffee cups was maybe utility of zero coffee cups equals zero, utility of one or more coffee cups equals one. My risk function was linear, so all states get weight equal to their probabilities, whereas your risk function assigned, you know, maybe two-thirds weight to the bottom 50% of the distribution.
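
The coffee example in numbers; the exact functional forms here are my own illustrative rendering of the weights Lara describes:

```python
u_sean = lambda cups: cups                    # every cup as good as the last
u_lara = lambda cups: 0 if cups == 0 else 1   # no value past the first cup

# One cup for sure, versus a coin flip between no cups and two cups.
# Sean weights the worse half of the flip 2/3 and the better half 1/3 (risk avoidant);
# Lara weights the halves by their probabilities, 1/2 each.
sean_flip, sean_sure = (2/3) * u_sean(0) + (1/3) * u_sean(2), u_sean(1)   # ~0.67 vs 1
lara_flip, lara_sure = 0.5 * u_lara(0) + 0.5 * u_lara(2), u_lara(1)       # 0.5 vs 1

print(sean_sure > sean_flip, lara_sure > lara_flip)   # True True: same choice,
                                                      # two different explanations
```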

0:51:04.3 SC: Okay, I think I am seeing it. Let me modify the example of the 1 million, 5 million, nothing, because maybe this is not a realistic choice, but I think it helps me understand why a typical expected utility person would get in trouble with this. I could easily imagine if you said there's a coin flip and you get either nothing or, let's say, $20 million. Coin flip between nothing and $20 million, or guaranteed $1 million. I think I would take the $1 million guaranteed, and it's not even about risk aversion in that case. It's about my utility function, right? Like the $20 million.

0:51:49.5 SC: Is not as valuable. But if you said to me 98% chance of a million, 1% chance of nothing and 1% chance of 20 million, I might prefer that to a guaranteed million because my thought process would be like, you know, probably I'm getting a million, but I kind of like the idea of having a little bit of a chance of getting $20 million. And so basically it's exactly the same comparison, but with an extra 98% chance of a million dollars guaranteed.

0:52:22.4 SC: And so if I do prefer one example, then I should be consistent in my preferences according to expected utility. And I'm not allowed according to expected utility theory to just accept the chance of a loss because I like the thrilling chance of a 1% huge gain, but not have the same thing when it's 50-50.

0:52:41.4 LB: That's exactly right. Yeah, that's exactly right. And the preference you just described yourself as having is a risk inclined preference, by the way. So you care more about what happens at the top 1% of the distribution than at the bottom 1% of the distribution.

0:52:58.2 SC: But that's compatible with me if I were just given 50% chance of nothing, 50% chance of 20 million, not preferring that to 100% chance of 1 million.

0:53:08.4 LB: That's right. On my view.

0:53:08.5 SC: On your view. Okay.

0:53:08.6 LB: Yeah, yeah, yeah.

0:53:08.7 SC: Good.

0:53:08.8 LB: That's right.
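
One way to check that compatibility is to pick an illustrative utility and risk function and run all three gambles through the same weighted-average recipe as above; the numbers here are my own, chosen only to show that such a combination exists:

```python
# u(0) = 0, u($1M) = 1, u($20M) = 1.5 (diminishing value for money), and a
# mildly risk-inclined weighting r(p) = p ** 0.8 applied to the probability
# of doing at least that well.
r = lambda p: p ** 0.8
u0, u1, u20 = 0.0, 1.0, 1.5

sure_million = u1                                                # 1.0
fifty_fifty  = u0 + r(0.5) * (u20 - u0)                          # ~0.86
long_shot    = u0 + r(0.99) * (u1 - u0) + r(0.01) * (u20 - u1)   # ~1.004

print(sure_million, fifty_fifty, long_shot)
# The sure million beats the 50-50 flip at $20M, but the 98/1/1 gamble with a
# small shot at $20M beats the sure million -- exactly the pattern Sean described.
```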

0:53:08.9 SC: So I guess as a physicist, I would worry that there's curve fitting going on here. You've added a whole new parameter to our model of rationality. You're able to fit more circumstances, but there's a famous line about how if you give me five parameters, I can fit an elephant, or something like that. Is there a worry that we're just giving people too much freedom, or is that just the real world? We're just being realistic.

0:53:34.8 LB: Well, this is why for me it's important to really try to figure out what we're capturing with these parameters. So yeah, you're absolutely right: if you give me more parameters, I can capture more preferences. Great. But what I'm interested in is whether these parameters capture what it is to reason instrumentally, which means reason about taking the means to your ends. And I think that choice involves not just taking an average. That's one way you might take the means to your ends, but importantly, it's actually open to you what to do. It's open to you how much weight to put on worst cases or better cases.

0:54:28.3 SC: Right. Okay.

0:54:31.4 LB: Yeah. So in that sense, data fitting isn't the ultimate standard here, including because it might be that people are irrational. So this is a theory of rationality. So it's probably not going to fit the data. So our data here, so to speak, is more philosopher's data about what we should and shouldn't be allowed to prefer and what could count as reasoning or preferences that are taking the means to your ends.

0:55:11.1 SC: Okay. Okay.

0:55:12.7 LB: And maybe another way to think about it, if it's maybe hard to conceptualize probabilities, you can instead think about each possible state of the world as representing one of your future possible selves.

0:55:39.7 LB: It's like there are a hundred future possible Seans. Which would you rather: giving all the future possible Seans a million dollars, or giving 98 of them a million dollars, one of them nothing, and one of them $20 million? And whereas the expected utility theorist will say there's a unique answer to that question about how Sean should value his future possible Seans, namely that you should give them all equal weight in decision making, I say, no, actually it's up to you. If you want to put more weight on how things go for the worst off possible Sean, that's a reasonable way to take the means to your ends. That's a reasonable way to sort of cash out the maxim of, I'm trying to get what I want. On the other hand, if you, as I guess you do, put a lot of weight on the best off future possible Sean, that's also a reasonable thing to do. In either case, you only have one life to live. Only one of these guys is going to be actual Sean. So it's up to you to think about how much weight to put on each of their interests, knowing that only one of them will be actual.

0:57:01.0 SC: You know, it only now dawns on me, this is very embarrassing, but I have to think about this in the context of the many worlds interpretation of quantum mechanics, which I'm kind of a proponent of. So the whole point of many worlds is that what we think of as probabilities really are actualities. Well, quantum probabilities, not any old probabilities. But if we did our choice making via some quantum random number generator, then yeah... This might be a life changing moment for me, because I've always taken the line that there's no difference in how we think ethically or morally in many worlds versus just a truly stochastic single world.

0:57:44.7 SC: But you're saying, well, my attitude towards risk aversion could in principle change that.

0:57:45.2 LB: Yeah, that might be right. Yeah. So if you think all possible Seans are actual Seans, that's the view, right?

0:57:54.4 SC: Yep.

0:58:03.0 LB: Then actually you're facing the problem of distributive justice. So you should decide, or it's plausible that you should decide, as you might decide when you're just thinking about how much to weigh the interests of each of a hundred people. And we have a similar question in that space. Should I care about the average person, how things go on average, or should I give more weight to the people that are worst off in virtue of the fact that they're the worst off? If you think the latter thing and you have the physics interpretation you just explained, then maybe you should be risk avoidant.

0:58:39.9 SC: All right. I'm going to have to think about this. I'm going to have to sleep on this before I actually have an epiphany here. But I was just actually, I don't know if you're on Twitter, but I was just actually arguing with David Papineau on Twitter about exactly this because he was trying to say that there are moral implications of many worlds and I was trying to say, no, there's not.

0:58:58.0 SC: So I might have to back down.

0:59:30.5 LB: Well, there is one, can I mention, there is one difference between the cases that, if you don't want to rethink your entire life, you might be happy to hear about. So in the case of you distributing over all the possible Seans, they're all you. So you might think that you're allowed to follow a different ethical principle when they're all you than you are when they're all strangers.

0:59:30.5 SC: Yeah, but I think that to be consistent in many worlds, they're all future versions of me, but once they come into existence, they're all separate people.

0:59:34.8 LB: I see. Okay. I see.

0:59:35.4 SC: Yeah. They have to be treated separately.

LB: You reject the idea that...

0:59:38.2 SC: They're not all me.

0:59:41.4 LB: Okay. Well then.

0:59:41.5 SC: I know.

0:59:41.6 LB: Yeah, you might just have to.

0:59:42.1 SC: Up the creek.

0:59:42.5 LB: While they rethink things.

0:59:45.0 SC: Well, all of the words that you have been saying right now, I mean, it's clear there's this relationship here between a slightly different context, which is some kind of Rawlsian theory of justice, right? So John Rawls famously imagines that we try to put ourselves in an original position where we don't know who we are in the world and ask, how should we arrange the world in the way that we'd be happiest with? And he ends up with a very, what might be called a very risk averse suggestion that we want to arrange things to the benefit of the worst off people. So is that a real connection or is that just a rhetorical similarity?

1:00:23.1 LB: That is a real connection. So there are actually two people writing in this area who ended up with different views and one is John Rawls and one is John Harsanyi.

1:00:42.3 LB: So they both agree that when we're thinking about what's best for society, for example making a simple choice, should everybody have $50 or should half our people have nothing and half our people have $100, making that decision is the same as deciding which society you would rather live in if you didn't know ahead of time who you were going to be. But they had really different views about how you would make that decision. So Harsanyi thought, well, you assign equal probability to being each person and you maximize expected utility, therefore both societies are equally good. You're indifferent between being an arbitrary member of each society, therefore each society is just as good as the other. Rawls, on the other hand, thought, I can't assign probabilities to my being any particular person in this society, and when I can't assign probabilities, I use a maximally risk averse or pessimistic or ambiguity averse rule, namely I pick the society, or I pick the choice, with the highest minimum. I maximin. Therefore he would rather be an arbitrary member of the society in which everyone got $50, so he thinks that's the moral choice.

1:02:13.7 LB: Now, if you'll notice, that's a really, really risk averse view. That's one that turns only on the worst off person. The second worst off person, we don't care about how things go for him at all. In terms of our society, that might be like caring only about the poor, not giving any weight whatsoever to the middle class. That doesn't seem good. That seems way too concerned with one part of the distribution. Harsanyi, on the other hand, who derives that we should care about the average member of society, that's not sensitive to how utility is distributed at all. You might think that's really bad. It would mean a society with really poorly off people and really well off people is just as good as a society in which everybody is sort of in the middle. My view allows for an intermediate position between these two guys, because I think in the original position, if I don't know which member of society I'm going to be, I should use a risk avoidant decision theory. I should give more weight to worst case scenarios, but not exclusive weight to worst case scenarios. Maybe if there are just three people in my society, the worst off guy, the middle guy, and the best off guy, I should give, I don't know, weight two-thirds to the worst off guy, weight just slightly less than one-third to the middle guy, and just a little bit of weight to the best off guy.

1:03:54.3 LB: This is going to have the result that I care a lot about how things go for the worst off, and still care about how things go for the second worst off. I should care a lot about the poor, but should also care significantly about the middle class. And we should even care a little bit about the very rich, though much less than we care about the other classes. I think this is a really good result. I think it fits with how many of us in fact think about the morality of distributing resources across society, and I think it allows us to give an answer that's sensitive both to the total amount of wealth in a society and to how it's distributed.
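
To see the contrast concretely, here is a minimal sketch in Python. The numbers are illustrative assumptions only: dollars stand in for utility, and the rank weights are a hypothetical choice in the spirit of the three-person example above, not figures from Buchak's formal theory. It scores an equal society against an unequal one under an averaging rule, a maximin rule, and a rank-weighted intermediate rule.

```python
# A minimal sketch: dollars stand in for utility, and the rank weights are
# made up for illustration; this is not Buchak's formal apparatus.

def average(utilities):
    """Harsanyi-style rule: a society is as good as its average (expected) utility."""
    return sum(utilities) / len(utilities)

def maximin(utilities):
    """Rawls-style rule: a society is as good as its worst-off member's utility."""
    return min(utilities)

def rank_weighted(utilities, weights):
    """Intermediate rule: weight people by rank, worst off first.

    `weights` gives one weight per rank (summing to 1); putting more weight
    on early ranks means caring more about the worse off.
    """
    ranked = sorted(utilities)  # worst off first
    return sum(w * u for w, u in zip(weights, ranked))

# Hypothetical weights for a three-person society, roughly in the spirit of
# the example: about two-thirds on the worst off, a bit under one-third on
# the middle person, a little on the best off.
weights = [0.65, 0.30, 0.05]

equal_society = [50, 50, 50]     # everyone gets $50
unequal_society = [0, 50, 100]   # same total, spread out

print("average:      ", average(equal_society), average(unequal_society))
print("maximin:      ", maximin(equal_society), maximin(unequal_society))
print("rank-weighted:", rank_weighted(equal_society, weights),
      rank_weighted(unequal_society, weights))

# average:       50 vs 50  -> indifferent between the two societies
# maximin:       50 vs 0   -> sensitive only to the worst off person
# rank-weighted: 50 vs 20  -> penalizes inequality without ignoring everyone
#                             but the worst off
```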

1:04:41.2 SC: And it's kind of compatible with the idea that, at the personal utility level, I would rank a sure $1 billion above a 50-50 chance of $2 billion versus zero, right? The second billion doesn't count as much to me, and similarly, you're saying the happiness of the billionaires in society counts less to you than the happiness of the less well off people.

1:04:58.6 LB: That's right. That's right.

1:05:00.5 SC: But they both count.

1:05:00.7 LB: Yeah, that's right. And it's not just that they count less because money can buy them fewer things. It's that even taking into account how much well-being each bit of money can give you, we should care more about the worst off.

1:05:17.5 SC: And one worry about...

1:05:19.9 LB: Oh, sorry, go ahead.

1:05:21.7 SC: One worry about Rawls's theory is that it's literally the single worst off person who is running things. So even if you have 100 million very happy middle class people, if there's one person who's really suffering, that's no good in his theory.

1:05:43.2 SC: You're much more willing to put up with that in some sense.

1:05:43.3 LB: I'm willing to put up with that, yes. My theory doesn't have that result.

1:05:46.4 SC: That's right. Yeah.

1:05:48.1 LB: So you're saying, yes, I'm willing to put up with one person being a lot more poorly off if it gets a sufficient number of people sufficiently better off.

1:06:08.7 SC: On the one hand, has anything like Rawls's criterion ever come close to being implemented in any world? Have we ever had a society that really tried to maximize things for the worst off person anyway?

1:06:22.3 LB: No. Pretty much not.

1:06:36.8 SC: Okay. But again, you've introduced sort of a new degree of freedom, right? So you're saying, yes, I want a weighting of people's welfare that's skewed in some way. What's the right one to use? Is there a way to answer that?

1:06:36.9 LB: Good question. So building on the analogy between individual decision making and societal well-being, as Harsanyi and Rawls do, I want to think about what principles we should use when making decisions for other people. And that'll hopefully help us figure out what we should do in this case. So I think when making decisions for other people, we should be as risk averse as it's reasonable to be. We have these sorts of ideas like, don't take a risk for someone else unless you know they'd approve of it, or something like that. So if we can think about how risk averse it's reasonable to be, in other words, how much weight we could reasonably put on, say, the bottom 10% of our distribution, then whatever number that is, we're going to end up putting that much weight on the worst off 10% in society.

1:07:41.4 SC: Okay. So it does kind of fit together nicely is the idea.

1:07:41.6 LB: Yeah. Yeah, exactly.

1:07:43.0 SC: And since you published this, does everyone say, oh yes, she's completely right, we agree? Or are there still critics out there who think you're...

1:07:53.2 LB: Absolutely. Everyone thinks I'm right.

1:07:53.3 SC: That's good. Okay. But if we imagine, hypothetically, that someone might want to disagree, what would the criticism be?

1:07:58.8 LB: Yeah. So, are you thinking... there are different kinds of criticisms, both of the decision theory and of the theory about societal distribution?

1:08:08.6 SC: For the decision theory.

1:08:10.1 LB: For the decision theory. Yeah. So a lot of the way this kind of debate goes is that there are these popular arguments in favor of expected utility maximization. And I try to show none of them work. They just end up with some weaker principle. Like I told you, the argument about how things would go if we were to repeat the decision over and over again. I think that's not a good argument for expected utility maximization, because we're not always going to repeat decisions over and over again. So all the arguments against my view are sort of in that vein. And one argument, which I think is pretty interesting, is about how people who are risk avoidant in my sense will behave across time and how they'll respond to information. So I mentioned earlier in the podcast that if you're an expected utility maximizer, you're always going to want to make decisions on the basis of more information rather than less. But if you're a risk weighted expected utility maximizer, which is what I call people who follow my view, then sometimes you're going to prefer to make a decision with the information you have rather than to get more information.

1:09:35.3 LB: And some people think that shows that risk weighted expected utility maximizers are irrational.

1:09:44.9 SC: Irrational, yeah.

1:09:46.1 LB: Well, it's just a theorem of rationality that...

1:09:54.3 SC: More information.

1:09:55.3 LB: You should always make decisions on more information rather than less.

1:10:04.7 SC: Can you give an example of where more information... I mean, is it that more information is just not helpful, or is it actively harmful?

1:10:12.9 LB: Well, the basic idea is that when you're getting more information, it's often not information that directly tells you what's true in the case relevant to your decision. So for example, if you were thinking about taking a coin flip and you could get information about whether the coin would land heads or tails, that's always going to be good. But there are going to be some cases in which information will talk you out of taking a risk that would have been a good risk to take. And those are the examples in which the risk avoidant person is not going to do better with more information. So in particular, suppose you already have a high probability that the course of action you're thinking about will work, and you're considering getting information that might put you into a state where you're like 50-50.

1:11:14.0 LB: About whether it will work. You're not sure it'll work. You're not sure it won't work. Those are going to be the kinds of cases in which the risk avoidant person isn't going to want more information.

1:11:25.6 SC: But your attitude is that there's no incompatibility between not wanting more information and still being rational.

1:11:31.7 LB: That's right. Yeah, that's right. My attitude is that the more basic question is how we should take risk into account in decision making. And then it's going to be an interesting upshot of our theory, of the true theory, whether information is always good or whether it's sometimes bad. Whereas my critics might say, no, it's part of the data that information should always be good.

1:11:55.8 LB: So however we take risk into account, we want our theory about that to respect this data.

1:12:07.8 SC: So maybe for the final question, has thinking about risk and rationality in this way changed your own way of thinking about risks? Or should it change anyone's way? Or is it merely a way of sort of reassuring ourselves that we're being rational?

1:12:17.5 LB: Yeah, great question. So two things I want to say, and one is how we should think about ourselves and one is how we should think about other people. So I got into decision theory, as I mentioned, because I'm really bad at making decisions. I hoped there would be some mathematical theory that would just tell me what to do.

1:12:41.9 SC: Straighten everything out.

1:12:42.5 LB: Follow that. After exploring the question of how to take risk into account, it turns out that even more is up to me than I thought. So the sort of upshot of that is that there's this additional thing that I have to think hard about, which is do I want to be the kind of person that sort of like does pretty well no matter what? Or do I want to be the kind of person that goes for things where things might turn out amazing but might turn out horribly? That's something you might have to think about when you choose a job. It's maybe something you have to think about when you choose a romantic partner or a place to live.

1:13:35.6 LB: So the upshot is like that's going to be a feature of you that you have to set. And there's like no right and wrong way to set it. The romantic that's like, as long as there's a chance, I want to go for it, is doing something fine. And then the sort of cautious person that's like, no, no, no, I just want to make sure things go better no matter what, is also doing something fine. So that's the sort of personal upshot. The upshot for how we think about other people is that this is like an additional fact that might vary between people. And just to put it in more concrete terms, during COVID, there were all these arguments about what restrictions we should have. For example, should we go out to restaurants or not? That was like a question people disagreed about. And one thing I noticed is that when people disagreed, they often thought the other person had really different values from them. So like, oh, you want to go out to restaurants because you don't care about grandparents. You want grandparents to die. That's why you want to go to restaurants. On the other hand, you had the person who didn't think we should have a lot of restrictions saying something like, well, you don't care about friendship.

1:14:57.2 LB: You don't care about people's livelihoods. You just want people to be lonely and not have their jobs. So they thought of each other as having different values. In decision theory terms, we would say as having different utility functions. But if you notice that people can have different attitudes towards risk that don't show up in the utility function, you could sort of think, actually, these people had shared values. Everybody doesn't want grandparents to die. Everybody cares about that. Everybody cares about each other's work, each other's relationships, and so forth. It's just that one group of people was weighing worse scenarios a lot more heavily than the other. So in fact, they were both being rational in their preferences, and they were both responding to the right values. It's just that they had these different risk attitudes. And now maybe it's a separate question of given that people have different risk attitudes, what should we do in making rules for all of us? But the first thing is to realize that somebody might want to do things differently than you in one of these risky cases without having different values from you.

1:16:15.5 SC: Is this like one of the big five personality traits or something like that? Is it...

1:16:18.1 LB: Good question. Let's make it the big six and put risk attitude in as number six.

1:16:21.1 SC: All right, that's good, we have a program going forward. That was very, very helpful, so Lara Buchak, thanks so much for being on the Mindscape podcast.

1:16:27.4 LB: Thanks so much Sean.

6 thoughts on “220 | Lara Buchak on Risk and Rationality”

  1. Consistency#1
    No surprise there.
    We want to be consistent with statements we’ve made, stands we’ve taken, and actions we’ve performed.

  2. Pingback: Sean Carroll's Mindscape Podcast: Lara Buchak on Risk and Rationality - 3 Quarks Daily

  3. Hi, Sean and Lara,

    Very interesting. I especially liked the revelation about possible shared values among people with much-different ideas on how to handle the pandemic at the end of the podcast.

    Seems to me that the choice about 89% – 10% – 1% should have a total-money input into the analysis. For example, when I posed the choice to my nephew as you did on the show:
    89% $1,000,000
    10% $5,000,000
    1% zero

    vs. a certain $1,000,000,

    he immediately responded he’d take the 89/10/1 (I immediately took the certain $1M, by the way).

    I thought about it for a minute, and changed the parameters on him. New choice:
    89% $1,000,000,000
    10% $5,000,000,000
    1% zero

    Now he immediately took the certain $1 billion, saying there’s no way he could spend all that, so why not get the certain result. I took the certain measly $1 million.

    However, for me, if the numbers were
    89% $1,000
    10% $5,000
    1% zero

    I would instantly take the 89/10/1.

    Get it?

    Side note: When I initially posed the problem to my nephew, he misunderstood, and thought that there was a 1% chance that he would be **killed** instead of receiving the money! Talk about not prioritizing the least-likely outcome!!!

    /Steve Denenberg

  4. I’m wondering if she is proposing that we have essentially one risk curve that is used for all questions, at least for a certain time period, as I imagine one’s appetite for risk changing over time. That seems implicit here.

    Given one risk curve, it is hard to see from the examples, which are somewhat narrow, that it would do a better job of modeling rational decisions. In other words, has Lara or others looked at this empirically to see whether, with such a curve, we get closer to people’s true preferences over a wider set of questions?

  5. The end bit about the pandemic really summed it up for me. We DID share values; we just didn’t apply equal weights to the worst and best case scenarios. So we acted as if there was conflict. I suspect we do this a lot. This kind of understanding could really help.

    Great guest!

  6. Around 44:00, isn’t the problem to do with absolute vs. relative amounts?

    First Scenario – The $10 Difference.

    Lose $00 vs Gain $10
    Lose $10 vs Gain $20
    Lose $20 vs Gain $30

    Lose $1 000 000 vs Gain $1 000 010

    The early bets, especially the first 🙂, might be worth taking; the last bet, maybe not.

    Scenario 2 – Re-Presented as a Multiplier

    Lose $1 vs Gain $1 000
    Lose $2 vs Gain $2 000

    Lose $10 vs Gain $10 000

    Lose $1 000 vs Gain $1 000 000

    I think I’d take all of those.

    I don’t know about the technicalities but this doesn’t seem that controversial, let alone impossible.

