220 | Lara Buchak on Risk and Rationality

Life is rich with moments of uncertainty, where we're not exactly sure what's going to happen next. We often find ourselves in situations where we have to choose between different kinds of uncertainty; maybe one option is very likely to have a "pretty good" outcome, while another has some probability for "great" and some for "truly awful." In such circumstances, what's the rational way to choose? Is it rational to go to great lengths to avoid choices where the worst outcome is very bad? Lara Buchak argues that it is, thereby expanding and generalizing the usual rules of rational choice in conditions of risk.


Lara Buchak received a Ph.D. in philosophy from Princeton University. She is currently a professor of philosophy at Princeton. Her research interests include decision theory, social choice theory, epistemology, ethics, and the philosophy of religion. She was the inaugural winner of the Alvin Plantinga Prize of the American Philosophical Association. Her book Risk and Rationality proposes a new way of dealing with risk in rational-choice theory.

6 thoughts on “220 | Lara Buchak on Risk and Rationality”

  1. Consistency#1
    No surprise there.
    We want to be consistent with statements we’ve made, stands we’ve taken, and actions we’ve performed.

  2. Pingback: Sean Carroll's Mindscape Podcast: Lara Buchak on Risk and Rationality - 3 Quarks Daily

  3. Hi, Sean and Lara,

    Very interesting. I especially liked the revelation, at the end of the podcast, about possible shared values among people with very different ideas on how to handle the pandemic.

    Seems to me that the 89%–10%–1% choice should take the total amount of money into account. For example, when I posed the choice to my nephew as you did on the show:
    89% $1,000,000
    10% $5,000,000
    1% zero

    vs. a certain $1,000,000,

    he immediately responded he’d take the 89/10/1 (I immediately took the certain $1M, by the way).

    I thought about it for a minute, and changed the parameters on him. New choice:
    89% $1,000,000,000
    10% $5,000,000,000
    1% zero

    Now he immediately took the certain $1 billion, saying there’s no way he could spend all that, so why not get the certain result. I took the certain measly $1 million.

    However, for me, if the numbers were
    89% $1,000
    10% $5,000
    1% zero

    I would instantly take the 89/10/1.

    Get it?

    Side note: When I initially posed the problem to my nephew, he misunderstood, and thought that there was a 1% chance that he would be **killed** instead of receiving the money! Talk about not prioritizing the least-likely outcome!!!

    /Steve Denenberg
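The arithmetic behind Steve's three versions is easy to check: the gamble's expected monetary value beats the certain option at every scale, which is exactly why the flipping preferences are interesting. A quick sketch (plain expected value only; it deliberately ignores the diminishing usefulness of money that drives the choices above):

```python
# Expected monetary value of the 89/10/1 gamble at each scale
# discussed above, versus the certain payoff of 1x the scale.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

for scale in (1_000, 1_000_000, 1_000_000_000):
    gamble = [(0.89, scale), (0.10, 5 * scale), (0.01, 0)]
    ev = expected_value(gamble)
    # EV = 0.89*scale + 0.5*scale = 1.39 * scale, at every scale
    print(f"certain {scale:>13,} vs gamble EV {ev:>15,.0f}")
```

Since the gamble's expected value is always 1.39 times the certain amount, expected value alone can't explain why a rational person takes the gamble at $1,000 but the sure thing at $1 billion — which is the opening for a theory like Buchak's.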

  4. I’m wondering if she is proposing that we have essentially one risk curve that is used for all questions, at least within a certain time period, since I imagine one’s appetite for risk changes over time. That seems implicit here.

    Given one risk curve, it is hard to see from the examples, which are somewhat narrow, that it would do a better job of modeling rational decisions. In other words, have Lara or others looked at this empirically, to see whether such a curve gets closer to people’s true preferences over a wider set of questions?
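For readers wondering what "one risk curve" amounts to concretely: Buchak's risk-weighted expected utility orders a gamble's outcomes from worst to best and weights each utility *improvement* by r(p), where p is the probability of doing at least that well and r is the agent's risk function. A minimal sketch (the quadratic risk function below is a standard illustration of risk avoidance, not a value Buchak endorses):

```python
# Sketch of risk-weighted expected utility (REU): start from the
# worst outcome's utility, then add each step up in utility weighted
# by r(probability of getting at least that much).

def reu(outcomes, r):
    """outcomes: list of (probability, utility) pairs; r: risk function."""
    outcomes = sorted(outcomes, key=lambda po: po[1])  # worst first
    total = outcomes[0][1]  # you are guaranteed at least the worst utility
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for p, _ in outcomes[i:])
        total += r(p_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

risk_neutral = lambda p: p        # recovers ordinary expected utility
risk_avoidant = lambda p: p ** 2  # discounts merely-possible gains

coin_flip = [(0.5, 0), (0.5, 100)]
print(reu(coin_flip, risk_neutral))   # 50.0: plain expected utility
print(reu(coin_flip, risk_avoidant))  # 25.0: the 50/50 upside counts less
```

With one fixed r, the same curve applies to every gamble, which is the commenter's point: whether a single, stable risk function fits a person's choices across many questions is an empirical matter.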

  5. The end bit about the pandemic really summed it up for me. We DID share values; we just didn’t apply equal weights to the worst- and best-case scenarios. So we acted as if there were a conflict. I suspect we do this a lot. This kind of understanding could really help.

    Great guest!

  6. Around 44:00, isn’t the problem one of absolute vs. relative amounts?

    First Scenario – The $10 Difference.

    Lose $0 vs Gain $10
    Lose $10 vs Gain $20
    Lose $20 vs Gain $30

    Lose $1 000 000 vs Gain $1 000 010

    The early bets, especially the first 🙂, might be worth taking – the last bet, maybe not.

    Scenario 2 – Re-Presented as a Multiplier

    Lose $1 vs Gain $1 000
    Lose $2 vs Gain $2 000

    Lose $10 vs Gain $10 000

    Lose $1 000 vs Gain $1 000 000

    I think I’d take all of those.

    I don’t know about the technicalities but this doesn’t seem that controversial, let alone impossible.
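The comment doesn't state the odds of each bet, so here the losses and gains are assumed to be a 50/50 coin flip. Under that assumption the arithmetic makes the absolute-vs-relative point vivid: every bet in the first ladder has the identical expected value of +$5, yet the last one feels very different from the first.

```python
# Expected values of the two bet ladders above, assuming (an
# assumption -- the odds aren't given) a 50/50 chance of each side.

def ev_5050(loss, gain):
    return 0.5 * -loss + 0.5 * gain

# Scenario 1: fixed $10 edge -- EV is +$5 regardless of the stakes.
for loss, gain in [(0, 10), (10, 20), (20, 30), (1_000_000, 1_000_010)]:
    print(f"lose {loss:,} / gain {gain:,}: EV = {ev_5050(loss, gain):,.1f}")

# Scenario 2: 1000x multiplier -- EV grows with the stake.
for loss, gain in [(1, 1_000), (2, 2_000), (10, 10_000), (1_000, 1_000_000)]:
    print(f"lose {loss:,} / gain {gain:,}: EV = {ev_5050(loss, gain):,.1f}")
```

Expected value treats "lose $1,000,000 vs gain $1,000,010" the same as "lose $0 vs gain $10"; a risk-weighting approach, which penalizes gambles by how bad the downside is, can separate them.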

Comments are closed.
