285 | Nate Silver on Prediction, Risk, and Rationality

Being rational necessarily involves engagement with probability. Given two possible courses of action, it can be rational to prefer the one that could possibly result in a worse outcome, if there's also a substantial probability of an even better one. But one's attitude toward risk -- averse, tolerant, or even seeking -- also matters. Do we work to avoid the worst possible outcome, even if there is potential for enormous reward? Nate Silver has long thought about probability and prediction, from sports to politics to professional poker. In his new book On the Edge: The Art of Risking Everything, Silver examines a set of traits characterizing people who welcome risk.


Support Mindscape on Patreon.

Nate Silver received a B.A. in economics from the University of Chicago. He worked as a baseball analyst, developing the PECOTA statistical system (Player Empirical Comparison and Optimization Test Algorithm). He later founded the FiveThirtyEight political polling analysis site. His first book, The Signal and the Noise, was awarded the Phi Beta Kappa Society Book Award in Science. He is the co-host (with Maria Konnikova) of the Risky Business podcast.

6 thoughts on “285 | Nate Silver on Prediction, Risk, and Rationality”

  1. Pingback: Sean Carroll's Mindscape Podcast: Nate Silver on Prediction, Risk, and Rationality - 3 Quarks Daily

  2. EA utilitarianism is a quasi-religious cult. It’s based on faith that the EA community knows what’s best for humanity. It is replete with personal subjective value judgments which characterize certain things as having “utility” and others as having less utility or value. There is no objective utility in the universe. Utility is a subjective
    concept. For a serial killer, the victims have utility only because the serial killer enjoys killing them. There is no meaningful definition of utility, as utility is a concept that relates to a goal, and humans don’t have any universally shared goals. Nate Silver has drunk some of the EA Kool-Aid. Although he hasn’t swallowed all of it, he has absorbed enough that he believes the mythical ideas of AI doom, and seems to believe in the dawn of AGI superintelligence, an undefinable, godlike, omnipotent threat to the human race. AGI doesn’t exist, and no one has even been able to define it in a meaningful way. As a result, no one is working on it (even though they think they are), because they don’t know what it is or how or where to start. They believe in computational functionalism and that AGI will just magically “emerge” as AI computational powers are increased. There is not an iota of evidence for this faith.

  3. I am 100% sure that Elon Musk and Sam Bankman-Fried have had nothing but their own self-interests in mind in any causes they have supported, businesses they have started, or projects they have been involved in.

  4. It’s virtually impossible to have a meaningful discussion about ‘Prediction, Risk, and Rationality’ without invoking Bayes’ Theorem, named after the 18th-century British mathematician Thomas Bayes. Bayes’ Theorem is a mathematical formula for determining conditional probability: the likelihood of an outcome occurring given that another, related outcome has already occurred. It provides a way to revise existing predictions or theories (update probabilities) in light of new evidence.
    The video posted below, ‘The Bayesian Trap’, is a good introduction to the topic, and it cautions that we must remain open-minded and willing to adjust our way of thinking if we are not satisfied with the results of our actions.

    https://www.youtube.com/watch?v=R13BD8qKeTg
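
    A minimal worked example of the update described above (the numbers are invented purely for illustration): suppose a condition has a 1% base rate, a test detects it 90% of the time, and it gives a false positive 5% of the time. Bayes’ Theorem gives the probability of the condition given a positive test:

        P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ¬H) P(¬H)]
                 = (0.9 × 0.01) / (0.9 × 0.01 + 0.05 × 0.99)
                 ≈ 0.15

    Even after a positive result, the posterior is only about 15%, because the low prior dominates; that base-rate effect is exactly the trap the video warns about.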

  5. I was catching up on some episodes and greatly enjoyed Quiggin and Acemoglu, but then had the misfortune to blunder straight into this wafer-thin garbage. Sad.
    I hope that one day you will speak to the humane and excellent Ha-Joon Chang, a man who appears to exist only to dispel bullshit economic theology, unlike this self-aggrandising crank.

  6. I didn’t catch all of this, but I wonder if utility is logarithmic in shape. I would not want to risk a sure 100 for a coin flip between 0 and 200+ε. That suggests to me that utility does not scale linearly.
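
    A minimal sketch of that intuition, assuming the 100, 0, and 200+ε are amounts of wealth and utility is logarithmic, u(w) = ln(w):

        u(100) = ln(100) ≈ 4.61
        E[u] of the 50/50 gamble = ½ ln(0) + ½ ln(200 + ε) = −∞

    Because ln(0) diverges to −∞, no finite bonus ε makes a gamble that risks total ruin beat the sure thing under logarithmic utility, which is one way utility can fail to scale linearly.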

Comments are closed.
