Non-Normalizable Probability Measures for Fun and Profit

Here’s a fun logic puzzle (see also here; originally found here). There’s a family resemblance to the Monty Hall problem, but the basic ideas are pretty distinct.

An eccentric benefactor holds two envelopes, and explains to you that they each contain money; one has twice as much cash as the other. You are encouraged to open one, and you find $4,000 inside. Now your benefactor — who is a bit eccentric, remember — offers you a deal: you can either keep the $4,000, or you can trade for the other envelope. Which do you choose?

If you’re a tiny bit mathematically inclined, but don’t think too hard about it, it’s easy to jump to the conclusion that you should definitely switch. After all, there seems to be a 50% chance that the other envelope contains $2,000, and a 50% chance that it contains $8,000. So your expected value from switching is the average of what you might receive — ($2,000 + $8,000)/2 = $5,000 — minus the $4,000 you give up, for an expected net gain of $1,000. Pretty easy choice, right?
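
To see how seductive the argument is, here it is as a couple of lines of Python. This is a sketch of the flawed reasoning itself, not a model of the actual game: it hard-codes the assumption of a 50/50 split between “half” and “double” no matter what amount you observe.

```python
def naive_expected_gain(observed):
    """The 'always switch' argument: assume the other envelope is equally
    likely to hold half or double of whatever you observed."""
    expected_other = 0.5 * (observed / 2) + 0.5 * (2 * observed)
    return expected_other - observed

print(naive_expected_gain(4000))  # 1000.0 -- and it comes out positive for any observed amount
```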

A moment’s reflection reveals a puzzle. The logic that convinces you to switch would have worked perfectly well no matter what had been in the first envelope you opened. But that original choice was completely arbitrary — you had an equal chance to choose either of the envelopes. So how could it always be right to switch after the choice was made, even though there is no Monty Hall figure who has given you new inside information?

Here’s where the non-normalizable measure comes in, as explained here and here. Think of it this way: imagine that we tweaked the setup by positing that one envelope had 100,000 times as much money as the other one. Then, upon opening the first one, you found $100,000 inside. Would you be tempted to switch?

I’m guessing you wouldn’t, for a simple reason: the two alternatives are that the other envelope contains $1 or $10,000,000,000, and they don’t seem equally likely. Eccentric or not, your benefactor is more likely to be risking one dollar as part of a crazy logic game than to be risking ten billion dollars. This seems like something of an extra-logical cop-out, but in fact it’s exactly the opposite; it takes the parameters of the problem very seriously.

The issue in this problem is that there can’t be a uniform probability distribution for the amount of money in the envelopes that stretches from zero to infinity. The total probability has to be normalized to one, which means that there can’t be an equal probability (no matter how small) for every possible initial value. Like it or not, you have to pick some initial probability distribution for how much money was in the envelopes — and if that distribution is finite (“normalizable”), you can extract yourself from the original puzzle.
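
To spell out the obstruction in symbols (a quick check, treating the amount of money as a continuous variable for simplicity): a uniform prior over all positive amounts would have to satisfy both of

$$
p(x) = c \ \text{ for all } x > 0,
\qquad
\int_0^\infty p(x)\,dx = 1,
$$

but the integral of a constant over the positive reals diverges for every c > 0 and vanishes for c = 0, so no constant density can be normalized. A discrete assignment of equal probability to every possible dollar amount fails for the same reason.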

We can make it more concrete. In the initial formulation of the problem, where one envelope has twice as much money as the other one, imagine that your assumed probability distribution is the following: it’s equally probable that the envelope with less money has any possible amount between $1 and $10,000. You see immediately that this changes the problem: namely, if you open the first envelope and find some amount between $10,001 and $20,000, you should absolutely not switch! Whereas, if you find $10,000 or less, there is a good argument for switching. But now it’s clear that you have indeed obtained new information by opening the first envelope; you can compare what was in that envelope to the assumed probability distribution. That particular probability distribution makes the point especially clear, but any well-defined choice will lead to a clear answer to the problem.
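
Here is a minimal Monte Carlo sketch of that concrete setup, just to check the intuition numerically. The distribution and the $10,000 threshold are the ones assumed above; the function names are illustrative and the quoted payoffs are approximate.

```python
import random

def play(strategy, trials=200_000):
    """Smaller amount uniform on $1..$10,000; the two envelopes hold x and 2x dollars."""
    total = 0
    for _ in range(trials):
        x = random.randint(1, 10_000)
        envelopes = [x, 2 * x]
        random.shuffle(envelopes)
        opened, other = envelopes
        total += other if strategy(opened) else opened
    return total / trials

never_switch  = lambda amount: False
always_switch = lambda amount: True
use_the_prior = lambda amount: amount <= 10_000  # switch only when you might be holding the smaller envelope

print(round(play(never_switch)))   # ~7,500 per game
print(round(play(always_switch)))  # ~7,500 -- blind switching gains nothing on average
print(round(play(use_the_prior)))  # ~9,400 -- conditioning on the assumed prior does pay off
```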

66 thoughts on “Non-Normalizable Probability Measures for Fun and Profit”

  1. Do you mean for the assumed probability distribution in your last paragraph to go from $1 to $20,000, not $1 to $10,000 as you currently state it?

  2. No; if the envelope with less money is between $1 and $10,000, the envelope with more money would be between $2 and $20,000.

  3. This sounds similar to the reasons that many rational actors shouldn’t insure against “infinite” losses.

    Take for example the present situation in the Gulf. What additional amount of money should BP have spent to avoid this gushing oil well? If there was only an extremely small chance that this well would completely destroy the Gulf and Atlantic ecosystems, there would have been no reason to spend (the inverse of switching envelopes) a tremendous amount of money just to add a little more safety, because the company would not survive the lawsuits (keeping the envelope) anyway.

  4. Delightful problem!

    What you are doing is showing that the practical problem is NOT equivalent to the following theoretical problem: you have two envelopes which have a number in them. One number is X and the other one is 2X. I offer you Y dollars right now. If you switch and you get the envelope with X, you get Y/2 dollars. Else if you switch and get 2X, you get 2Y dollars. You have no way of knowing if you have envelope X or 2X.

  5. I can’t decide if I should say “$10,002” or just accept that between is exclusive and inclusive in this case.

  6. I just thought about it a bit more: this is really a conditional probability problem, isn’t it? The question really is “what is the probability that I have the larger amount given that the upper bound is approximately X dollars”, isn’t it?

  7. Pingback: 17 May 2010 (noonish) « blueollie

  8. I always loved that problem. I came across it in high school, and couldn’t get over the fact that it was so clearly and obviously better to switch, but equally clear and obvious that it couldn’t matter. I didn’t figure it out until I decided I would write a little computer program to try it a bunch of times and figure out which was right. The instant I started working on the simulation, it became obvious 🙂

    It is also a great example of how people approach problems and especially disagreements. I’ve given the problem to groups of people before, and it’s amazing how hard it is to get people to refute one argument once they’ve settled on the second. Once people take sides, it can be incredibly hard to make them address the counter argument, other than by saying “I’m right, and so you must be wrong”. But of course, when both sides are “equally” right, you get people just repeating what they’ve said and totally shutting out the counter argument.

  9. I posted the following at the original blog:

    You have to identify the random events first.

    I have two envelopes: red and blue, and I randomly pick one envelope to put the higher amount in. This is the random event that happens first. (Let’s say I picked blue.)

    Secondly, you pick one envelope at random, say, red. This is the second and *final* random event.

    There are *no more random events*. No matter how much you’d like to believe that “the other envelope has X% chance”, it does not, since money doesn’t teleport between envelopes while you keep changing your mind. Everything’s already been decided.

  10. Reminds me of a problem Randall Munroe posted a while back, with intriguing results. I think it resolves Ollie’s first formulation of the problem as well. It doesn’t yet sit comfortably with me, but I get the idea and other people seem plenty confident that it works.

    Randall’s post:

    This cool puzzle (and solution) comes from my friend Mike.

    Alice secretly picks two different real numbers by an unknown process and puts them in two (abstract) envelopes. Bob chooses one of the two envelopes randomly (with a fair coin toss), and shows you the number in that envelope. You must now guess whether the number in the other, closed envelope is larger or smaller than the one you’ve seen.

    Is there a strategy which gives you a better than 50% chance of guessing correctly, no matter what procedure Alice used to pick her numbers?

    I initially thought there wasn’t, or that the problem was paradoxically defined, but it turns out that it’s perfectly valid and the answer is “Yes.” See my first comment for an example of one such winning strategy.
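
A minimal sketch of one winning strategy of this kind, for readers who don’t want to chase the link: draw a random threshold from any distribution whose support is all of the reals, and guess that the hidden number is larger exactly when the shown number falls below the threshold. (It may or may not be the exact strategy in Randall’s linked comment; the Gaussian threshold and the test pairs below are just illustrative choices.)

```python
import random

def guess_hidden_is_larger(shown):
    """Draw a random threshold from a distribution supported on all reals; guess that the
    hidden number is larger exactly when the shown number falls below the threshold."""
    threshold = random.gauss(0, 100)  # any full-support distribution works; the scale is arbitrary
    return shown < threshold

def success_rate(a, b, trials=100_000):
    """Alice's two numbers are a and b; Bob reveals one of them with a fair coin flip."""
    wins = 0
    for _ in range(trials):
        shown, hidden = (a, b) if random.random() < 0.5 else (b, a)
        wins += guess_hidden_is_larger(shown) == (hidden > shown)
    return wins / trials

print(success_rate(17, 30))    # ~0.53 -- better than a coin flip
print(success_rate(-250, 40))  # ~0.82 -- the wider the gap on the threshold's scale, the bigger the edge
```

The edge is strictly positive for any fixed pair of distinct numbers: when the threshold happens to land between them you are guaranteed to guess correctly, and when it doesn’t you still win exactly half the time.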

  11. Suppose the problem states that the amount of money given away will not exceed N. For all cases in which the first envelope contains an amount greater than N/2, the solution is trivial; but if the first envelope contains n, with n < N/2, and the amounts are randomly determined (but constrained by N), it seems that you always benefit by switching, which is the original paradox. Have I completely missed the point?

    edit: I see the point now… I did miss it the first time:p

  12. That’s funny that you post this. We were just having a workshop over the weekend on Eternal Inflation and were arguing about this!

  13. Sean & Koray —

    Isn’t it easier to follow Keynes and to think of this problem as relating to “the weight of the evidence,” under uncertainty, rather than as a conventional “probability” problem? I don’t see how hypothesizing a “probability distribution” before you open the first envelope makes anything easier.

    All you want to know is whether, upon seeing the amount of money in the first envelope, you have any probative evidence — or are, instead, just as ignorant (uncertain) as you were before. In order to think this through, you *don’t need to “pick an initial probability distribution” for all amounts of money.* You can just count the money in the first envelope, and *then* ask yourself whether you have a good reason to favor (or disfavor) the possibility that the other envelope contains 2x that amount. That’s the only bet that matters! There is no other relevant probability, and I submit that the “initial distribution” is both total speculation and window dressing for the correct solution to the puzzle.

  14. And I think I agree with BK’s comment on Cody’s puzzle theorem:

    “…. the [proposed solution] implicitly assumes that A and B are independently drawn. The logistic distribution assumes zero is the halfway point, and that values closer to plus or minus infinity are less likely than values near zero. We assume that 10 is less likely than nine (i.e. that Alice isn’t being human). These are all probably OK, but add structure that was not stated in the problem. Given what was written, the only not-incorrect distributional assumption would be a neutral prior, like the improper uniform distribution (p(x)=1 for all x), where the CDF is hopelessly undefined, and we’re back to not having a way to guess which envelope has the bigger value. Instead, you’ve made a number of assumptions in the logistic distribution, and then used those assumptions to assign subjective probabilities. That’s a fine human way to approach the problem, but works exactly to the extent that you believe the assumptions.”

  15. There are some interesting variations on the two envelope problem which are not so easily resolved. See this post of Tim Gowers (and comments) for a discussion: http://gowers.wordpress.com/2008/02/03/probability-paradox-ii/

    Here’s a summary:

    Let there be two envelopes with amounts 10^n and 10^{n+1} dollars, and let P(n) = 2^{-n}, where n is an integer greater than zero. Here the distribution is normalizable, but it still seems like you should switch. Given that you’ve picked an envelope with, say, 10^m dollars (take m ≥ 2, so both cases are possible), then either n=m-1 (i.e. you picked the larger envelope, with prior probability 2^{-(m-1)}) or n=m (the smaller envelope, with prior probability 2^{-m}). This gives conditional probabilities of 2/3 of having the larger envelope and 1/3 of having the smaller. However, since the potential gain from switching is much bigger than the potential loss, it’s still worth it – the expected return from switching is a factor of 3.4.

    Any thoughts?
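
Spelling out the arithmetic in that summary (assuming m ≥ 2 so that both cases are possible; the 1/2 coin-flip factor for which envelope you picked is common to both cases and cancels):

$$
P(\text{holding the larger} \mid 10^m) = \frac{2^{-(m-1)}}{2^{-(m-1)} + 2^{-m}} = \frac{2}{3},
\qquad
P(\text{holding the smaller} \mid 10^m) = \frac{1}{3},
$$

so switching multiplies your holdings by 1/10 with probability 2/3 and by 10 with probability 1/3:

$$
\frac{E[\text{amount after switching}]}{10^m}
= \frac{2}{3}\cdot\frac{1}{10} + \frac{1}{3}\cdot 10
= \frac{51}{15} = 3.4 .
$$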

  16. “Like it or not, you have to pick some initial probability distribution for how much money was in the envelopes — and if that distribution is finite (“normalizable”), you can extract yourself from the original puzzle.”

    And what is the solution if the probability distribution is not finite?

  17. >>All you want to know is whether, upon seeing the amount of money in the first envelope, you have any probative evidence — or are, instead, just as ignorant (uncertain) as you were before. In order to think this through, you *don’t need to “pick an initial probability distribution” for all amounts of money.*<<

    Except that you do…any rule for deciding your state of knowledge after obtaining the information is equivalent to a choice for the prior probabilities.

  18. Nav — that’s a good extension of the puzzle. But I tend to believe the resolution that is alluded to right in the post you linked: even though that distribution is normalizable, the expected payoff it implies is infinite, so the expected-value comparison doesn’t really make sense.

  19. Strether,

    You haven’t learned anything to favor the other envelope, which is why people tend to call this a paradox because they think the “math” tells them otherwise.

    In the 10^n with 2^(-n) variant it’s the same mistake. Re-stating the problem:
    * 2nd random event: they pick red or blue at random to place the larger amount.
    * 3rd random event: you pick red or blue at random.

    I didn’t even list the 1st random event because it’s irrelevant. You can draw a decision tree and assign any probability you want for generation of amounts in envelopes, but you’ll see that all that matters is whether their pick matches yours.

  20. @21, Koray, I *think* I’m trying to say the same thing you are: The math leads people astray because they insist on creating a “probability distribution” when the problem is really one of uncertainty, in the sense J.M. Keynes described. All that matters is whether you can find some “ticket” out of the fundamental uncertainty — like the idea of the upper bound on the payoff that Sean introduced.

    @18, B Brewer, no, you actually *don’t* need to choose a probability function (1) “before” opening the first envelope or (2) for any dollar amount *other than* 2x what’s in that envelope. So the whole idea of generalized (but finite) “initial probability distribution” isn’t doing any work here.

    Or look at it another way: 300 years ago, no one would have “picked an initial probability distribution,” because they didn’t know what that was. But they could still solve this puzzle (in situations when it can be solved at all) by common sense. The solution can be dressed up in “probability” language, but you’re really just assigning math symbols to a guess (which creates an illusion of hard rationality, which generates the “paradox,” because the true nature of the guess is buried in the math, rather than helpfully exposed by it).

  21. Interesting example; maybe this can be phrased as an anomaly: the symmetry between the two initial envelopes is broken by any attempt to regulate the problem. Not sure what this says about eternal inflation, where the benefactor with unlimited resources is the spatially infinite universe; maybe assuming it is strictly infinite makes no logical sense?

  22. There would be a natural upper bound on the possibilities: there are only a finite number of dollars in circulation. And even before we hit that limit, only a certain number of bills could fit inside an envelope of a given size (ask any drug dealer).

