Boltzmann’s Universe

CV readers, ahead of the curve as usual, are well aware of the notion of Boltzmann’s Brains — see e.g. here, here, and even the original paper here. Now Dennis Overbye has brought the idea to the hoi polloi by way of the New York Times. It’s a good article, but I wanted to emphasize something Dennis says quite explicitly, but (from experience) I know that people tend to jump right past it in their enthusiasm:

Nobody in the field believes that this is the way things really work, however.

The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy. According to this argument, in such a universe you would see every kind of statistical fluctuation, and small fluctuations in entropy would be enormously more frequent than large fluctuations. Our universe is a very large fluctuation (see previous post!) but a single brain would only require a relatively small fluctuation. In the set of all such fluctuations, some brains would be embedded in universes like ours, but an enormously larger number would be all by themselves. This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true. (For arguments along these lines, see papers by Dyson, Kleban, and Susskind, or Albrecht and Sorbo.)
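The exponential preference for small fluctuations over large ones can be made concrete in a toy model that is not part of the original argument but captures its logic: take N fair coins, where every microstate (sequence of heads and tails) is equally likely, so the probability of a macrostate with k heads is proportional to its number of microstates, e^S with S = ln C(N, k). Macrostates with a larger entropy deficit ΔS below equilibrium are then suppressed like e^(-ΔS). A minimal sketch (the choices N = 100 and the sample values of k are arbitrary):

```python
import math

# Toy model: N fair coins. All 2**N microstates are equally likely, so the
# probability of the macrostate "k heads" is C(N, k) / 2**N, i.e. proportional
# to exp(S) with Boltzmann entropy S = ln C(N, k) (units with k_B = 1).
N = 100

def entropy(k):
    """Boltzmann entropy of the macrostate with k heads."""
    return math.log(math.comb(N, k))

S_max = entropy(N // 2)  # equilibrium macrostate: 50 heads

for k in (50, 40, 25, 0):
    dS = S_max - entropy(k)            # entropy deficit of the fluctuation
    prob = math.comb(N, k) / 2**N      # exactly exp(-dS) * P(equilibrium)
    print(f"k = {k:3d}   dS = {dS:6.2f}   P = {prob:.3e}")
```

Even in this tiny system, the fluctuation to k = 40 (ΔS ≈ 2) is common while the fluctuation to k = 0 (ΔS ≈ 66) essentially never happens; scaled up to brain-sized versus universe-sized entropy deficits, the disparity becomes the double-exponential suppression at the heart of the argument.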

I tend to find this kind of argument fairly persuasive. But the bit about “a typical observer” does raise red flags. In fact, folks like Hartle and Srednicki have explicitly argued that the assumption of our own “typicality” is completely unwarranted. Imagine, they say, two theories of life in the universe, which are basically indistinguishable, except that in one theory there is no life on Jupiter and in the other theory the Jovian atmosphere is inhabited by six trillion intelligent floating Saganite organisms.

In the second theory, a “typical” intelligent observer in the Solar System is a Jovian, not a human. But I’m a human. Have we therefore ruled out this theory? Pretty clearly not. Hartle and Srednicki conclude that it’s incorrect to imagine that we are necessarily typical; we are who we observe ourselves to be, and any theory of the universe that is compatible with observers like ourselves is just as good as any other such theory.

This is an interesting perspective, and the argument is ongoing. But it’s important to recognize that there is a much stronger argument against the idea that Boltzmann’s Brains were originally invented to counter — that our universe is just a statistical fluctuation around an equilibrium background. We might call this the “Boltzmann’s Universe” argument.

Here’s how it goes. Forget that we are “typical” or any such thing. Take for granted that we are exactly who we are — in other words, that the macrostate of the universe is exactly what it appears to be, with all the stars and galaxies etc. By the “macrostate of the universe,” we mean everything we can observe about it, but not the precise position and momentum of every atom and photon. Now, you might be tempted to think that you reliably know something about the past history of our local universe — your first kiss, the French Revolution, the formation of the cosmic microwave background, etc. But you don’t really know those things — you reconstruct them from your records and memories right here and now, using some basic rules of thumb and your belief in certain laws of physics.

The point is that, within this hypothetical thermal equilibrium universe from which we are purportedly a fluctuation, there are many fluctuations that reach exactly this macrostate — one with a hundred billion galaxies, a Solar System just like ours, and a person just like you with exactly the memories you have. And in the hugely overwhelming majority of them, all of your memories and reconstructions of the past are false. In almost every fluctuation that creates universes like the ones we see, both the past and the future have a higher entropy than the present — downward fluctuations in entropy are unlikely, and the larger the fluctuation the more unlikely it is, so the vast majority of fluctuations to any particular low-entropy configuration never go lower than that.
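The claim that a typical fluctuation to a given low-entropy state sits at its entropy minimum, with higher entropy both before and after, can be checked numerically in a standard toy model. The sketch below uses the Ehrenfest urn (my choice of illustration, not anything from the argument above): N balls shuffled between two urns, with entropy S(m) = ln C(N, m) peaking at m = N/2. We run a long equilibrium trajectory and, at every visit to a low-entropy state, ask how often the system arrived from the higher-entropy side rather than from somewhere even lower:

```python
import random

random.seed(0)

# Ehrenfest urn: N balls in two urns; each step one randomly chosen ball
# switches urns. m = number of balls in urn A; entropy S(m) = ln C(N, m)
# is maximal at m = N/2, and m = 16 is a rare low-entropy fluctuation.
N, TARGET, STEPS = 20, 16, 2_000_000

m = N // 2
visits = from_higher_entropy = 0
for _ in range(STEPS):
    prev = m
    # a ball moves into urn A with probability (N - m)/N, out with m/N
    m += 1 if random.random() < (N - m) / N else -1
    if m == TARGET:
        visits += 1
        if prev == TARGET - 1:  # arrived from nearer equilibrium
            from_higher_entropy += 1

print(f"visits to m = {TARGET}: {visits}")
print(f"arrived from the higher-entropy side: {from_higher_entropy / visits:.2%}")
```

Detailed balance predicts that about 80% of the visits to m = 16 arrive from m = 15 (the higher-entropy side), and by the same reasoning about 80% immediately return there; so the large majority of fluctuations to this state never go any lower, just as the paragraph above says.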

Therefore, this hypothesis — that our universe, complete with all of our records and memories, is a thermal fluctuation around a thermal equilibrium state — makes a very strong prediction: that our past is nothing like what we reconstruct it to be, but rather that all of our memories and records are simply statistical flukes created by an unlikely conspiracy of random motions. In this view, the photograph you see before you used to be yellow and wrinkled, and before that was just a dispersed collection of dust, before miraculously forming itself out of the chaos.

Note that this scenario makes no assumptions about our typicality — it assumes, to the contrary, that we are exactly who we (presently) perceive ourselves to be, no more and no less. But in this scenario, we have absolutely no right to trust any of our memories or reconstructions of the past; they are all just a mirage. And the assumptions that we make to derive that conclusion are exactly the assumptions we really do make to do conventional statistical mechanics! Boltzmann taught us long ago that it’s possible for heat to flow from cold objects to hot ones, or for cream to spontaneously segregate itself away from a surrounding cup of coffee — it’s just very unlikely. But when we say “unlikely” we have in mind some measure on the space of possibilities. And it’s exactly that assumed measure that would lead us to conclude, in this crazy fluctuation-world, that all of our notions of the past are chimeric.

Now, just like Boltzmann’s Brain, nobody believes this is true. In fact, you can’t believe it’s true, by any right. All of the logic you used to tell that story, and all of your ideas about the laws of physics, depend on your ability to reliably reconstruct the past. This scenario, in other words, is cognitively unstable; useful as a rebuke to the original hypothesis, but not something that can stand on its own.

So what are we to conclude? That our observed universe is not a statistical fluctuation around a thermal equilibrium state. That’s very important to know, but doesn’t pin down the truth. If the universe is eternal, and has a maximum value for its entropy, then it would (almost always) be in thermal equilibrium. Therefore, either it’s not eternal, or there is no state of maximum entropy. I personally believe the latter, but there’s plenty of work to be done before we have any of this pinned down.

This entry was posted in Science, Time.

100 Responses to Boltzmann’s Universe

  1. Pingback: Chrononautic Log 改 » Blog Archive » And who was I talking to about Boltzmann brains?

  2. Pieter Kok says:

    Very interesting!

    Just one pedantic point, though: hoi polloi means “the people”, and hoi is the article. You should therefore write either “to the polloi” or “to hoi polloi”, but not “to the hoi polloi”.

  3. lylebot says:

    This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true.

    I know this isn’t the point of your post, so apologies for nitpicking, but I guess it seems to me that this argument is ignoring a great deal of information that would allow us to conclude that we are not Boltzmann brains despite Boltzmann brains being common. For example, conditioning on the facts that we were born of mothers and fathers that have brains, and that in principle the genes that govern brain formation can be identified, and that we can see the brain developing from fetus to adult, it seems incredibly unlikely that we could be Boltzmann brains.

  4. my one cent says:

    Forgive me if this has been covered in some previous post, and perhaps there is a subtlety in this argument that I am missing, but this sort of picture/analogy doesn’t sit well with me…

Let’s throw out “brains” and “universes” for a minute and just imagine a single star. Say this star formed from an initially homogeneous universe filled with hydrogen gas. Now, that star could have just been formed by a random quantum fluctuation a few seconds ago that arranged all the hydrogen atoms correctly, but that is extremely unlikely. Instead it would be much easier for a quantum fluctuation to create a small overdensity in one location, that became unstable, drew in matter from around it and eventually formed a star. Thus in this case it is not necessarily true that the past of the star is more likely to be a mirage than not.

A similar case could be made for life and brains. Creating a random brain in space is unlikely. Without doing any math, it seems to me that it could easily be more likely that a brain appears by first having a small overdensity in one location, that forms a solar system (okay, maybe a few stars form before to make the other elements), which has a planet that evolves life, etc.

In either case, it seems to me that the past is less likely to be a mirage than not, and that random brains are less likely than real brains. I therefore think this particular way of describing issues with entropy in multiverse/anthropic universes may be apt to create unnecessary confusion. Perhaps better analogies would be ones where the total entropy is easier to quantify and compare, or maybe I’m just easily confused?

  5. Ken says:

    As an observational researcher, sometimes the theory side of the work makes my head spin. I can only assume that is a consequence of the differing ways theorists and observationalists go about solving a puzzle. In the short term, it sure will be nice when LHC goes online to start providing data which can be used as ammunition in these debates!

  6. Sean says:

    lylebot, this is basically the point of the post — if the universe is a fluctuation around thermal equilibrium, then no matter what you condition on concerning our present state (including literally everything we know about it), it is overwhelmingly likely that it is a random fluctuation from a higher-entropy past. Even if we have memories apparently to the contrary!

Think of it this way: consider a half-melted ice cube in a glass of water. We think that it’s much more likely that five minutes ago it was a completely unmelted cube, rather than a homogeneous glass of water out of which the half-melted cube spontaneously arose. However, that’s only because we know we are nowhere near thermal equilibrium. If the glass were a closed system that lasted forever — i.e., much longer than the Poincare recurrence time — then we would much more often find spontaneous half-melted cubes than ones that arose “normally” from (lower-entropy) unmelted ones.

    my one cent, a similar argument applies to your question. It’s utterly unfair, given the hypothesis of a universe in thermal equilibrium, to start with a homogeneous gas — in a theory with gravity, that’s a dramatically low-entropy state!

    All of these arguments are simply the same arguments that usually imply that entropy will increase to the future, except run to the past — which is the wrong thing to do in the usual picture where we assume a low-entropy past boundary condition, but absolutely the right thing to do if the universe is in thermal equilibrium.

  7. Jeff Harvey says:

    “When you break an egg and scramble it you are doing cosmology,” said Sean Carroll, a cosmologist at the California Institute of Technology.

    When I break an egg and scramble it I’m making breakfast. I guess that is
    the difference between cosmologists and particle physicists.

  8. Peter Woit says:


    “it sure will be nice when LHC goes online to start providing data which can be used as ammunition in these debates!”

    No data that comes out of the LHC (or out of anywhere else) will have anything to do with this debate. That’s why most serious scientists see this as, to quote Overbye, “further evidence that cosmologists… have finally lost their minds.”

  9. kelley elkins says:

This is all very left brain… rational, logical, and within the confines of time. So, to anyone with half the facility of a right brain, i.e., creativity, emotions, the arts… this is totally without much merit.
    We could even say ” left brain words are at best an honest lie”… and yet we garble on in hopes for some recognition or approval. which in turn means we’ve said nothing. In the right brain none of this matters, because the right brain can create universes faster than the left brain can de-construct them.
    However, this is all very well written brain salad, complete with entropy and thermal this and that. I recommend “Dynamics of Time and Space” by Tarthang Tulku, 1994, Dharma Publishing. After all, there is nothing outside of ourselves. We are continuously making it all up and validating ourselves and each other as we do it. Until we are willing to go inside and see/feel/know the “creative process” we haven’t much to say.
    And about the time all this is figured out it will be recognized that it has changed or evaporated…it is much like locating the edge of an electron when in fact the electron really isn’t there. Or is it? And if it is there, where did it come from and where did it go? It quickly becomes a left brain chicken and egg..or if you prefer, Schrodinger’s cat.
    Truly, lots of fun and certainly not to be taken seriously.

  10. George Musser says:

    Sean, what do you think of the quote from Bousso to the effect that very low probability events can be discounted altogether?

  11. Chemicalscum says:

    Sean you said:

If the universe is eternal, and has a maximum value for its entropy, then it would (almost always) be in thermal equilibrium.

Sorry for being a dumb chemist, but would it not be the case that if the maximum value for entropy was asymptotically approached at infinity, starting from a low-entropy boundary condition, then the universe would never be in thermal equilibrium?

  12. Mike says:

    Sean, you repeat an argument I’ve heard you give before: if we assume the universe is a random fluctuation, then it’s far more likely that the universe just formed, and that all historical evidence suggesting otherwise is coincidental, while it’s far less likely that the universe followed the history it appears to have.

    However, I wonder if something is missing from this argument. First, as I understand it, the entropy counting follows traditional entropy definitions, which count the number of accessible states at a given energy. The larger entropy configuration is more likely because there are a larger number of possible states. But this conclusion is based on the states being considered ‘equivalent’. Yet all universes with the entropy of the present universe are NOT equivalent. For example, the overwhelming majority of these will not conspire to present a false history.

    Furthermore, I don’t think there’s a one-to-one mapping between present states and early universe states. In some sense, information is ‘created’ as the universe evolves. That is, (I think) a fully specified quantum state for the very early universe can result in a large number of fully specified quantum states for later universes. These later universes will have galaxies and planets and maybe intelligent life, but details need not be like the details of our universe; however all will present a sensible history to any observers. What I think this means is, counting states for the early universe somehow “undercounts” since the possible “histories” far exceeds the number of initial states. Add this to the above observation that counting states for the present universe by far “overcounts,” since most of these states don’t conspire to present a sensible history, and I think the comparison between the likelihood of these possibilities is not as straightforward as it at first seems.

  13. Neil B. says:

    Just curious, umm, how does anyone “get off the ground” what’s supposed to be existing anyway (and that’s not even a clear strictly logical concept) whether it’s “just this” or the “multiverse”? I mean, the relative population of various universes, what the laws about the chance of laws are etc, what in the world, so to speak, are you going on? I know, that’s not strictly the problem as presented, but I gather that does affect how one is going to start thinking about it. (i.e., I assume the argument isn’t just simply about what to expect about statistical fluctuation given the universe/laws as is/known, correct me if wrong.)

  14. Pingback: Not Even Wrong » Blog Archive » Have Cosmologists Lost Their Brains?

  15. Moshe says:

    George, Raphael is quoted as saying “anytime your measure predicts that something we see has extremely small probability, you can throw it out”. I think the “it” refers to the probability measure, not the event. For what it’s worth, I am slightly uncomfortable with the methodology expressed in that sentence…There is also the independently interesting issue of what to make of exponentially small probabilities, and whether or not the concept of probabilities makes sense for them.

  16. John Merryman says:

When you scramble an egg, its ordered state goes from present to past, so the arrow of time for the order points to the past, while the raw energy (protein) goes toward some future state. Now the assumption seems to be that the universe is an egg in the process of being scrambled, so yes, its present order is passing, but what if, rather than multiple universes to cover all the probabilities, we have an infinite universe where various fluctuations, such as the fork and the egg, are constantly coming together and creating new forms out of the same energy? Rather than a narrative unit, going from singularity to fadeout, it is endless cycles of energy going into the future, as order goes into the past.

  17. MedallionOfFerret says:

    “After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it — ‘I refute it thus.'”

    –Boswell, Life of Samuel Johnson

    Hoi polloi physics.

  18. Sean says:

George — I agree with Moshe. Bousso is saying that if a theory predicts that the universe we currently observe is a low-probability event (compared to some other set of universes), then that theory is no good. Makes sense to me, although you have to be careful about how you compare.

    Chemicalscum — Since entropy comes from coarse graining, you wouldn’t just asymptotically approach the maximum, you would actually get there. And then, following Boltzmann, you would occasionally fluctuate to lower-entropy states.

  19. George Musser says:

    Sean (and Moshe), in that case, I’m left wondering what to think about the double-exponentials in your arrow-of-time explanation.

  20. Sean says:

    Mike — I am assuming, as you say, a uniform distribution on the space of microstates compatible with our current macrostate. But I think this is simply what is predicted in a model with an eternal universe cycling through its state space ala Poincare. Furthermore, it’s certainly what we use when we do ordinary future-directed stat mech, so I wouldn’t want to abandon it without good reason.

    For the second point, I suspect you are being a temporal chauvinist. In principle it is just as likely/unlikely for the early/late universe to have one/many different quantum states. (And again, an isolated finite quantum system would generally sample all of the possibilities subject to appropriate constraints.) Unless you honestly want to violate unitarity, which is okay, but you would have to be pretty explicit about how that would work.

  21. Sean says:

    George redux — the point is how low-probability a certain event is compared to some other low-probability event. Any theory, including mine, has the burden of showing that “ordinary” observers (conditionalized over some appropriate set of features) are more likely (“less low-probability”) than isolated “freak” observers. That would be, to put it mildly, work in progress.

  22. jpd says:

    regarding the glass of water analogy,
    comment #6, aren’t you assuming the temperature
    of the glass is above freezing?

    if the temperature was lower than 0,
    upon seeing a half melted ice cube,
    i would expect more ice in the glass at
    times earlier and later than my observation

  23. jpd says:

that’s 0 Celsius of course, sorry about that

  24. Eugene says:

Actually I think Bousso’s “it” refers to the event, not the measure. Because if he meant the measure, then he is wrong.

    If your measure predicts something with very low probability, that does not say that your measure is bad. In this probability business, measures are just constructs that you invent in order to sieve through your theories, and they are not theories themselves. For example, I can construct a measure (and I have!), and according to my measure Theory A (high probability) is more likely to produce a universe like ours than Theory B (low probability). I can’t use the probability that I compute for Theory A to throw out my measure.

  25. Eugene says:

    Uhh, I meant “Theory B” in the last sentence.

    That was a response to Sean, Moshe etc.