Boltzmann’s Universe

CV readers, ahead of the curve as usual, are well aware of the notion of Boltzmann’s Brains — see e.g. here, here, and even the original paper here. Now Dennis Overbye has brought the idea to the hoi polloi by way of the New York Times. It’s a good article, but I wanted to emphasize something Dennis says quite explicitly, though (from experience) I know people tend to jump right past it in their enthusiasm:

Nobody in the field believes that this is the way things really work, however.

The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy. According to this argument, in such a universe you would see every kind of statistical fluctuation, and small fluctuations in entropy would be enormously more frequent than large fluctuations. Our universe is a very large fluctuation (see previous post!) but a single brain would only require a relatively small fluctuation. In the set of all such fluctuations, some brains would be embedded in universes like ours, but an enormously larger number would be all by themselves. This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true. (For arguments along these lines, see papers by Dyson, Kleban, and Susskind, or Albrecht and Sorbo.)
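To see why small fluctuations dominate so completely, recall Boltzmann’s result that the frequency of a downward entropy fluctuation of size ΔS goes like e^(−ΔS/k). Here is a toy comparison; the two entropy dips are purely illustrative, made-up numbers (nobody knows the actual entropy cost of fluctuating a lone brain into existence), but any remotely sensible choices give the same moral:

```python
import math

# Boltzmann: the frequency of a downward entropy fluctuation of size dS
# scales as exp(-dS/k).  Work in units where k = 1, and compare in
# logarithms, since the raw numbers overflow any floating-point type.
dS_brain = 1e69      # assumed, purely illustrative dip for a lone brain
dS_universe = 1e103  # assumed, roughly the entropy of our observable universe

# log10 of (frequency of brain fluctuations / frequency of universe fluctuations)
log10_ratio = (dS_universe - dS_brain) / math.log(10)
print(f"lone-brain fluctuations win by a factor of about 10^({log10_ratio:.2e})")
```

The ratio is a double exponential in disguise: as long as the universe’s entropy dip dwarfs the brain’s, isolated brains utterly swamp whole universes in the count of fluctuations.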

I tend to find this kind of argument fairly persuasive. But the bit about “a typical observer” does raise red flags. In fact, folks like Hartle and Srednicki have explicitly argued that the assumption of our own “typicality” is completely unwarranted. Imagine, they say, two theories of life in the universe, which are basically indistinguishable, except that in one theory there is no life on Jupiter and in the other theory the Jovian atmosphere is inhabited by six trillion intelligent floating Saganite organisms.

In the second theory, a “typical” intelligent observer in the Solar System is a Jovian, not a human. But I’m a human. Have we therefore ruled out this theory? Pretty clearly not. Hartle and Srednicki conclude that it’s incorrect to imagine that we are necessarily typical; we are who we observe ourselves to be, and any theory of the universe that is compatible with observers like ourselves is just as good as any other such theory.

This is an interesting perspective, and the argument is ongoing. But it’s important to recognize that there is a much stronger argument against the idea that Boltzmann’s Brains were originally invented to counter — that our universe is just a statistical fluctuation around an equilibrium background. We might call this the “Boltzmann’s Universe” argument.

Here’s how it goes. Forget that we are “typical” or any such thing. Take for granted that we are exactly who we are — in other words, that the macrostate of the universe is exactly what it appears to be, with all the stars and galaxies etc. By the “macrostate of the universe,” we mean everything we can observe about it, but not the precise position and momentum of every atom and photon. Now, you might be tempted to think that you reliably know something about the past history of our local universe — your first kiss, the French Revolution, the formation of the cosmic microwave background, etc. But you don’t really know those things — you reconstruct them from your records and memories right here and now, using some basic rules of thumb and your belief in certain laws of physics.

The point is that, within this hypothetical thermal equilibrium universe from which we are purportedly a fluctuation, there are many fluctuations that reach exactly this macrostate — one with a hundred billion galaxies, a Solar System just like ours, and a person just like you with exactly the memories you have. And in the hugely overwhelming majority of them, all of your memories and reconstructions of the past are false. In almost every fluctuation that creates universes like the ones we see, both the past and the future have a higher entropy than the present — downward fluctuations in entropy are unlikely, and the larger the fluctuation the more unlikely it is, so the vast majority of fluctuations to any particular low-entropy configuration never go lower than that.
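One can watch this happen in a toy model. The sketch below uses the standard Ehrenfest urn model (my choice of illustration, nothing specific to cosmology): run it at equilibrium for a long time, condition on the moments when it sits in a particular low-entropy state, and count how often the entropy was higher both one step before and one step after:

```python
import random

random.seed(0)
N = 20          # balls shared between two urns (Ehrenfest model)
LOW = 17        # a "low-entropy" macrostate: 17 balls in urn A
T = 2_000_000   # length of the equilibrium trajectory

# Each step a uniformly chosen ball switches urns, so the count n in
# urn A moves to n-1 with probability n/N, else to n+1.
n = N // 2
traj = [n]
for _ in range(T):
    n += -1 if random.random() < n / N else 1
    traj.append(n)

# Condition on visits to the low-entropy state and look one step into
# the past and the future.  "Higher entropy" here means closer to N/2.
visits = [t for t in range(1, T) if traj[t] == LOW]
both_higher = sum(1 for t in visits
                  if traj[t - 1] == LOW - 1 and traj[t + 1] == LOW - 1)
frac = both_higher / len(visits)
print(f"{len(visits)} visits; past AND future higher-entropy in {frac:.0%} of them")
```

With these (assumed) parameters the conditioned fraction comes out around three quarters; for the exact chain it is (17/20)² ≈ 0.72, since by detailed balance the past of an equilibrium trajectory is statistically identical to its future, which is precisely the point.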

Therefore, this hypothesis — that our universe, complete with all of our records and memories, is a thermal fluctuation around a thermal equilibrium state — makes a very strong prediction: that our past is nothing like what we reconstruct it to be, but rather that all of our memories and records are simply statistical flukes created by an unlikely conspiracy of random motions. In this view, the photograph you see before you used to be yellow and wrinkled, and before that was just a dispersed collection of dust, before miraculously forming itself out of the chaos.

Note that this scenario makes no assumptions about our typicality — it assumes, to the contrary, that we are exactly who we (presently) perceive ourselves to be, no more and no less. But in this scenario, we have absolutely no right to trust any of our memories or reconstructions of the past; they are all just a mirage. And the assumptions that we make to derive that conclusion are exactly the assumptions we really do make to do conventional statistical mechanics! Boltzmann taught us long ago that it’s possible for heat to flow from cold objects to hot ones, or for cream to spontaneously segregate itself away from a surrounding cup of coffee — it’s just very unlikely. But when we say “unlikely” we have in mind some measure on the space of possibilities. And it’s exactly that assumed measure that would lead us to conclude, in this crazy fluctuation-world, that all of our notions of the past are chimeric.
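The cream-and-coffee version of “unlikely” is just microstate counting under that uniform measure. As a cartoon (non-interacting molecules, two halves of the cup, nothing like real coffee), the probability that all N molecules wander into the left half at once is 2^(−N):

```python
from math import log10

# Uniform measure over microstates: each of N non-interacting molecules
# sits independently in the left or right half of the cup, so the
# macrostate "all N in the left half" has probability 2**-N.
for N in (10, 100, 1000):
    print(f"N = {N:>4}: P(all left) = 10^-{N * log10(2):.1f}")
```

At Avogadro-scale N the exponent is roughly −2 × 10²³, which is the sense in which unmixing is “possible” but never seen.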

Now, just like Boltzmann’s Brain, nobody believes this is true. In fact, you can’t believe it’s true, by any right. All of the logic you used to tell that story, and all of your ideas about the laws of physics, depend on your ability to reliably reconstruct the past. This scenario, in other words, is cognitively unstable; useful as a rebuke to the original hypothesis, but not something that can stand on its own.

So what are we to conclude? That our observed universe is not a statistical fluctuation around a thermal equilibrium state. That’s very important to know, but doesn’t pin down the truth. If the universe is eternal, and has a maximum value for its entropy, then it would (almost always) be in thermal equilibrium. Therefore, either it’s not eternal, or there is no state of maximum entropy. I personally believe the latter, but there’s plenty of work to be done before we have any of this pinned down.

This entry was posted in Science, Time.

100 Responses to Boltzmann’s Universe

  1. Pingback: Chrononautic Log 改 » Blog Archive » And who was I talking to about Boltzmann brains?

  2. Pieter Kok says:

    Very interesting!

    Just one pedantic point, though: hoi polloi means “the people”, and hoi is the article. You should therefore write either “to the polloi” or “to hoi polloi”, but not “to the hoi polloi”.

  3. lylebot says:

    This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true.

    I know this isn’t the point of your post, so apologies for nitpicking, but I guess it seems to me that this argument is ignoring a great deal of information that would allow us to conclude that we are not Boltzmann brains despite Boltzmann brains being common. For example, conditioning on the facts that we were born of mothers and fathers that have brains, and that in principle the genes that govern brain formation can be identified, and that we can see the brain developing from fetus to adult, it seems incredibly unlikely that we could be Boltzmann brains.

  4. my one cent says:

    Forgive me if this has been covered in some previous post, and perhaps there is a subtlety in this argument that I am missing, but this sort of picture/analogy doesn’t sit well with me…

Let’s throw out “brains” and “universes” for a minute and just imagine a single star. Say this star formed from an initially homogeneous universe filled with hydrogen gas. Now, that star could have just been formed by a random quantum fluctuation a few seconds ago that arranged all the hydrogen atoms correctly, but that is extremely unlikely. Instead it would be much easier for a quantum fluctuation to create a small overdensity in one location, that became unstable, drew in matter from around it and eventually formed a star. Thus in this case it is not necessarily true that the past of the star is more likely to be a mirage than not.

A similar case could be made for life and brains. Creating a random brain in space is unlikely. Without doing any math, it seems to me that it could easily be more likely that a brain appears by first having a small overdensity in one location, that forms a solar system (okay, maybe a few stars form before to make the other elements), which has a planet that evolves life, etc.

In either case, it seems to me that the past is less likely to be a mirage than not, and that random brains are less likely than real brains. I therefore think this particular way of describing issues with entropy in multiverse/anthropic universes may be apt to create unnecessary confusion. Perhaps better analogies would be ones where the total entropy is easier to quantify and compare, or maybe I’m just easily confused?

  5. Ken says:

    As an observational researcher, sometimes the theory side of the work makes my head spin. I can only assume that is a consequence of the differing ways theorists and observationalists go about solving a puzzle. In the short term, it sure will be nice when LHC goes online to start providing data which can be used as ammunition in these debates!

  6. Sean says:

    lylebot, this is basically the point of the post — if the universe is a fluctuation around thermal equilibrium, then no matter what you condition on concerning our present state (including literally everything we know about it), it is overwhelmingly likely that it is a random fluctuation from a higher-entropy past. Even if we have memories apparently to the contrary!

Think of it this way: consider a half-melted ice cube in a glass of water. We think that it’s much more likely that five minutes ago it was a completely unmelted cube, rather than a homogeneous glass of water out of which the half-melted cube spontaneously arose. However, that’s only because we know we are nowhere near thermal equilibrium. If the glass were a closed system that lasted forever — i.e., much longer than the Poincare recurrence time — then we would much more often find spontaneous half-melted cubes than ones that arose “normally” from (lower-entropy) unmelted ones.

    my one cent, a similar argument applies to your question. It’s utterly unfair, given the hypothesis of a universe in thermal equilibrium, to start with a homogeneous gas — in a theory with gravity, that’s a dramatically low-entropy state!

    All of these arguments are simply the same arguments that usually imply that entropy will increase to the future, except run to the past — which is the wrong thing to do in the usual picture where we assume a low-entropy past boundary condition, but absolutely the right thing to do if the universe is in thermal equilibrium.

  7. Jeff Harvey says:

    “When you break an egg and scramble it you are doing cosmology,” said Sean Carroll, a cosmologist at the California Institute of Technology.

    When I break an egg and scramble it I’m making breakfast. I guess that is
    the difference between cosmologists and particle physicists.

  8. Peter Woit says:

    Ken,

    “it sure will be nice when LHC goes online to start providing data which can be used as ammunition in these debates!”

    No data that comes out of the LHC (or out of anywhere else) will have anything to do with this debate. That’s why most serious scientists see this as, to quote Overbye, “further evidence that cosmologists… have finally lost their minds.”

  9. kelley elkins says:

This is all very left brain… rational, logical and within the confines of time. So, to anyone with half the facility of a right brain, i.e., creativity, emotions, the arts… this is totally without much merit.
We could even say “left brain words are at best an honest lie”… and yet we garble on in hopes of some recognition or approval, which in turn means we’ve said nothing. In the right brain none of this matters, because the right brain can create universes faster than the left brain can de-construct them.
    However, this is all very well written brain salad, complete with entropy and thermal this and that. I recommend “Dynamics of Time and Space” by Tarthang Tulku, 1994, Dharma Publishing. After all, there is nothing outside of ourselves. We are continuously making it all up and validating ourselves and each other as we do it. Until we are willing to go inside and see/feel/know the “creative process” we haven’t much to say.
    And about the time all this is figured out it will be recognized that it has changed or evaporated…it is much like locating the edge of an electron when in fact the electron really isn’t there. Or is it? And if it is there, where did it come from and where did it go? It quickly becomes a left brain chicken and egg..or if you prefer, Schrodinger’s cat.
    Truly, lots of fun and certainly not to be taken seriously.

  10. George Musser says:

    Sean, what do you think of the quote from Bousso to the effect that very low probability events can be discounted altogether?
    George

  11. Chemicalscum says:

    Sean you said:

If the universe is eternal, and has a maximum value for its entropy, then it would (almost always) be in thermal equilibrium.

Sorry for being a dumb chemist, but would it not be the case that if the maximum value for entropy was asymptotically approached at infinity, starting from a low entropy boundary condition, then the universe would never be in thermal equilibrium?

  12. Mike says:

    Sean, you repeat an argument I’ve heard you give before: if we assume the universe is a random fluctuation, then it’s far more likely that the universe just formed, and that all historical evidence suggesting otherwise is coincidental, while it’s far less likely that the universe followed the history it appears to have.

    However, I wonder if something is missing from this argument. First, as I understand it, the entropy counting follows traditional entropy definitions, which count the number of accessible states at a given energy. The larger entropy configuration is more likely because there are a larger number of possible states. But this conclusion is based on the states being considered ‘equivalent’. Yet all universes with the entropy of the present universe are NOT equivalent. For example, the overwhelming majority of these will not conspire to present a false history.

    Furthermore, I don’t think there’s a one-to-one mapping between present states and early universe states. In some sense, information is ‘created’ as the universe evolves. That is, (I think) a fully specified quantum state for the very early universe can result in a large number of fully specified quantum states for later universes. These later universes will have galaxies and planets and maybe intelligent life, but details need not be like the details of our universe; however all will present a sensible history to any observers. What I think this means is, counting states for the early universe somehow “undercounts” since the possible “histories” far exceeds the number of initial states. Add this to the above observation that counting states for the present universe by far “overcounts,” since most of these states don’t conspire to present a sensible history, and I think the comparison between the likelihood of these possibilities is not as straightforward as it at first seems.

  13. Neil B. says:

    Just curious, umm, how does anyone “get off the ground” what’s supposed to be existing anyway (and that’s not even a clear strictly logical concept) whether it’s “just this” or the “multiverse”? I mean, the relative population of various universes, what the laws about the chance of laws are etc, what in the world, so to speak, are you going on? I know, that’s not strictly the problem as presented, but I gather that does affect how one is going to start thinking about it. (i.e., I assume the argument isn’t just simply about what to expect about statistical fluctuation given the universe/laws as is/known, correct me if wrong.)

  14. Pingback: Not Even Wrong » Blog Archive » Have Cosmologists Lost Their Brains?

  15. Moshe says:

    George, Raphael is quoted as saying “anytime your measure predicts that something we see has extremely small probability, you can throw it out”. I think the “it” refers to the probability measure, not the event. For what it’s worth, I am slightly uncomfortable with the methodology expressed in that sentence…There is also the independently interesting issue of what to make of exponentially small probabilities, and whether or not the concept of probabilities makes sense for them.

  16. John Merryman says:

When you scramble an egg, its ordered state goes from present to past, so the arrow of time for the order points to the past, while the raw energy (protein) goes toward some future state. Now the assumption seems to be that the universe is an egg in the process of being scrambled, so yes, its present order is passing, but what if, rather than multiple universes to cover all the probabilities, we have an infinite universe where various fluctuations, such as the fork and the egg, are constantly coming together and creating new forms out of the same energy? Rather than a narrative unit, going from singularity to fadeout, it is endless cycles of energy going into the future, as order goes into the past.

  17. MedallionOfFerret says:

    “After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it — ‘I refute it thus.'”

    –Boswell, Life of Samuel Johnson

    Hoi polloi physics.

  18. Sean says:

George — I agree with Moshe. Bousso is saying that if a theory predicts that the universe we currently observe is a low-probability event (compared to some other set of universes), that theory is no good. Makes sense to me, although you have to be careful about how you compare.

    Chemicalscum — Since entropy comes from coarse graining, you wouldn’t just asymptotically approach the maximum, you would actually get there. And then, following Boltzmann, you would occasionally fluctuate to lower-entropy states.

  19. George Musser says:

    Sean (and Moshe), in that case, I’m left wondering what to think about the double-exponentials in your arrow-of-time explanation.
    George

  20. Sean says:

Mike — I am assuming, as you say, a uniform distribution on the space of microstates compatible with our current macrostate. But I think this is simply what is predicted in a model with an eternal universe cycling through its state space à la Poincare. Furthermore, it’s certainly what we use when we do ordinary future-directed stat mech, so I wouldn’t want to abandon it without good reason.

    For the second point, I suspect you are being a temporal chauvinist. In principle it is just as likely/unlikely for the early/late universe to have one/many different quantum states. (And again, an isolated finite quantum system would generally sample all of the possibilities subject to appropriate constraints.) Unless you honestly want to violate unitarity, which is okay, but you would have to be pretty explicit about how that would work.

  21. Sean says:

    George redux — the point is how low-probability a certain event is compared to some other low-probability event. Any theory, including mine, has the burden of showing that “ordinary” observers (conditionalized over some appropriate set of features) are more likely (“less low-probability”) than isolated “freak” observers. That would be, to put it mildly, work in progress.

  22. jpd says:

    hi,
    regarding the glass of water analogy,
    comment #6, aren’t you assuming the temperature
    of the glass is above freezing?

    if the temperature was lower than 0,
    upon seeing a half melted ice cube,
    i would expect more ice in the glass at
    times earlier and later than my observation

  23. jpd says:

    thats 0 celsius of course, sorry about that

  24. Eugene says:

Actually I think Bousso’s “it” refers to the event, not the measure. Because if he meant the measure, then he is wrong.

    If your measure predicts something with very low probability, that does not say that your measure is bad. In this probability business, measures are just constructs that you invent in order to sieve through your theories, and they are not theories themselves. For example, I can construct a measure (and I have!), and according to my measure Theory A (high probability) is more likely to produce a universe like ours than Theory B (low probability). I can’t use the probability that I compute for Theory A to throw out my measure.

  25. Eugene says:

    Uhh, I meant “Theory B” in the last sentence.

    That was a response to Sean, Moshe etc.

  26. Moshe says:

Eugene, this is precisely what makes me uncomfortable about Raphael’s sentence, but as a matter of simple grammar (and looking at the context) the “it” in that sentence definitely refers to the measure: if your favorite measure has produced nonsensical results, you are instructed to use a different measure.

  27. Moshe says:

    BTW, the topic that George raised is really interesting, I vaguely remember reading about various issues that arise if you take the concept of probability seriously for arbitrarily low probabilities (or equivalently for arbitrarily large ensembles). It wouldn’t surprise me that concepts that rely on ensembles being strictly infinite (as opposed to “practically infinite”, namely finite but really large compared to anything else in the problem) can lead to paradoxical results.

  28. Raphael Bousso says:

Just to clarify: By “it”, I did mean the measure. Ultimately, I would argue that the measure is an integral part of the theory, but for now it is useful to think of them separately. Then Eugene is right: A better statement would be to say that if my theory together with my measure predicts that something we do observe has probability very close to 0, then either my theory or my measure is probably wrong. This is called ruling out a theory (or a measure), and it’s no different from what we always do in science, except that we include the measure in the set of theoretical assumptions that could be wrong. (We say this more clearly in our recent paper, so I will not repeat it here.)

    It is not uncommon, however, to find situations in which a prediction depends extremely sensitively on the choice of measure, but is robust over a wide class of theories (the youngness paradox and the Boltzmann brain problem are examples). Such situations can be useful as we try to weed out measures and find the right one. I suspect that most of us feel that we know far less about the correct choice of measure than about plausible dynamical theories. When a chain of arguments leads to false predictions, we should first question the weakest link in the chain, which in my judgement is probably the measure. This is the context of the sentence that was quoted in Overbye’s article.

  29. san says:

Is the probability of Boltzmann brains quantified? Or do you mean probabilities in the logical sense? If we assume that time is infinite and that configurations exist that create universes and brains, then eventually all universes and brains will exist. But just because time is infinite, and configurations that can give rise to universes and brains exist now, doesn’t mean they will keep existing. These configurations could disappear forever.

  30. Low Math, Meekly Interacting says:

    I’m wondering if the practice of contemplating an infinity of something doesn’t scare people off nearly as well as it should.

  31. Cynthia says:

    Dennis Overbye writes,

    “A different, but perhaps related, form of antigravity, glibly dubbed dark energy, seems to be running the universe now, and that is the culprit responsible for the Boltzmann brains.”

Then he goes on to write,

    “With no more acceleration [or cosmological constant] there would be no horizon with its snap, crack and pop, and thus no material for fluctuations and Boltzmann brains.”

Correct me if I’m wrong, but both of these passages seem to be saying that Dark Energy is the Brains behind Boltzmann brains…

  32. jeff says:

And of course, these Boltzmann brains assume that the functionalist view of mind is valid. Last time I checked, we still weren’t sure exactly what consciousness is.

  33. Claire says:

    Could be Boltzmann’s Branes not Brains.

    Sean said, “if the universe is a fluctuation around thermal equilibrium, then no matter what you condition on concerning our present state (including literally everything we know about it), it is overwhelmingly likely that it is a random fluctuation from a higher-entropy past.”

    What if the higher-entropy past of 2 or more trillion universes, say that they are both/all achieved by fluctuation around each and their own thermal equilibrium, are

    1) a closed past and
    2) (could be at the same time) are with an open past?

    Here I was referring to the half frozen ice in a glass idea Sean said, just wondered what shape our past universe could be in comparison to others and would this not affect the comment Sean says,

    “Even if we have memories apparently to the contrary?”.

It’s interesting how it fits with the state towards the individual thermal equilibriums. If the processes of any fluctuations around thermal equilibriums are different and do join, does the type of observation used not affect the linearity of both or more probability functions being calculated at that time (like grand vector reduction on a cosmic scale)? And I was wondering here if configurations would introduce brane worlds as well as brain worlds, Boltzmann style. If the probability method we use creates a hypothetical guy in another brane, who happens to find us using their own probability method, would that event not mean we are using the same distribution method but in an odd way?

Mind you, the guys in the other universes may have already discovered a better way to discover us via a better probability method, but we still might not be here according to them, well not yet anyway, and maybe that’s what’s happening to us. Maybe we are figments of their imaginations.

    Sean continues “Even if we have memories apparently to the contrary!”

    Important.

    Claire

  34. Yahoo says:

Just a quick comment on Sean’s final paragraph: “Either it’s not eternal, or there is no state of maximum entropy”. True, but note that even if the universe is not eternal, this doesn’t help in itself. For example, if the universe is created from nothing, it is still most likely to be created at or close to equilibrium. On the other hand, even if there is no state of maximum entropy, one still has to explain why the entropy is so small *now* — it should be in a high-entropy state [evolving eternally to still higher entropy states]. So the eternal/non-eternal question is not relevant in itself, though of course it will be important in constructing an explanation of the current smallness of the entropy.
    [Not suggesting that Sean intended anything else, just commenting.]

  35. Pingback: Not Again « The truth makes me fret.

  36. Paul Valletta says:

The issue of “Why can’t you unscramble an egg” is factored by the fact that the person who actually randomly invented scrambled eggs bears a lot of responsibility for the Universe’s evolution? Eggs, in their natural evolutionary format, would NEVER become scrambled, thus the fact that scrambled eggs can happen is based on an “unnatural” cause/event? Left to a natural path, the egg would not become scrambled; it takes an intervention by a conscious observer… chef… gourmet or an accident-prone basic cook to produce scrambled eggs!

Thus, the person who intentionally, or accidentally, invented scrambled eggs is responsible for some of the most profound and deep questions relating to the Universe and its scrambled egg content. Without scrambled eggs the Universe would not be as “eggsciting” to some as it is to others. Of course it could be that ordinary eggs are really “unscrambled, scrambled eggs”, so the question is really this: Why is it you CAN scramble eggs?

    Q:What came first the chaotic boltzmann chicken, or the low-entropic unscrambled egg?

    Come to think of it..who put them on the cosmic menu!

  37. godma says:

    Okay, so the most likely case at any moment is that our memories are fictitious. I get that.

    But then how can we trust our measurements/calculations of the change in entropy? We can’t because it requires trusting memories.

  38. Eugene says:

    Raphael (and Moshe),

    Thanks for the response and clarification! I do agree with you that ultimately if we want to make sense of this whole business, the measure has to be in some way part of the whole “theory” although I really have very little intuition about how that is going to be. The problem with the separate theory/measure picture is that, as the article implies, we have no objective way of deciding which measure is the correct one.

    Eugene

  39. Eugene says:

I hasten to add that while we don’t have any inkling about which measure is the “correct” one, I think we should be able to write down a set of rules which any measure-builder has to follow (such as gauge-invariance etc). I don’t think we know all these rules yet. My personal feeling is that thinking about these rules is the way to go, and perhaps can lead us to fundamental theories themselves.

  40. Pingback: Meeting:Thursday January 17 « The Dark Blog

  41. Hal S says:

    The Boltzmann Brain paradox is based on a false assumption.

    The proof can be found on Wikipedia under the entry for Gibbs Entropy:

    “An overestimation of entropy will occur if all correlations and more generally if statistical dependence between the state probabilities are ignored.”

The evolution of a human brain is very statistically dependent, and in fact the vacuum fluctuation that is required to produce a human brain with all its memories, etc., will be of the same order of magnitude as, or greater than, one that would produce the contemporary universe.

    We can not escape the fact that all observers in our universe have interacted and co-evolved with the entire universe, therefore the information represented by a brain includes all the relevant information about our universe. For instance, it includes information about nucleosynthesis, evolution, physical constants, physics, math, etc.

    The statistically dependent nature of our normal existence means that for us to be here now, a chain of events must have occurred, and must have occurred in a unique ordered fashion. It is far less probable that a disembodied brain, or a chair, or a watch would pop into existence than the entire current universe exactly as it is, simply because such a random object would have less entropy and represent a higher state of order if it appeared at random without sitting in its appropriate place in a universe.
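    For reference, the standard estimate that this comment is arguing against is Boltzmann’s fluctuation formula, under which a downward entropy fluctuation of size ΔS has probability of order exp(−ΔS/k), so smaller fluctuations are overwhelmingly more probable than larger ones. A minimal numerical sketch of that comparison (the entropy values are order-of-magnitude placeholders, not measured quantities, and the comparison is done in log space because the raw probabilities underflow any float):

    ```python
    import math

    # Illustrative (made-up) entropy drops, in units of k_B:
    delta_S_brain = 1e42      # hypothetical lone-brain fluctuation
    delta_S_universe = 1e103  # hypothetical whole-universe fluctuation

    # Boltzmann's estimate P ~ exp(-dS/k) gives, on a log10 scale,
    # log10[P(brain)/P(universe)] = (dS_universe - dS_brain) / ln(10)
    log10_ratio = (delta_S_universe - delta_S_brain) / math.log(10)
    print(f"log10[P(brain)/P(universe)] ~ {log10_ratio:.3g}")
    ```

    On the standard account the ratio is astronomically large in favor of the lone brain; the claim in this comment is that the premise (that the brain fluctuation has the smaller ΔS) is false.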

    With this logic we can postulate the following:

    Any fluctuation sufficient enough to replicate the highly ordered information in a human mind would have to be large enough to replicate the highly ordered information in the entire connected universe.

    Any fluctuation that would replicate a human mind in the past (where things are more ordered) would have to be greater than one that would capture a human mind in the present.

    With respect to time, we can make the following argument:

    Suppose we envision a universe that jumps from state to state based on usable energy: in order to jump to a new state it has to “use” energy, and that energy is “used” to store information about the present state. We have observed in our everyday lives that information is lost in a storage medium as time progresses; from this observation we may conclude that we lose information about earlier states as we progress from state to state. However, we also observe that not all the information is ever completely lost, and we might likewise conclude that we will never lose all the information of a previous state. From this perspective, a jump into a previous state would represent an increase of available energy, since the stored information would be redundant with the new information provided.

    In other words, we cannot go back in time because we cannot get rid of the information we have retained about the past, so we can never return to a previous state: the previous state cannot match the present state. Any state we moved into that “looked like” a previous state is in fact a completely new “future” state that is a facsimile of the previous state (because the only way we could make the comparison with a previous state is if we retained information about that previous state).

    If we could go back, we would basically be “resetting the clock” and we would have no recollection of the future states, as any recollection would mean that you are not in a past state, but in a future state.

    To sum up the time argument:

    We will never lose all the information of a previous state.

    The only way we could make the comparison with a previous state is if we retained information on that previous state.

    Any movement into the past would require a complete loss of information about “future” states, otherwise you are not moving into a truly past state, only one that is a facsimile state in the future.

    If we could ever completely causally disconnect ourselves from our past states, we would effectively be back at our initial state.

    To conclude again: the Boltzmann Brain paradox is akin to the watchmaker argument; the fundamental flaw is in the initial assumption. Or rather, theologians often naively assume that the information encoded in the structure of a wristwatch is less than that encoded by the Earth, when in fact it is of the same order or larger.

    In the Boltzmann Brain paradox, people assume that the information encoded in a human brain is less than that of the universe, when in fact they are of the same order.

  42. Pingback: Boltzmann Brains « QED

  43. Not Required says:

    I have to confess that I don’t understand the relationship between what Sean said and the Overbye article. Sean says that BBs show that the arrow of time cannot be explained by means of a fluctuation. Fine. But what has that got to do with the ideas of the other people cited in the article? They don’t seem to be talking about the AOT at all! Furthermore, if the BB problem really is a problem, then it arises in plain old de Sitter space and doesn’t have anything to do with the multiverse — if you wait long enough in ordinary de Sitter space you will see strange things too. Would Sean be so kind as to explain the links between all these apparently disparate things?

  44. Peter Ashby says:

    Please forgive a mere biologist for intruding, but I find the ice cube in a glass analogy fallacious for the simple reason that water accreting together to freeze does not form melting-ice-cube structures. So the fact that the cube is melting demonstrates an arrow of time and thus refutes the claim that it formed probabilistically.

    As for the historicity argument wrt scrambled eggs, it gets better: scrambled eggs include milk or milk products, so you need not just oviparous reptile descendants but lactating mammals too.

  45. Hal S says:

    “The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy.”

    The paradox is based on a false assumption and does not represent a reductio ad absurdum, and the universe starting as a random fluctuation is more likely than a brain popping into existence. If the above statement were true, then we should not be surprised if the initial geometry of the universe looked exactly like a Rolex watch.

  46. Sean says:

    Not Required (#45) — the article actually does mention the arrow of time, in the middle of the first page. The first appearance of the Boltzmann’s Brain concept was in the context of asking whether we could explain our arrow of time by imagining that the low-entropy early universe was just a thermal fluctuation from an equilibrium state. But since then, it’s been borrowed by discussions of eternal inflation, where the arrow of time is not the primary concern, but both universes and much smaller structures are formed. One’s goal is typically to come up with a scenario in which most observers are ordinary ones, not freak ones.

  47. Hal S says:

    I want to say if any of my earlier comments offended anyone I apologize.

    I have just read the following;
    http://preposterousuniverse.com/talks/time-colloq-07/time-colloq-07.pdf

    I understand where the issue of BB’s is arising, but I think that some of the discussion is poorly phrased. I stand by my previous statements; I do see the silliness of the Boltzmann cosmology, but BB’s are not the best tool to demonstrate silliness I think :-)

    I agree with the implied ideas of the briefing, but would add the following:

    1) The implication is that inflation in the early universe is in fact the residual expansion of an earlier universe. That inflation (expansion) is halted when sufficient energy has been “pulled” out of the vacuum to create sufficient mass-energy to stabilize the expansion. But how much is enough?

    2) My earlier statements:
    a) We will never lose all the information of a previous state
    b) If we could ever completely causally disconnect ourselves from our past states, we would effectively be back at our initial state;

    suggest that as the universe expands, we will lose most of the information about our previous past, but not all. That residual information should represent the fundamental mathematics of the “multiverse”. At that point our “maximal entropy state” is equivalent to a new “minimal entropy state”.

    3) One of the better definitions of entropy, as I recall, is that it is a measure of how much energy in a CLOSED system, is unavailable for useful work. That means it is a relative measure internal to the system.

    4) If our universe is not truly closed, and depends on the injection of energy at the beginning and the extraction of that energy at the end, then the next question is whether our multiverse is a closed system. If the answer is no, then maximal entropy has no real meaning; if the answer is yes, then there is a maximal entropy.

    5) If the answer is yes, then maximal entropy is achieved when the multiverse reaches its maximal state. A natural maximal state could only be achieved if we define what a maximal-energy universe is and what a minimal-energy universe is.

    6) If we define our max and min energy universes, then it is conceivable to calculate every possible configuration for every possible universe that could exist. Under this guise we could postulate a multiverse that is a static “crystal” of every possible configuration of every universe at every moment of its existence.

    7) If this multiverse “crystal” is static, that implies that it has one microstate, if it has one microstate, then under Boltzmann’s law its entropy would be zero.

    8) So the interesting question is whether the entropy of the multiverse is equal to zero or infinity? If we choose zero we again come back to the question of initial conditions and why the minimal and maximal energy for universes are what they are; if we choose infinity, we still have the question, but we are no longer hampered by a finite structure.

    9) It seems plausible that our initial conditions are the result of a correlation or dependency with the final conditions of a previous universe in a section of the multiverse. However, at some point, we have to assume either the multiverse had a starting point, or it has always existed.

    10) If we assume some starting point zero; where nothing existed, we can show that it is unstable by applying a naive form of the incompleteness theorem. A state can not be both complete and consistent. Or, nothing has no meaning without context, so to define a state of nothing is an attempt to define a complete and consistent state, which is not allowed.

    11) A multiverse that has always existed, that is infinite, is never complete and therefore can be consistent with itself.
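    The “one microstate implies zero entropy” step in point 7 is just Boltzmann’s entropy formula; for reference,

    ```latex
    S = k_B \ln W, \qquad W = 1 \;\Longrightarrow\; S = k_B \ln 1 = 0 .
    ```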

    We should find infinities to be reassuring things and not scary things. We shouldn’t find it surprising that our physical bodies have to exist in stable spaces, and we shouldn’t be surprised that stable spaces exist in infinite ones.

    I apologize again if this is a little over the top, but cosmology appears to be touching on these questions, and my two cents are probably worth as much as anyone else’s.

    Thanks for the forum

  48. Hal S says:

    On Loschmidt’s paradox:

    The problem with Loschmidt’s paradox is that it fails to take into account the uncertainty principle in the assumption that the laws of motion are time reversible.

    In a collision between atoms, let’s imagine that two atoms collide and scatter elastically. In that interaction we initially see the electrons of the respective atoms in some configuration surrounding the nucleus. After the collision, those electrons reconfigure into some new arrangement based on the new momenta of the atoms in question.

    Now in some regards, this new arrangement represents a record of the collision and its outcome. If we lived in a deterministic world, when we reversed time and watched the collision in reverse, we would expect to see the new arrangement of electrons return to the old arrangement after the atoms recollide (this is the flaw in Loschmidt’s paradox).

    In reality there is always a chance that an electron will randomly change its orbit while moving backward in time before we see the atoms recollide, thereby upsetting the record of the original “time forward” collision, and thus producing a new result as we move backward in time.

    At an individual particle level, it would be very difficult to predict when and if a collision between two particles would be affected by the random transition of an electron, but as we notch our scale up and include more and more particles, the net effect of random changes in electron orbits cannot be ignored.

    Thus Boltzmann was largely correct in the conception of molecular chaos. The motion of a single particle undergoing several collisions will eventually become uncorrelated with earlier collisions due to random transitions of its electron configuration.

    However, once again, we shouldn’t expect all of the correlations to ever completely vanish, and thus there will always be some non-zero record of initial conditions and intervening states.

  49. Lawrence B. Crowell says:

    I posted something here, which disappeared. A copy of it exists at:

    http://blogs.discovermagazine.com/cosmicvariance/2008/01/14/arxiv-find-what-is-the-entropy-of-the-universe

    on Jan 17th, 2008 at 3:49 pm. There I describe some aspects of thermodynamics of spacetime.

    The idea that Boltzmann’s “molecular chaos” gives a fluctuation which popped the universe out of some equilibrium state is flawed. The problem is that it first assumes that there can exist a configuration of all possible quantum states, including quantum gravity, that exists in equilibrium. Read my above discussion to see why this is questionable. The only thing comparable to an equilibrium in cosmology is the conformal infinity of the anti-de Sitter (AdS) spacetime. This is a Minkowski spacetime — a complete void of nothingness. This may be the final state of the universe. The initial state is some set of unitarily inequivalent vacua which are unstable, or which are destroyed by the inflaton field. The universe may then be an unstable void connected to a stable void, with a path integral over all field configurations in between.

    I tend to reject the multiverse concept. To my mind it suffers from the same problem as the fluctuation view of things discussed here. In other words, all possible fluctuations in some “dead stew” give rise to all types of configurations, where one of those is our universe. This is similar to the landscape concept, and frankly this explains nothing. It is a problem with string theory that, with its 10^{500} possible vacua, it explains “everything,” which in a curious sense means it explains nothing. Saying that all possible worlds exist and we are just in one of them is a sleight of hand, whether argued from Boltzmann or from strings. Don’t get me wrong, I am not quite in the Loop Quantum Gravity camp either, and I take both strings and loops as model systems which both might have some validity. Looking into a room through keyholes on different doors gives different perspectives on pieces of the same room. I think frankly that the grand Feynman path integral of the universe contains states over a range of geometric configurations, but that they are selected out by having “classicality,” and their amplitudes are decoherently reduced to zero (epsilon) with inflation. I think this selection is due to the structure of elementary particles, which gives conformal completeness on the AdS to give an explicit structure to particles and gauge fields.

    Lawrence B. Crowell

  50. Hal S says:

    In response to Lawrence B Crowell

    I think the disconnect is that equilibrium only has meaning in a closed finite system. If our universe resides in an infinite open space, then we can abandon notions of equilibrium of that larger space altogether, which is a pretty exciting idea.

    I would agree that it answers nothing if there is no way to communicate information between our universe and that larger space, however I think cosmology is providing hints that our universe can communicate with that larger space in subtle ways, and there is information about that larger space encoded in our universe in way that is accessible.

    Another note regarding Loschmidt’s paradox:

    The general idea is that an atom is, in some way, similar to a computer which has the capacity to process, reject, accept and store information. It also has the shortcomings of a computer’s storage device in that information will become corrupted over time. Reversing the process will also cause corruption, but at any time we are left with the ability to tell that the corruption occurred.

    One way to think about it is to pretend that you have never seen a deck of cards and have no idea what cards belong in the deck. Now suppose I gave you a deck, but before I did I removed one card at random. When I gave you the deck, I told you it was a complete deck of cards. How do you prove I am lying?

    Now suppose I gave you just one card at random and told you it was a complete deck? How do you prove I am lying?

    Now suppose I give you just a random piece of a random card and told you it was the complete deck?

    In the first case you should be able to sort the cards and determine that one of the cards is missing. In the second case, you could probably determine that one card does not represent a complete set; you might not know what that set is, but you wouldn’t be surprised if you found out there were more cards.

    In the third case, again, you could probably determine that you don’t have a complete card, and you might also conclude that if you don’t have a complete card, then you might not have a complete deck, and once again you wouldn’t be surprised if you found out there were more cards.

    In all three cases, though, by carefully analyzing what you have, you can come to the conclusion that it has been corrupted. The question now becomes how to determine the level of corruption. Careful analysis of what you have may not reveal this, and in this example the easiest thing would be to take what you do have and either look for a complete deck or communicate with other people.

    This is where cosmology becomes very important. In what ways can we communicate outside our universe? What evidence exists that this is even possible? Entropy seems to be a good place to start, since we know that it is strongly related to the boundaries of our current universe.

    Go Boltzmann!