Boltzmann’s Universe

CV readers, ahead of the curve as usual, are well aware of the notion of Boltzmann’s Brains — see e.g. here, here, and even the original paper here. Now Dennis Overbye has brought the idea to the hoi polloi by way of the New York Times. It’s a good article, but I wanted to emphasize something Dennis says quite explicitly, but (from experience) I know that people tend to jump right past in their enthusiasm:

Nobody in the field believes that this is the way things really work, however.

The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy. According to this argument, in such a universe you would see every kind of statistical fluctuation, and small fluctuations in entropy would be enormously more frequent than large fluctuations. Our universe is a very large fluctuation (see previous post!) but a single brain would only require a relatively small fluctuation. In the set of all such fluctuations, some brains would be embedded in universes like ours, but an enormously larger number would be all by themselves. This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true. (For arguments along these lines, see papers by Dyson, Kleban, and Susskind, or Albrecht and Sorbo.)
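
One rough way to quantify the “small fluctuations dominate” step, using nothing but Boltzmann’s identification of entropy with the logarithm of the number of microstates (prefactors and the details of coarse-graining are glossed over, and S_brain and S_universe are just schematic labels for the entropies of the two macrostates):

$$
P(\text{macrostate with entropy } S) \;\propto\; e^{S/k_B}
\qquad\Longrightarrow\qquad
\frac{P(\text{lone brain})}{P(\text{whole universe})} \;\sim\; e^{(S_{\rm brain}-S_{\rm universe})/k_B} \;\gg\; 1,
$$

since the lone-brain macrostate sits only slightly below the equilibrium entropy, while the macrostate of our entire low-entropy universe sits enormously far below it.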

I tend to find this kind of argument fairly persuasive. But the bit about “a typical observer” does raise red flags. In fact, folks like Hartle and Srednicki have explicitly argued that the assumption of our own “typicality” is completely unwarranted. Imagine, they say, two theories of life in the universe, which are basically indistinguishable, except that in one theory there is no life on Jupiter and in the other theory the Jovian atmosphere is inhabited by six trillion intelligent floating Saganite organisms.

In the second theory, a “typical” intelligent observer in the Solar System is a Jovian, not a human. But I’m a human. Have we therefore ruled out this theory? Pretty clearly not. Hartle and Srednicki conclude that it’s incorrect to imagine that we are necessarily typical; we are who we observe ourselves to be, and any theory of the universe that is compatible with observers like ourselves is just as good as any other such theory.

This is an interesting perspective, and the argument is ongoing. But it’s important to recognize that there is a much stronger argument against the idea that Boltzmann’s Brains were originally invented to counter — that our universe is just a statistical fluctuation around an equilibrium background. We might call this the “Boltzmann’s Universe” argument.

Here’s how it goes. Forget that we are “typical” or any such thing. Take for granted that we are exactly who we are — in other words, that the macrostate of the universe is exactly what it appears to be, with all the stars and galaxies etc. By the “macrostate of the universe,” we mean everything we can observe about it, but not the precise position and momentum of every atom and photon. Now, you might be tempted to think that you reliably know something about the past history of our local universe — your first kiss, the French Revolution, the formation of the cosmic microwave background, etc. But you don’t really know those things — you reconstruct them from your records and memories right here and now, using some basic rules of thumb and your belief in certain laws of physics.

The point is that, within this hypothetical thermal equilibrium universe from which we are purportedly a fluctuation, there are many fluctuations that reach exactly this macrostate — one with a hundred billion galaxies, a Solar System just like ours, and a person just like you with exactly the memories you have. And in the hugely overwhelming majority of them, all of your memories and reconstructions of the past are false. In almost every fluctuation that creates universes like the ones we see, both the past and the future have a higher entropy than the present — downward fluctuations in entropy are unlikely, and the larger the fluctuation the more unlikely it is, so the vast majority of fluctuations to any particular low-entropy configuration never go lower than that.
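
Here is a minimal numerical sketch of that statistical logic, using not cosmology but a toy equilibrium system (the Ehrenfest two-urn model); the number of balls, the target macrostate, and the lag are all arbitrary illustrative choices. Conditioned on the chain being found in a low-entropy macrostate, its “past” and its “future” a short while away overwhelmingly tend to have higher entropy:

```python
import numpy as np
from math import lgamma

def log_multiplicity(n, N):
    """Boltzmann entropy (log number of microstates) of the macrostate
    'n of the N balls are in urn A': S = log C(N, n)."""
    return lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)

rng = np.random.default_rng(0)
N = 40             # total number of balls in the toy system
steps = 2_000_000  # length of the equilibrium trajectory
lag = 20           # how many steps into the "past"/"future" we look
target = 30        # a low-entropy macrostate, well away from N/2

# Ehrenfest urn model: at each step one randomly chosen ball hops to the
# other urn.  Started at n = N/2, the chain simply samples its stationary
# (binomial) distribution -- a stand-in for a system in thermal equilibrium.
n = N // 2
traj = np.empty(steps, dtype=np.int64)
for t in range(steps):
    traj[t] = n
    n += -1 if rng.random() < n / N else 1

S = np.array([log_multiplicity(k, N) for k in range(N + 1)])
hits = np.flatnonzero(traj[lag:-lag] == target) + lag  # visits to the target macrostate

past_higher = np.mean(S[traj[hits - lag]] > S[target])
future_higher = np.mean(S[traj[hits + lag]] > S[target])
print(f"visits to the low-entropy macrostate: {len(hits)}")
print(f"fraction with higher-entropy past  ({lag} steps back):  {past_higher:.2f}")
print(f"fraction with higher-entropy future ({lag} steps ahead): {future_higher:.2f}")
```

Pushing the target macrostate further from equilibrium, or looking further into the past, only sharpens the effect.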

Therefore, this hypothesis — that our universe, complete with all of our records and memories, is a thermal fluctuation around a thermal equilibrium state — makes a very strong prediction: that our past is nothing like what we reconstruct it to be, but rather that all of our memories and records are simply statistical flukes created by an unlikely conspiracy of random motions. In this view, the photograph you see before you used to be yellow and wrinkled, and before that was just a dispersed collection of dust, before miraculously forming itself out of the chaos.

Note that this scenario makes no assumptions about our typicality — it assumes, to the contrary, that we are exactly who we (presently) perceive ourselves to be, no more and no less. But in this scenario, we have absolutely no right to trust any of our memories or reconstructions of the past; they are all just a mirage. And the assumptions that we make to derive that conclusion are exactly the assumptions we really do make to do conventional statistical mechanics! Boltzmann taught us long ago that it’s possible for heat to flow from cold objects to hot ones, or for cream to spontaneously segregate itself away from a surrounding cup of coffee — it’s just very unlikely. But when we say “unlikely” we have in mind some measure on the space of possibilities. And it’s exactly that assumed measure that would lead us to conclude, in this crazy fluctuation-world, that all of our notions of the past are chimeric.

Now, just as with Boltzmann’s Brain, nobody believes this is true. In fact, you have no right to believe it’s true. All of the logic you used to tell that story, and all of your ideas about the laws of physics, depend on your ability to reliably reconstruct the past. This scenario, in other words, is cognitively unstable; useful as a rebuke to the original hypothesis, but not something that can stand on its own.

So what are we to conclude? That our observed universe is not a statistical fluctuation around a thermal equilibrium state. That’s very important to know, but doesn’t pin down the truth. If the universe is eternal, and has a maximum value for its entropy, then it would (almost always) be in thermal equilibrium. Therefore, either it’s not eternal, or there is no state of maximum entropy. I personally believe the latter, but there’s plenty of work to be done before we have any of this pinned down.

This entry was posted in Science, Time.

100 Responses to Boltzmann’s Universe

  1. Pingback: Chrononautic Log 改 » Blog Archive » And who was I talking to about Boltzmann brains?

  2. Pieter Kok says:

    Very interesting!

    Just one pedantic point, though: hoi polloi means “the people”, and hoi is the article. You should therefore write either “to the polloi” or “to hoi polloi”, but not “to the hoi polloi”.

  3. lylebot says:

    This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true.

    I know this isn’t the point of your post, so apologies for nitpicking, but I guess it seems to me that this argument is ignoring a great deal of information that would allow us to conclude that we are not Boltzmann brains despite Boltzmann brains being common. For example, conditioning on the facts that we were born of mothers and fathers that have brains, and that in principle the genes that govern brain formation can be identified, and that we can see the brain developing from fetus to adult, it seems incredibly unlikely that we could be Boltzmann brains.

  4. my one cent says:

    Forgive me if this has been covered in some previous post, and perhaps there is a subtlety in this argument that I am missing, but this sort of picture/analogy doesn’t sit well with me…

    Let’s throw out “brains” and “universes” for a minute and just imagine a single star. Say this star formed from an initially homogeneous universe filled with hydrogen gas. Now, that star could have just been formed by a random quantum fluctuation a few seconds ago that arranged all the hydrogen atoms correctly, but that is extremely unlikely. Instead it would be much easier for a quantum fluctuation to create a small overdensity in one location that became unstable, drew in matter from around it and eventually formed a star. Thus in this case it is not necessarily true that the past of the star is more likely to be a mirage than not.

    A similar case could be made for life and brains. Creating a random brain in space is unlikely. Without doing any math, it seems to me that it could easily be more likely that a brain appears by first having a small overdensity in one location, that forms a solar system (okay, maybe a few stars form before to make the other elements), which has a planet that evolves life, etc.

    In either case, it seems to me that the past is less likely to be a mirage than not, and that random brains are less likely than real brains. I therefore think this particular way of describing issues with entropy in multiverse/anthropic universes may be apt to create unnecessary confusion. Perhaps better analogies would be ones where the total entropy is easier to quantify and compare, or maybe I’m just easily confused?

  5. Ken says:

    As an observational researcher, sometimes the theory side of the work makes my head spin. I can only assume that is a consequence of the differing ways theorists and observationalists go about solving a puzzle. In the short term, it sure will be nice when LHC goes online to start providing data which can be used as ammunition in these debates!

  6. Sean says:

    lylebot, this is basically the point of the post — if the universe is a fluctuation around thermal equilibrium, then no matter what you condition on concerning our present state (including literally everything we know about it), it is overwhelmingly likely that it is a random fluctuation from a higher-entropy past. Even if we have memories apparently to the contrary!

    Think of it this way: consider a half-melted ice cube in a glass of water. We think that it’s much more likely that five minutes ago it was a completely unmelted cube, rather than a homogeneous glass of water out of which the half-melted cube spontaneously arose. However, that’s only because we know we are nowhere near thermal equilibrium. If the glass were a closed system that lasted forever — i.e., much longer than the Poincare recurrence time — then we would much more often find spontaneous half-melted cubes than ones that arose “normally” from (lower-entropy) unmelted ones.

    my one cent, a similar argument applies to your question. It’s utterly unfair, given the hypothesis of a universe in thermal equilibrium, to start with a homogeneous gas — in a theory with gravity, that’s a dramatically low-entropy state!

    All of these arguments are simply the same arguments that usually imply that entropy will increase to the future, except run to the past — which is the wrong thing to do in the usual picture where we assume a low-entropy past boundary condition, but absolutely the right thing to do if the universe is in thermal equilibrium.

  7. Jeff Harvey says:

    “When you break an egg and scramble it you are doing cosmology,” said Sean Carroll, a cosmologist at the California Institute of Technology.

    When I break an egg and scramble it I’m making breakfast. I guess that is
    the difference between cosmologists and particle physicists.

  8. Peter Woit says:

    Ken,

    “it sure will be nice when LHC goes online to start providing data which can be used as ammunition in these debates!”

    No data that comes out of the LHC (or out of anywhere else) will have anything to do with this debate. That’s why most serious scientists see this as, to quote Overbye, “further evidence that cosmologists… have finally lost their minds.”

  9. kelley elkins says:

    This is all very left brain… rational , logical and within the confines of time. So, to anyone with half the facility of a right brain, ie, creativity, emotions, the arts, …this is totally without much merit.
    We could even say ” left brain words are at best an honest lie”… and yet we garble on in hopes for some recognition or approval. which in turn means we’ve said nothing. In the right brain none of this matters, because the right brain can create universes faster than the left brain can de-construct them.
    However, this is all very well written brain salad, complete with entropy and thermal this and that. I recommend “Dynamics of Time and Space” by Tarthang Tulku, 1994, Dharma Publishing. After all, there is nothing outside of ourselves. We are continuously making it all up and validating ourselves and each other as we do it. Until we are willing to go inside and see/feel/know the “creative process” we haven’t much to say.
    And about the time all this is figured out it will be recognized that it has changed or evaporated…it is much like locating the edge of an electron when in fact the electron really isn’t there. Or is it? And if it is there, where did it come from and where did it go? It quickly becomes a left brain chicken and egg..or if you prefer, Schrodinger’s cat.
    Truly, lots of fun and certainly not to be taken seriously.

  10. George Musser says:

    Sean, what do you think of the quote from Bousso to the effect that very low probability events can be discounted altogether?
    George

  11. Chemicalscum says:

    Sean you said:

    If the universe is eternal, and has a maximum value for its entropy, then it would (almost always) be in thermal equilibrium.

    Sorry for being a dumb chemist, but would it not be the case that if the maximum value for entropy was asymptotically approached at infinity starting from a low-entropy boundary condition then the universe would never be in thermal equilibrium?

  12. Mike says:

    Sean, you repeat an argument I’ve heard you give before: if we assume the universe is a random fluctuation, then it’s far more likely that the universe just formed, and that all historical evidence suggesting otherwise is coincidental, while it’s far less likely that the universe followed the history it appears to have.

    However, I wonder if something is missing from this argument. First, as I understand it, the entropy counting follows traditional entropy definitions, which count the number of accessible states at a given energy. The larger entropy configuration is more likely because there are a larger number of possible states. But this conclusion is based on the states being considered ‘equivalent’. Yet all universes with the entropy of the present universe are NOT equivalent. For example, the overwhelming majority of these will not conspire to present a false history.

    Furthermore, I don’t think there’s a one-to-one mapping between present states and early universe states. In some sense, information is ‘created’ as the universe evolves. That is, (I think) a fully specified quantum state for the very early universe can result in a large number of fully specified quantum states for later universes. These later universes will have galaxies and planets and maybe intelligent life, but details need not be like the details of our universe; however all will present a sensible history to any observers. What I think this means is, counting states for the early universe somehow “undercounts” since the possible “histories” far exceeds the number of initial states. Add this to the above observation that counting states for the present universe by far “overcounts,” since most of these states don’t conspire to present a sensible history, and I think the comparison between the likelihood of these possibilities is not as straightforward as it at first seems.

  13. Neil B. says:

    Just curious, umm, how does anyone “get off the ground” what’s supposed to be existing anyway (and that’s not even a clear strictly logical concept) whether it’s “just this” or the “multiverse”? I mean, the relative population of various universes, what the laws about the chance of laws are etc, what in the world, so to speak, are you going on? I know, that’s not strictly the problem as presented, but I gather that does affect how one is going to start thinking about it. (i.e., I assume the argument isn’t just simply about what to expect about statistical fluctuation given the universe/laws as is/known, correct me if wrong.)

  14. Pingback: Not Even Wrong » Blog Archive » Have Cosmologists Lost Their Brains?

  15. Moshe says:

    George, Raphael is quoted as saying “anytime your measure predicts that something we see has extremely small probability, you can throw it out”. I think the “it” refers to the probability measure, not the event. For what it’s worth, I am slightly uncomfortable with the methodology expressed in that sentence…There is also the independently interesting issue of what to make of exponentially small probabilities, and whether or not the concept of probabilities makes sense for them.

  16. John Merryman says:

    When you scramble an egg, its ordered state goes from present to past, so the arrow of time for the order points to the past, while the raw energy (protein) goes toward some future state. Now the assumption seems to be that the universe is an egg in the process of being scrambled, so yes, its present order is passing, but what if, rather than multiple universes to cover all the probabilities, we have an infinite universe where various fluctuations, such as the fork and the egg, are constantly coming together and creating new forms out of the same energy? Rather than a narrative unit, going from singularity to fadeout, it is endless cycles of energy going into the future, as order goes into the past.

  17. MedallionOfFerret says:

    “After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it — ‘I refute it thus.’”

    –Boswell, Life of Samuel Johnson

    Hoi polloi physics.

  18. Sean says:

    George — I agree with Moshe. Bousso is saying that if a theory predicts that the universe we currently observe is a low-probability event (compared to some other set of universes), then that theory is no good. Makes sense to me, although you have to be careful about how you compare.

    Chemicalscum — Since entropy comes from coarse graining, you wouldn’t just asymptotically approach the maximum, you would actually get there. And then, following Boltzmann, you would occasionally fluctuate to lower-entropy states.

  19. George Musser says:

    Sean (and Moshe), in that case, I’m left wondering what to think about the double-exponentials in your arrow-of-time explanation.
    George

  20. Sean says:

    Mike — I am assuming, as you say, a uniform distribution on the space of microstates compatible with our current macrostate. But I think this is simply what is predicted in a model with an eternal universe cycling through its state space ala Poincare. Furthermore, it’s certainly what we use when we do ordinary future-directed stat mech, so I wouldn’t want to abandon it without good reason.

    For the second point, I suspect you are being a temporal chauvinist. In principle it is just as likely/unlikely for the early/late universe to have one/many different quantum states. (And again, an isolated finite quantum system would generally sample all of the possibilities subject to appropriate constraints.) Unless you honestly want to violate unitarity, which is okay, but you would have to be pretty explicit about how that would work.
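
    Schematically, the assumed measure is just the microcanonical (Liouville) one restricted to the current macrostate; a minimal statement of it, with $\Gamma_M$ denoting the set of microstates compatible with the macrostate $M$, is

    $$ \rho(x) \;=\; \begin{cases} 1/|\Gamma_M| & x \in \Gamma_M \\ 0 & \text{otherwise,} \end{cases} $$

    with retrodictions obtained by evolving this distribution backward in time in exactly the same way that predictions are obtained by evolving it forward.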

  21. Sean says:

    George redux — the point is how low-probability a certain event is compared to some other low-probability event. Any theory, including mine, has the burden of showing that “ordinary” observers (conditionalized over some appropriate set of features) are more likely (“less low-probability”) than isolated “freak” observers. That would be, to put it mildly, work in progress.

  22. jpd says:

    Hi, regarding the glass of water analogy in comment #6: aren’t you assuming the temperature of the glass is above freezing?

    If the temperature were lower than 0, then upon seeing a half-melted ice cube, I would expect more ice in the glass at times earlier and later than my observation.

  23. jpd says:

    That’s 0 Celsius of course, sorry about that.

  24. Eugene says:

    Actually I think Bousso’s “it” refers to the event, not the measure. Because if he meant the measure, then he is wrong.

    If your measure predicts something with very low probability, that does not say that your measure is bad. In this probability business, measures are just constructs that you invent in order to sieve through your theories, and they are not theories themselves. For example, I can construct a measure (and I have!), and according to my measure Theory A (high probability) is more likely to produce a universe like ours than Theory B (low probability). I can’t use the probability that I compute for Theory A to throw out my measure.

  25. Eugene says:

    Uhh, I meant “Theory B” in the last sentence.

    That was a response to Sean, Moshe etc.

  26. Moshe says:

    Eugene, this is precisely what makes me uncomfortable about Raphael’s sentence, but as a matter of simple grammar (and looking at the context) the “it” in that sentence definitely refers to the measure: if your favorite measure has produced nonsensical results, you are instructed to use a different measure.

  27. Moshe says:

    BTW, the topic that George raised is really interesting. I vaguely remember reading about various issues that arise if you take the concept of probability seriously for arbitrarily low probabilities (or equivalently for arbitrarily large ensembles). It wouldn’t surprise me if concepts that rely on ensembles being strictly infinite (as opposed to “practically infinite”, namely finite but really large compared to anything else in the problem) can lead to paradoxical results.

  28. Raphael Bousso says:

    Just to clarify: By “it”, I did mean the measure. Ultimately, I would argue that the measure is an integral part of the theory, but for now it is useful to think of them separately. Then Eugene is right: A better statement would be to say that if my theory together with my measure predicts that something we do observe has probability very close to 0, then either my theory or my measure is probably wrong. This is called ruling out a theory (or a measure), and it’s no different from what we always do in science, except that we include the measure in the set of theoretical assumptions that could be wrong. (We say this more clearly in our recent paper, so I will not repeat it here.)

    It is not uncommon, however, to find situations in which a prediction depends extremely sensitively on the choice of measure, but is robust over a wide class of theories (the youngness paradox and the Boltzmann brain problem are examples). Such situations can be useful as we try to weed out measures and find the right one. I suspect that most of us feel that we know far less about the correct choice of measure than about plausible dynamical theories. When a chain of arguments leads to false predictions, we should first question the weakest link in the chain, which in my judgement is probably the measure. This is the context of the sentence that was quoted in Overbye’s article.

  29. san says:

    Is the probability of Boltzmann brains quantified? Or do you mean probabilities in the logical sense, using the assumptions that time is infinite and that configurations exist that create universes and brains, so that eventually all such universes and brains will exist? But just because time is infinite, and configurations that can give rise to universes and brains exist now, doesn’t mean they will keep existing. These configurations could disappear forever.

  30. Low Math, Meekly Interacting says:

    I’m wondering if the practice of contemplating an infinity of something doesn’t scare people off nearly as well as it should.

  31. Cynthia says:

    Dennis Overbye writes,

    “A different, but perhaps related, form of antigravity, glibly dubbed dark energy, seems to be running the universe now, and that is the culprit responsible for the Boltzmann brains.”

    Then he goes on to write,

    “With no more acceleration [or cosmological constant] there would be no horizon with its snap, crack and pop, and thus no material for fluctuations and Boltzmann brains.”

    Correct me if I’m wrong, but both of these passages seem to be saying that Dark Energy is the Brains behind Boltzmann brains…

  32. jeff says:

    And of course, these Boltzmann brains assume that the functionalist view of mind is valid. Last time I checked, we still weren’t sure exactly what consciousness is.

  33. Claire says:

    Could be Boltzmann’s Branes not Brains.

    Sean said, “if the universe is a fluctuation around thermal equilibrium, then no matter what you condition on concerning our present state (including literally everything we know about it), it is overwhelmingly likely that it is a random fluctuation from a higher-entropy past.”

    What if the higher-entropy past of 2 or more trillion universes, say that they are both/all achieved by fluctuation around each and their own thermal equilibrium, are

    1) a closed past and
    2) (could be at the same time) are with an open past?

    Here I was referring to the half frozen ice in a glass idea Sean said, just wondered what shape our past universe could be in comparison to others and would this not affect the comment Sean says,

    “Even if we have memories apparently to the contrary?”.

    It’s interesting how it fits with the state towards the individual thermal equilibriums. If the processes of any fluctuations around thermal equilibriums are different and do join, does the type of observation used not affect the linearness of both or more probability functions being calculated at that time (like grand vector reduction on a cosmic scale)? And was wondering here if configurations would introduce branes worlds as well as brain worlds Boltzmann style. If the probability method we use creates a hypothetical guy in another brane, who happens to find us using their own probability method, would that event not mean we are using the same distribution method but in an odd way?

    Mind you, the guys in the other universes may have already discovered a better way to discover us via a better probability method but we still might not be here according to them, well not yet anyway, and maybe that’s what’s happening to us. Maybe we are figments of their imaginations.

    Sean continues “Even if we have memories apparently to the contrary!”

    Important.

    Claire

  34. Yahoo says:

    Just a quick comment on Sean’s final paragraph: “Either it’s not eternal, or there is no state of maximum entropy”. True, but: note that even if the universe is not eternal, this doesn’t help in itself. For example, if the universe is created from nothing, it is still most likely to be created at or close to equilibrium. On the other hand, even if there is no state of maximum entropy, one still has to explain why the entropy is so small *now* — it should be in a high-entropy state [evolving eternally to still higher entropy states]. So the eternal/non-eternal question is not relevant in itself, though of course it will be important in constructing an explanation of the current smallness of the entropy.
    [Not suggesting that Sean intended anything else, just commenting.]

  35. Pingback: Not Again « The truth makes me fret.

  36. Paul Valletta says:

    The issue of “Why can’t you unscramble an egg” is factored by the fact that the person who actually randomly invented scrambled eggs bears a lot of responsibility for the Universe’s evolution? Eggs, in their natural evolutionary format, would NEVER become scrambled, thus the fact scrambled eggs can happen is based on an “unnatural” cause/event? Left to a natural path, the egg would not become scrambled; it takes an intervention by a conscious observer…chef…gourmet or an accident-prone basic cook to produce scrambled eggs!

    Thus, the person who intentionally, or accidentally, invented scrambled eggs is responsible for some of the most profound and deep questions relating to the Universe and its scrambled egg content. Without scrambled eggs the Universe would not be as “eggsciting” to some as it is to others; of course it could be that ordinary eggs are really “unscrambled, scrambled eggs”, so the question is really this: Why is it you CAN scramble eggs?

    Q:What came first the chaotic boltzmann chicken, or the low-entropic unscrambled egg?

    Come to think of it..who put them on the cosmic menu!

  37. godma says:

    Okay, so the most likely case at any moment is that our memories are fictitious. I get that.

    But then how can we trust our measurements/calculations of the change in entropy? We can’t because it requires trusting memories.

  38. Eugene says:

    Raphael (and Moshe),

    Thanks for the response and clarification! I do agree with you that ultimately if we want to make sense of this whole business, the measure has to be in some way part of the whole “theory” although I really have very little intuition about how that is going to be. The problem with the separate theory/measure picture is that, as the article implies, we have no objective way of deciding which measure is the correct one.

    Eugene

  39. Eugene says:

    I hasten to add that while we don’t have any inkling about which measure is the “correct” one, I think we should be able to write down a set of rules which any measure-builder has to follow (such as gauge invariance etc). I don’t think we know all these rules yet. My personal feeling is that thinking about these rules is the way to go, and perhaps can lead us to fundamental theories themselves.

  40. Pingback: Meeting:Thursday January 17 « The Dark Blog

  41. Hal S says:

    The Boltzmann Brain paradox is based on a false assumption.

    The proof can be found on Wikipedia under the entry for Gibbs Entropy:

    “An overestimation of entropy will occur if all correlations and more generally if statistical dependence between the state probabilities are ignored.”

    The evolution of a human brain is very statistically dependent, and in fact the vacuum fluctuation that is required to produce a human brain with all its memories, etc., will be of the same order of magnitude as, or greater than, one that would produce the contemporary universe.

    We cannot escape the fact that all observers in our universe have interacted and co-evolved with the entire universe; therefore the information represented by a brain includes all the relevant information about our universe. For instance, it includes information about nucleosynthesis, evolution, physical constants, physics, math, etc.

    The statistically dependent nature of our normal existence means that for us to be here now, a chain of events must have occurred, and must have occurred in a unique ordered fashion. It is far less probable that a disembodied brain, or a chair, or a watch would pop into existence, than the entire current universe exactly as it is; simply because such a random object would have less entropy and represent a higher state of order if it appeared at random without sitting in its appropriate place in a universe.

    With this logic we can postulate the following:

    Any fluctuation sufficient enough to replicate the highly ordered information in a human mind would have to be large enough to replicate the highly ordered information in the entire connected universe.

    Any fluctuation that would replicate a human mind in the past (where things are more ordered) would have to be greater than one that would capture a human mind in the present.

    With respect to time, we can make the following argument:

    Suppose we envision a universe that jumps from state to state based on usable energy? That in order to jump to a new state I have to “use” energy? Suppose we also say that the energy is “used” to store information of the present state? We have observed in our everyday lives that information is lost in a storage medium as time progresses; from this observation we may conclude that we lose information about earlier states as we progress from state to state. However, we also observe that not all the information is ever completely lost, and we might also conclude we will never lose all the information of a previous state. From this perspective, a jump into a previous state would represent an increase of available energy since stored information would be redundant with the new information provided.

    In other words, we can not go back in time because we can’t get rid of information that we have retained about the past, so we can never go back to a previous state because the previous state can not match the present state. Any state we moved into that “looked like” a previous state, is in fact a completely new “future” state that is a facsimile of the previous state (because the only way we could make the comparison with a previous state is if we retained information on that previous state).

    If we could go back, we would basically be “resetting the clock” and we would have no recollection of the future states, as any recollection would mean that you are not in a past state, but in a future state.

    To sum up the time argument:

    We will never lose all the information of a previous state.

    The only way we could make the comparison with a previous state is if we retained information on that previous state.

    Any movement into the past would require a complete loss of information about “future” states, otherwise you are not moving into a truly past state, only one that is a facsimile state in the future.

    If we could ever completely causally disconnect ourselves from our past states, we would effectively be back at our initial state.

    To conclude again: the Boltzmann Brain paradox is akin to the watchmaker argument; the fundamental flaw is in the initial assumption. Or rather, theologians often naively assume that information encoded in the structure of a wristwatch is less than that encoded by the Earth, when in fact it is of the same order or larger.

    In the Boltzmann Brain paradox, people assume that the information encoded in a human brain is less than that of the universe, when they in fact are of the same order.

  42. Pingback: Boltzmann Brains « QED

  43. Not Required says:

    I have to confess that I don’t understand the relationship between what Sean said and the Overbye article. Sean says that BBs show that the arrow of time cannot be explained by means of a fluctuation. Fine. But what has that got to do with the ideas of the other people cited in the article? They don’t seem to be talking about the AOT at all! Furthermore, if the BB problem really is a problem, then it arises in plain old de Sitter space and doesn’t have anything to do with the multiverse — if you wait long enough in ordinary de Sitter space you will see strange things too. Would Sean be so kind as to explain the links between all these apparently disparate things?

  44. Peter Ashby says:

    Please forgive a mere biologist for intruding, but I find the ice cube in a glass analogy fallacious for the simple reason that water accreting together to freeze does not form melting ice cube structures. So the fact that the cube is melting demonstrates an arrow of time and thus refutes that it has formed probabilistically.

    As for the historicity argument wrt scrambled eggs it gets better, scrambled eggs include milk or milk products so you need not just ovoviparous reptile descendants but lactating mammals too.

  45. Hal S says:

    “The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy.”

    The paradox is based on a false assumption and does not represent a reductio ad absurdum, and the universe starting as a random fluctuation is more likely than a brain popping into existence. If the above statement were true, then we should not be surprised if the initial geometry of the universe looked exactly like a Rolex watch.

  46. Sean says:

    Not Required (#45) — the article actually does mention the arrow of time, in the middle of the first page. The first appearance of the Boltzmann’s Brain concept was in the context of asking whether we could explain our arrow of time by imagining that the low-entropy early universe was just a thermal fluctuation from an equilibrium state. But since then, it’s been borrowed by discussions of eternal inflation, where the arrow of time is not the primary concern, but both universes and much smaller structures are formed. One’s goal is typically to come up with a scenario in which most observers are ordinary ones, not freak ones.

  47. Hal S says:

    I want to say if any of my earlier comments offended anyone I apologize.

    I have just read the following;
    http://preposterousuniverse.com/talks/time-colloq-07/time-colloq-07.pdf

    I understand where the issue of BB’s is arising, but I think that some of the discussion is poorly phrased. I stand by my previous statements; I do see the silliness of the Boltzmann cosmology, but BB’s are not the best tool to demonstrate silliness I think :-)

    I agree with the implied ideas of the briefing, but would add the following:

    1) The implication is that inflation in the early universe is in fact the residual expansion of an earlier universe. That inflation (expansion) is halted when sufficient energy has been “pulled” out of the vacuum to create sufficient mass-energy to stabilize the expansion. But how much is enough?

    2) My earlier statements:
    a) We will never lose all the information of a previous state
    b) If we could ever completely causally disconnect ourselves from our past states, we would effectively be back at our initial state;

    suggest that as the universe expands, we will lose most of the information about our previous past, but not all. That residual information should represent the fundamental mathematics of the “multiverse”. At that point our “maximal entropy state” is equivalent to a new “minimal entropy state”.

    3) One of the better definitions of entropy, as I recall, is that it is a measure of how much energy in a CLOSED system is unavailable for useful work. That means it is a relative measure internal to the system.

    4) If our universe is not truly closed, and depends on the injection of energy at the beginning and the extraction of that energy at the end, then the next question is whether our multiverse is a closed system. If the answer is no, then maximal entropy has no real meaning; if the answer is yes, then there is a maximal entropy.

    5) If the answer is yes, then maximal entropy is achieved when the multiverse reaches its maximal state. A natural maximal state could only be achieved if we define what a maximal energy universe is and what a minimal energy universe is.

    6) If we define our max and min energy universes, then it is conceivable to calculate every possible configuration for every possible universe that could exist. Under this guise we could postulate a multiverse that is a static “crystal” of every possible configuration of every universe at every moment of its existence.

    7) If this multiverse “crystal” is static, that implies that it has one microstate; if it has one microstate, then under Boltzmann’s law its entropy would be zero.

    8) So the interesting question is whether the entropy of the multiverse is equal to zero or infinity? If we choose zero we again come back to the question of initial conditions and why the minimal and maximal energy for universes are what they are; if we choose infinity, we still have the question, but we are no longer hampered by a finite structure.

    9) It seems plausible that our initial conditions are the result of a correlation or dependency with the final conditions of a previous universe in a section of the multiverse. However, at some point, we have to assume either the multiverse had a starting point, or it has always existed.

    10) If we assume some starting point zero; where nothing existed, we can show that it is unstable by applying a naive form of the incompleteness theorem. A state can not be both complete and consistent. Or, nothing has no meaning without context, so to define a state of nothing is an attempt to define a complete and consistent state, which is not allowed.

    11) A multiverse that has always existed, that is infinite, is never complete and therefore can be consistent with itself.

    We should find infinities to be reassuring things and not scary things. We shouldn’t find it surprising that our physical bodies have to exist in stable spaces, and we shouldn’t be surprised that stable spaces exist in infinite ones.

    I apologize again if this is a little over the top, but cosmology appears to be touching on these questions and my two cents are probably worth as much as anyone else’s

    Thanks for the forum

  48. Hal S says:

    On Loschmidt’s paradox:

    The problem with Loschmidt’s paradox is that it fails to take into account the uncertainty principle in the assumption that the laws of motion are time reversible.

    In a collision between atoms, let’s imagine that two atoms collide and scatter elastically. In that interaction we initially see the electrons of the respective atoms in some configuration surrounding the nucleus. After the collision, those electrons reconfigure into some new arrangement based on the new momentums of the atoms in question.

    Now in some regards, this new arrangement represents a record of the collision and its outcome. If we lived in a deterministic world, when we reversed time and watched the collision in reverse, we would expect to see the new arrangement of electrons return to the old arrangement after the atoms recollide (this is the flaw in Loschmidt’s paradox).

    In reality there is always a chance that an electron will randomly change its orbit while moving backward in time before we see the atoms recollide, thereby upsetting the record of the original “time forward” collision, and thus producing a new result as we move backward in time.

    At an individual particle level, it would be very difficult to predict when and if a collision between two particles would be affected by the random transition of an electron, but as we notch our scale up and include more and more particles, the net effect of random changes in electron orbits can not be ignored.

    Thus Boltzmann was largely correct in the conception of molecular chaos. The motion of a single particle undergoing several collisions will eventually become uncorrelated with earlier collisions due to random transitions of its electron configuration.

    However, once again, we shouldn’t expect all of the correlations to ever completely vanish, and thus there will always be some non-zero record of initial conditions and intervening states.

  49. Lawrence B. Crowell says:

    I posted something here, which disappeared. A copy of it exists at:

    http://blogs.discovermagazine.com/cosmicvariance/2008/01/14/arxiv-find-what-is-the-entropy-of-the-universe

    on Jan 17th, 2008 at 3:49 pm. There I describe some aspects of thermodynamics of spacetime.

    The idea that Boltzmann’s “molecular chaos” gives a fluctuation which popped the universe out of some equilibrium state is flawed. The problem is that it first assumes that there can exist a configuration of all possible quantum states, including quantum gravity, that exist in equilibrium. Read my above discussion to see why this is questionable. The only thing comparable to an equilibrium in cosmology is the conformal infinity of the Anti-deSitter AdS spacetime. This is a Minkowski spacetime — a complete void of nothingness. This may be the final state of the universe. The initial state is some set of unitarily inequivalent vacua which are unstable, or which are destroyed by the inflaton field. The universe may then be an unstable void connected to a stable void, with a path integral over all field configurations in between.

    I tend to reject the multiverse concept. To my mind it suffers from the same problem as a fluctuation view of things discussed here. In other words, all possible fluctuations in some “dead stew” give rise to all types of configurations, where one of those is our universe. This is similar to the landscape concept, and frankly this explains nothing. It is a problem with string theory that, with its 10^{500} possible vacua, it explains “everything,” which in a curious sense means it explains nothing. Saying that all possible worlds exist and we are just in one of them is a sleight of hand, whether argued from Boltzmann or from strings. Don’t get me wrong, I am not quite in the Loop Quantum Gravity camp either, and I take both strings and loops as model systems which both might have some validity. Looking into a room through keyholes on different doors gives different perspectives on a piece of the same room. I think frankly that the grand Feynman path integral of the universe contains states over a range of geometric configurations, but that they are selected out from having “classicality,” and their amplitudes are decoherently reduced to zero (epsilon) with inflation. I think this selection is due to the structure of elementary particles which give conformal completeness on the AdS to give an explicit structure to particles and gauge fields.

    Lawrence B. Crowell

  50. Hal S says:

    In response to Lawrence B Crowell

    I think the disconnect is that equilibrium only has meaning in a closed finite system. If our universe resides in an infinite open space, then we can abandon notions of equilibrium of that larger space altogether, which is a pretty exciting idea.

    I would agree that it answers nothing if there is no way to communicate information between our universe and that larger space; however, I think cosmology is providing hints that our universe can communicate with that larger space in subtle ways, and there is information about that larger space encoded in our universe in a way that is accessible.

    Another note regarding Loschmidt’s paradox:

    The general idea is that an atom is, in some way, similar to a computer which has the capacity to process, reject, accept and store information. It also has the shortcomings of a computer’s storage device in that information will become corrupted over time. Reversing the process will also cause corruption, but at any time we are left with the ability to tell that the corruption occurred.

    One way to think about it is to pretend that you have never seen a deck of cards and have no idea what cards belong in the deck. Now suppose I gave you a deck, but before I did I removed one card at random; when I gave you the deck I told you it was a complete deck of cards. How do you prove I am lying?

    Now suppose I gave you just one card at random and told you it was a complete deck? How do you prove I am lying?

    Now suppose I gave you just a random piece of a random card and told you it was the complete deck?

    In the first case you should be able to sort the cards and determine that one of the cards is missing. In the second case, you could probably determine that one card does not represent a complete set; you might not know what that set is, but you wouldn’t be surprised if you found out there were more cards.

    In the third case, again, you could probably determine that you don’t have a complete card, and you might also conclude that if you don’t have a complete card, then you might not have a complete deck, and once again you wouldn’t be surprised if you found out there were more cards.

    In all three cases though, by carefully analyzing what you have, you can come to the conclusion that it has been corrupted. The question now becomes how to determine the level of corruption. Careful analysis of what you have may not reveal this, and in this example, the easiest thing would be to take what you do have and either look for a complete deck, or communicate with other people.

    This is where cosmology becomes very important. In what ways can we communicate outside our universe? What evidence exists that this is even possible? Entropy seems to be a good place to start, since we know that it is strongly related to the boundaries of our current universe.

    Go Boltzmann!

  51. Sho Kuwamoto says:

    Interesting discussion!

    Before this set of articles, I’d never encountered the Boltzmann’s Brain thought experiment, which seems to put the nail in the coffin for theories that postulate that the initial state of the universe can be explained as a statistical fluctuation.

    For those who want another stab at explaining this line of thinking, I wrote a short article here.

    That having been said, I think the argument is not 100% definitive.

    If the hypothesis is that there is a larger “world out there” which is literally a world that follows the laws of physics as we know it, and that this world is in thermal equilibrium and that the universe as we know it is literally a thermal fluctuation of stuff within this gigantic larger world… well… the BB argument wins.

    But the hypothesis, I think, is that none of us knows what the “world out there” looks like, and no one knows exactly the laws under which it operates. As we fumble around trying to guess at what it might be like, could we postulate that there is some larger world of stuff that follows certain laws that are similar to ours, and that there are fluctuations (quantum? statistical?) within this larger world?

    And depending on the laws of this larger world out there, perhaps even small fluctuations could be greatly magnified into large systems with low entropy, such as the state of our universe at the time of the big bang.

    Of course, once you get into this realm of thinking, it’s hard to call it science. More like metaphysics or pseudoscience.

  52. Hal S says:

    I was wondering if anyone has ever attempted to derive an equation based on the following:

    If we accept that the universe is expanding and accelerating, what happens when that expansion impacts the maximal entropy event horizon of a black hole? Suppose we took the position that a black hole represents a seed mass for a new universe and the mass of the new universe is not equal to that black hole mass but proportional or a function of that seed mass?

    In this scenario the expansion of the old universe when it encounters the event horizon causes inflation of a new universe, which can not be stopped by the mass of the black hole, but only after a sufficient amount of new mass and energy is created out of the vacuum.

    Is there an equation that would tell us the mass of the new universe based on the seed mass of the black hole?

    Any comments would be appreciated.

  53. Lawrence B. Crowell says:

    Hal S on Jan 17th, 2008 at 8:54 pm
    In response to Lawrence B Crowell

    I think the disconnect is that equilibrium only has meaning in a closed finite system. If our universe resides in an infinite open space, then we can abandon notions of equilibrium of that larger space altogether, which is a pretty exciting idea.

    ———————

    General relativity is based on the Lorentz group on local regions. The group has 3 ordinary rotations plus 3 boosts. The boosts are hyperbolic instead of elliptical or “circular,” or more precisely, compact. A symmetry which is compact is guaranteed to return on itself, so to speak. For instance a set of rotations about some angle @ and another about some angle @’ are, when both repeated in an infinite series, guaranteed to converge — a Cauchy convergence condition. This is a closure condition. A noncompact group, such as that which underlies relativity, will in many cases fail to converge as such. A hyperbola approaches an asymptote, instead of closing up. The “infinite” convergence points mean a group theoretic closure is not possible. The group theoretic structure of gravitation fails to obey this sort of closure. In a fundamental sense this is why gravitation fails to “close up,” which is a reason it has a strange form of thermodynamics.

    A hydrogen atom might be considered to be a computer; even naively we might think that by putting the electron in a superposition of an infinite number of states as possible by Rydberg’s formula, a hydrogen atom could be an infinite Turing machine or quantum computer. There is one problem called the Bekenstein bound. There is an upper limit on the amount of quantum information you can put in any system with a size or surface area A. As A goes down the amount of information that could be put on the system decreases by a convex function.
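
    The area form of the bound being invoked, with conventional constants restored (schematically), is

    $$ S_{\max} \;=\; \frac{k_B\, c^3}{4\, G\, \hbar}\, A \;=\; \frac{k_B\, A}{4\, \ell_P^{2}}\,, $$

    so the maximal entropy, and hence the information capacity, shrinks along with the bounding area A.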

    I am not a big fan of anthropic principles. I will say I think the weak anthropic principle is useful as a sort of guiding question. A century ago the age of the Earth was known to be in the hundreds of millions of years. Ideas about solar light and energy involved gravitational contraction, which predicted a much shorter life for the sun. So geology, evolution and the fact we emerged from this jostled physics of the day to find a better theory. Along came quantum mechanics, nuclear physics and Hans Bethe who came up with the answer. Similarly the fine tuning issue and anthropic ideas are likely a similar question posed to us. Polchinski and Bousso (2000) worked on how the cosmological constant Λ was due to a large bare term which is corrected by oscillator terms associated with a D7-brane dual to the D4-brane in the “bulk.” The exact Λ came about from a specialized condition or a sort of transversality of p-forms on these branes. Nice, and particularly nice since it raises a big question in the form of an anthropic implication.

    The Boltzmann brain idea has to be taken in light of biology. What really matters on this planet are prokaryotes, or bacteria. All the rest is fluff, and even plants are just photosynthetic machines meant to grab energy for bacteria. Eukaryotic cells (those with nuclei, organelles, etc.) evolved from ancient associations of prokaryotic cells, and all the rest of the life we ordinarily see consists of energy-generating machines that are food for bacteria. This includes us. Prokaryotic communities, which can involve a wide range of species, appear to exist in large webs that extend around the planet. They really run this place, not us. So, to juxtapose Boltzmann’s brain against the great Bard:

    “Life’s but a walking shadow, a poor player that struts and frets his hour upon the stage, and then is heard no more; it is a tale told by an idiot, full of sound and fury, signifying nothing.” — Macbeth, Shakespeare

    The idea that the universe is a set of fluctuations that bring about a brain with various perceptions or internal conscious ideations, but with nothing else “out there,” seems to pose more of a question than to be an idea which should be taken seriously. It is a sort of anthropic realization, the kind of thing that is entertaining over a scotch and cigar.

    Finally, human beings should not take themselves that seriously. We have too much history of that, and it all seems to lead to the same eating of dust. Everyone should ponder what it is that we actually manage to accomplish here, even in our personal lives. You might find that everything you have ever made, bought, borrowed or stolen ends up in the landfill. We humans, at the end of the day, appear to be little more than a terminator species turning everything we can get our hands on into trash.

    Lawrence B. Crowell

  54. Hal S says:

    Lawrence B. Crowell

    “General relativity is based on the Lorentz group acting on local regions. The group has 3 ordinary rotations plus 3 boosts. The boosts are hyperbolic instead of elliptical or “circular,” or more precisely compact. A symmetry which is compact is guaranteed to return on itself, so to speak. For instance, a rotation through some angle @ composed with another through some angle @’, when repeated in an infinite series, is guaranteed to converge — a Cauchy convergence condition. This is a closure condition. A noncompact group, such as the one that underlies relativity, will in many cases fail to converge in this way. A hyperbola approaches an asymptote instead of closing up. The “infinite” convergence points mean a group-theoretic closure is not possible. The group-theoretic structure of gravitation fails to obey this sort of closure. In a fundamental sense this is why gravitation fails to “close up,” which is a reason it has a strange form of thermodynamics.”

    I have a suspicion that we aren’t too far apart in our thinking on this; however, the current understanding is that a massive object cannot reach the speed of light traveling in a straight line. This is absolutely true, and I do not dispute that.

    However, there shouldn’t be any problem for a massive object to reach the speed of light if it follows a continuous closed curve.

    I base this on the following logic:

    As a massive object approaches the speed of light, its effective acceleration in the direction of travel approaches zero. However, there is nothing prohibiting an acceleration in a perpendicular direction.

    An acceleration perpendicular to the direction of travel will cause a change in velocity perpendicular to the original direction of travel.

    In this case, you have two component velocity vectors, both less than the speed of light, that combine to produce a velocity vector equal to the speed of light. Mass along the closed curve has now stabilized at a finite value, depending on the initial mass and velocity of the object when it was moving in a straight line.

    In this regard we can get relativity to close up.

  55. Sean says:

    Guys — this is not the place for discussing your ideas about general relativity. Keep the comments short and on topic. We don’t have time to edit or negotiate, so we will just delete.

  56. Albatross says:

    Although I don’t think that he had Boltzmann’s Brain in mind when he wrote it, Steven Brust’s short novel “To Reign in Hell” postulates a universe created almost exactly under the conditions of Boltzmann’s Brain. The first being to spring into existence and not immediately redissolve into Chaos? Yahweh. Then he reached into the chaos and pulled out Lucifer, and the fun began…

  57. Hal S says:

    Apologies, I didn’t realize I was doing anything wrong. I just wanted to help.

  58. Lawrence B. Crowell says:

    Yahweh (Yod Hey Vov Hey) is an example of what Max Tegmark calls an observer with a “bird’s eye view” of the world. Of course there are theological quibbles here, for most religions regard God as outside the world or universe. The Be’raysheet (Genesis) story does have an interesting component to it, for light is separated from dark, dry land from sea, things that swim and things that fly and so forth. It reflects a cornerstone of Jewish thought of Kodesh or separation. So the face of God was upon the deep, here the waters signifying chaos or void, and He then imposed a dichotomy of distinct categories. Since God is a Tegmarkian bird’s eye observer He can do all of this on a fine grained scale without making a mess, or in other words generating entropy. Then of course there is the question I asked at an early age, “Where did God come from?” Boltzmann’s brain?

    I don’t think the universe is fundamentally thermodynamic. Thermodynamics is what might be called an effective theory. Penrose thinks that quantum state reductions are “objective reductions,” which are fundamental. These destroy quantum information and impose a fundamental time asymmetry on the universe. A fundamental time asymmetry to the universe implies that quantum information is lost. This means it is difficult to attach an endpoint to the cosmological path integral based on solid physics. The universe may well be “void to void,” with the initial point being a set of inequivalent vacua and the final point the AdS conformal infinity, an empty M^4. Everything in between is just a holographic way that these two nothingness voids are connected together. Existence is just nothingness rearranged, or maybe what we see locally as “something” is just a way that nothingness is rearranging itself from an unstable nothingness to a stable nothingness along this illusion we call time.

    But then is that a sort of fluctuation in an equilibrium bath? If you have a universe that globally is an equilibrium bath, there is no time. Time is something measured by a clock, which is a heat engine that requires a free energy source. In a grand world of equilibrium there is no clock, so operationally maybe there is no time. So everything is nothingness, Jean-Paul Sartre is laughing, and if God is a Boltzmann brain then it must go back into the soup of maximal entropy; as Nietzsche declared, “God is dead.”

    Lawrence B. Crowell

  59. go back to 4 says:

    “My one cent” (comment #4) summarized my thoughts on this. Creating a brain (or anything complex) from essentially uniform nothing is much harder (less likely) than allowing more common physical processes to act over long times. Indeed the notion that time+ordinary events produces extraordinary outcomes is the basis of the theory of evolution, our best way of understanding how brains came about.

  60. Hal S says:

    Sean

    “Guys — this is not the place for discussing your ideas about general relativity. Keep the comments short and on topic. We don’t have time to edit or negotiate, so we will just delete.”

    Please don’t delete this last comment, I want to make it and then I’ll let it go.

    Boltzmann’s Brains and other ideas challenge our notion of probability and statistics, things that are intimately related to quantum mechanics.

    Various authors make statements about how we should view the universe as a field of finite-volume points.

    When it comes to the speed of light and mass, what is the difference between jumping from point to point in a straight line and jumping from point to point in a circle?

    The distance traveled moving point to point along line segments approximating a circle is less than the circumference of that circle.

    Under these circumstances, a massive object moving in a circle could appear to be moving at the speed of light, when in reality it isn’t.

    The mass should then be equivalent to the relativistic mass related to the speed of the object moving along the line segments.

    Very respectfully,

    Hal S.

  61. Lawrence B. Crowell says:

    go back to 4 on Jan 18th, 2008 at 5:25 pm
    “My one cent” (comment #4) summarized my thoughts on this.
    ————

    Exactly. This is the old argument by creationists that natural science says that life and humans came about spontaneously from clouds of hydrogen. No, it didn’t happen that way. Life, brains and iPods came about through a long protracted process and not some spontaneous assembly of bits.

    Lawrence B. Crowell

  62. Hal S says:

    Lawrence B. Crowell (#61)

    I agree with your statement, I get confused sometimes about how people interpret the anthropic principle.

    I tend to like the idea of infinities simply because a natural infinity is not an argument for the creationist view, but an argument against it. We should expect conditions in our natural world to be such that we don’t require some intelligent agent in order to get things started.

    Nothingness should be an unattainable state. It would be a state that would be both consistent and complete, which should not be possible. Nothingness should require an agent to keep it stable, which contradicts our concept of nothingness.

  63. Neil B. says:

    Hal, nothingness certainly wouldn’t require an agent to keep stable, after all it would have no process of time and therefore have to be stable since it couldn’t change. I think that for a universe to just be around with certain properties “and not others we can imagine” (as alternative self-consistent choices) is an absurd existential loose end flapping around. You can believe in many universes then, but where does that end? Is it modal realism, does it even include cartoon universes and things much weirder than Boltzmann Brains? Is there anything like “the chance” of the fine structure constant being certain values, etc? What are the laws behind the laws?

  64. Hal S says:

    Neil (#63)

    I am trying to understand Godel’s incompleteness theorem, and I still have a very naive view; with my naive interpretation, it seems that a state of nothingness must be self-consistent, i.e. without external context. It must also be complete, in that if it were incomplete, then the whole state would collapse.

    I just can’t escape the thought that nothingness needs context.

    I guess my current thinking is that we continue to take on a more mathematical view of the universe. In the context of sets, how do we discern sets of things that can exist from sets of things that can’t?

    In a sense, things which we define as not real (cartoons and such) do have an existence on “the surface” of our perceived reality. Is our ability to imagine these things constrained? What is the set of things we cannot imagine?

    To get back on track, “What are the laws behind the laws?” I think is the real question. I find the use of the term “anthropic principle” has been misused enough to really have no strong meaning anymore.

    I think if it is used purely as a statement like “In our natural state we must live on Earth and not on Jupiter, because we evolved here on Earth, and this affects our view of things,” then it’s perfectly okay. But I don’t even know if that is the correct interpretation anymore, and if it is, then there are a lot of bad interpretations out there.

    When I think of a “multiverse” I tend to think along those lines. I think if it does exist, then there is a definite structure to it, and I think that structure should be discernible in our present universe (and if it doesn’t exist that’s fine too; I have no strong preference to live in any particular universe, just as long as it looks a lot like the one I’m in).

    I don’t believe in ghosts or goblins or any of the other fairy-like things people want to exist in a “multiverse”. I don’t believe that you can derive the “laws of morality” from the laws of physics. I do believe that discernible physical laws in our universe should be similar or identical to the ones in the “multiverse”.

    I am fascinated by the fact that square pegs don’t fit in round holes, that we can’t square the circle, and that we live in a universe with straight lines, ellipses and hyperbolas. I think that indicates what kind of “laws behind the laws” might govern things.

  65. Hal S says:

    I think I figured #60 out.

    It seems that the answer is that the object in question would occasionally have to jump to points outside the apparent circular path; this would give it the opportunity to “self correct” the distance traveled along the segments.

    I think that would make the path look like a tube and not a circle…spaghetti anyone?

  66. Lawrence B. Crowell says:

    I think Neil B. indicates a problem with the whole many universe idea. In order to make our world probable you need to drag in all sorts of other worlds in an ensemble or set so that ours is somehow inevitable. I really think we can do better than that.

    Godel’s theorem rests upon Cantor’s diagonalization applied to Godel numbers, showing that no formal system as defined by Church’s lambda calculus or the Russell-Whitehead Principia is able to enumerate all of its Godel numbers. So any axiomatic system that is sufficiently powerful will contain sets which it is unable to enumerate; that is, any such system has non-RE (recursively enumerable) statements, theorems, etc. in it. These include theorems which effectively state their own unprovability as predicates acting on their own Godel numbers. Godel showed this by demonstrating that there exist solutions to Diophantine equations which cannot be computed. Godel and Cohen demonstrated that the continuum hypothesis is independent of Zermelo-Fraenkel set theory, an example of this phenomenon.

    Does this have anything to do with physics? It might, but serious caution is in order. Penrose threw out some ideas about Godel’s theorem and quantum state reductions, quantum gravity and even consciousness. It basically flopped. The underlying physics at the Planck scale might indeed be a chaos of quantum states which are self-referential. For instance, some quantum states have Diophantine representations, and by Godel’s original argument maybe some quantum systems have states which exist but are not dynamically computable, because their quantum information content is self-referential. Maybe? Chaitin argues that axiomatic mathematics, the math we prove, know and use, is the result of self-referential accidents. Maybe by the same line of thought physical law which is understood mathematically is also an accident. Of course we have to ask, “What do we even mean by law?” In some ways these are human constructions.

    So does Godel’s theorem have anything to do with physics or cosmology? Well, if so, it might be at the “end of physics,” at the Planck scale where everything might come to some end. Of course this is all highly speculative.

    Lawrence B. Crowell

  67. Qubit says:

    It’s still Pinocchio.

  68. The idea of macroscopic statistical fluctuations is fascinating, if bizarre. Here’s a smaller-scale version of the Boltzmann’s brain argument that’s applicable to a single room.

    Suppose you have some small room that is completely closed off from the rest of the universe. No energy or matter gets in or out. The walls of the room are built to last a gazillion years (or even 10^10^10^gazillion). Before you close it off, you place into the room all the ingredients for a human being: Carbon, Hydrogen, Oxygen, trace minerals, etc. Now just wait.

    Eventually (we’re talking a long time to wait) the material will assemble itself into a human being. Call him Random. Random will then age just like a normal human being. However, there is a big difference between Random and an ordinary human. With an ordinary human, whatever age that person is, you can bet he was younger in the past. With our macroscopic-fluctuation-produced human, that assumption is not warranted at all. The processes that lead to aging are statistical, driven by entropy, and the underlying dynamics are time-symmetric. So the same argument that would lead us to conclude that Random will look older and more decrepit in 40 years will just as validly lead us to conclude that Random already looked older and more decrepit 40 years ago. In other words, the chances are overwhelming that Random is the youngest he ever was.

    It’s very rare that living humans would appear in the room, but it is much more rare that you would ever see children or babies.

    What this shows is that our notions of “common sense” are intrinsically bound up with the idea that the universe has not been around forever, and that entropy was much lower in the distant past than it is now.
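
    The disparity can be made quantitative with a toy model. The sketch below is purely illustrative (my own numbers, not anything from the comment above): it treats the sealed room as N two-state spins and compares the probability of a mild downward fluctuation in entropy with a much larger one.

        # Toy model: N independent two-state spins, equilibrium = half up / half down.
        # Larger departures from equilibrium (larger entropy dips) are exponentially
        # rarer, which is why "Random appears fully formed" swamps "Random plus a
        # genuinely lower-entropy past".
        import math

        def log_prob(N, n_up):
            """Log-probability that exactly n_up of N fair spins point up."""
            return (math.lgamma(N + 1) - math.lgamma(n_up + 1)
                    - math.lgamma(N - n_up + 1) - N * math.log(2.0))

        N = 10_000
        mild = log_prob(N, N // 2 + 100)    # small fluctuation away from equilibrium
        big = log_prob(N, N // 2 + 1000)    # fluctuation ten times further out

        print(f"log P(mild fluctuation) = {mild:8.1f}")
        print(f"log P(big fluctuation)  = {big:8.1f}")
        print(f"the mild one is about exp({mild - big:.0f}) times more likely")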

  69. Lawrence B. Crowell says:

    Of course these thought experiments indicate that something is wrong with the whole theory. Thermodynamics is what might be called an effective theory. It works well in a proper domain of application and observation. It even works with black holes. But appealing to fluctuation theory to understand how the universe came about, something similar to the Boltzmann brain, appears inoperative. And reducing this down to a fluctuation which gives rise to a brain that has sensations of an existing universe, where these sensations, as well as memories, might be an illusion, is a sort of solipsism.

    The universe is a path integral over a set of configurations, which under a Wick rotation is a partition function in thermodynamics. So there is a connection here. Yet the start of the path integral is some vacuum state, or set of vacua, that are unstable, and the end point is a Minkowski spacetime with maximal entropy and the simplest vacuum configuration. This is the conformal infinity of the anti-de Sitter spacetime. Everything else is how states or quantum information connect up the start and final conformal infinity points.

    Lawrence B. Crowell

  70. seriously says:

    “But appealing to fluctuation theory to understand how the universe came about, something similar to the Boltzmann brain, appears inoperative. Then reducing this down to a fluctuation which gives rise to a brain that has sensations of an existing universe, but where these might be an illusion as well as memories is a sort of solipsism.”

    The Boltzmann’s brains that are solipsists are actually the sane ones. But the overwhelming majority of Boltzmann’s brains will be completely insane. Appropriately, some fluctuations will give rise to entire asylums full of Boltzmann’s brains. Unfortunately, some of these asylums will be populated purely by sane Boltzmann’s brains (except for the staff).

  71. Hal S says:

    Daryl

    please read entries 41 and 48

  72. Hal S,

    Sorry, I don’t see how your arguments address what I wrote.

  73. Lawrence B. Crowell writes: Of course these thought experiments indicate that something is wrong with the whole theory.

    How do they indicate that?

  74. Hal S says:

    Daryl

    A human being will never self-assemble in the scenario you propose. Ever.

  75. Hal S says:

    just a hint

    http://en.wikipedia.org/wiki/Bose_condensate

  76. Hal S.,

    I think everyone agrees that in practice it will never happen (it would require a time period on the order of the Poincare recurrence time, an ungodly large amount of time, much, much longer than the lifetime of the universe). However, such an assembly is not impossible in principle. It doesn’t violate any of the laws of physics. So I don’t know why you are saying it will never happen.

  77. Hal S,

    Why do you think that Bose condensates are relevant here?

  78. Hal S says:

    “No energy or matter gets in or out”

    and you provided no source of heat

  79. Hal S says:

    Of course you could argue that when some of the elements combine chemically they will produce heat; you then have a question of how much, and how big your room is.

    You also have a problem in that you are in an inertial frame of reference. If the temperature does get high enough, all the elements will gravitate into blobs.

    This leaves out the question of the electrical currents that will be caused if any of the atoms become ionized. How do you prevent a capacitance from forming?

  80. Hal S says:

    There will be an evolution to this process, and entropy will increase, but again, as outlined in 48, you can never get rid of all the correlations, and there will be some finite record of this evolution.

  81. Hal S,

    I don’t believe that you are correct. The laws of physics don’t in any way say that there must remain a record of past evolution.

  82. Here’s a more mathematically precise claim about what is possible from fluctuations.

    Let |Psi> and |Phi> be two different states of a quantum system involving many, many particles. Then the prediction of quantum mechanics is that the probability that the system, when put in state |Psi> at time 0, will be found in state |Phi> at a later time t is given by

    P = |A|^2

    where A = <Phi| e^{-iHt} |Psi> is the transition amplitude and H is the Hamiltonian.

    The question is: under what circumstances do we expect the transition amplitude A to be zero for all values of t? Well, certainly A will always be zero if Phi and Psi have different eigenvalues for conserved quantities. But if that’s not the case, that is, if Phi and Psi have the same values for charge, total momentum, total energy, total angular momentum, etc., then I would think that A would be nonzero for most values of t.
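
    A toy numerical check of this claim (illustrative only: the Hamiltonian, the dimension and the two states below are made up, not anything specified in the comment): for a generic Hermitian H, the amplitude A(t) = <Phi| e^{-iHt} |Psi> between two fixed orthogonal states is nonzero for essentially all t > 0.

        # Transition amplitude A(t) = <Phi| exp(-iHt) |Psi> for a random Hermitian H.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 6

        M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        H = (M + M.conj().T) / 2                         # random Hermitian Hamiltonian

        psi = np.zeros(n, dtype=complex); psi[0] = 1.0   # |Psi>
        phi = np.zeros(n, dtype=complex); phi[1] = 1.0   # |Phi>, orthogonal to |Psi>

        E, V = np.linalg.eigh(H)                         # diagonalize once

        def amplitude(t):
            """A(t) = <Phi| exp(-i H t) |Psi>, with hbar = 1."""
            U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
            return phi.conj() @ U @ psi

        for t in (0.0, 0.5, 1.0, 5.0, 50.0):             # zero only at t = 0 (orthogonality)
            print(f"t = {t:5.1f}   |A(t)|^2 = {abs(amplitude(t))**2:.4f}")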

  83. Hal S,

    Please don’t post any more urls without explaining why you think that any of them are relevant. Why are any of those web pages relevant?

  84. Hal S says:

    This is actually rather neat.

    We know that fundamental forces unify at higher energies, and we know the universe is expanding and its energy density is decreasing. Thus we have a permanent record of the fact that the universe had a higher energy density early in its evolution.

    Somehow, my definition of the transition amplitude A disappeared. But it’s just the matrix element: A(t) = <Phi| e^{-iHt} |Psi>.

  86. Hal S says:

    All stable heavy elements must have been produced in large stars through nucleosynthesis; therefore we have a permanent record that some large star existed before our own solar system.

  87. Hal S,

    Your conclusion does not follow from that. The average energy density for the entire universe may be decreasing with time, but that doesn’t imply that local densities might not increase.

  88. Hal S says:

    The universe evolves through the process of diffusion… some bound states are only producible at energy scales larger than what we see today.

  89. Hal S says:

    i.e. supermassive black holes and spiral galaxies

  90. Sorry, I don’t see how any of your comments are relevant. To say that some reaction is only possible at such and such an energy scale is a probabilistic statement. What it means is that without that energy, the transition becomes much less likely. But the probability never goes to zero unless the transition violates a conserved quantity.

  91. Qubit says:

    I can live my life backwards through time, while everyone else lives forwards through time, and we can all see the exact same history of the world. I don’t think I am real enough, and am probably dead already because my life is improbable.

    What they don’t tell you at school is that it is possible to produce an imaginary black hole in your mind; information on its own is enough to do this. This is called frame setting; it makes your life real and observed up to that point. Then you live on as if being born again; in other words, you exist at every single point in the universe as a real object. Then you simply are a universe at that point. Everything before it is simply your imagination, which has become real. You surround the universe like a Dyson sphere around a star, and then you take everything you need to carry on your life as yourself and nothing more. Doing this is the same as an animal that has no conscious ability to understand that it is giving birth.

    If I got something wrong then it’s your fault, because you used me to do it! You can’t expect me to get “everything” right. Your frame is the one I set, and unless you are “Your” then you can dismiss this as nonsense.
    It’s not wrong anyway; its entropy is just too high at the beginning to be right, it needs to be a lot lower. The universe is not real, it’s probably a simulation and that really does suck!

    Look left see blue “Think way of the mushroom!”, “Eyes forward!”, “A cat sees with two eyes”, look right? “I can’t remember?”

    Do it yourselves next time!

    If you don’t understand this, then ignore it! And don’t comment!

  92. Lawrence B. Crowell says:

    To discuss quantum fluctuations I am going to texify some here. I hope I make no errors. Often when I do this I have to repair errors I make in doing this, but there is no preview. So here goes.

    If you have a state psi(t) it will evolve into a state psi(t + \delta t) by the Schrödinger equation
    i\hbar\frac{\partial\psi(t)}{\partial t} = H\psi(t)
    where I will from now on set \hbar to one. The Hamiltonian H defines a unitary time development operator U(t) with
    |\psi(t_0 + t)\rangle = e^{-iHt}|\psi(t_0)\rangle
    where the unitary operator is the exponential term on the right. We then consider the overlap of a state and its time development a small increment of time later,
    \langle\psi(t)|\psi(t + \delta t)\rangle = \langle\psi(t)|e^{-iH\delta t}|\psi(t)\rangle
    which with Taylor’s theorem gives
    \langle\psi(t)|\psi(t + \delta t)\rangle \simeq \langle\psi(t)|\left(1 - iH\delta t - \tfrac{1}{2}H^2\delta t^2\right)|\psi(t)\rangle
    This is the overlap between a state and its time development into the future. Now what we do is take the modulus squared of this to get
    |\langle\psi(t)|\psi(t + \delta t)\rangle|^2 \simeq 1 - \left(\langle\psi(t)|H^2|\psi(t)\rangle - \langle\psi(t)|H|\psi(t)\rangle^2\right)\delta t^2
    The last two terms on the right are
    \left(\langle H^2\rangle - \langle H\rangle^2\right)\delta t^2,
    with some compression of notation. The term in parentheses is the square of
    \Delta H = \sqrt{\langle H^2\rangle - \langle H\rangle^2}
    Now this is the Heisenberg uncertainty principle: the whole package in the modulus squared is
    \Delta H\,\delta t/\hbar = \sqrt{\langle H^2\rangle - \langle H\rangle^2}\,\delta t/\hbar.
    This says that if I sample the system in a short time I will get a range of possible values for the energy of the system.
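
    A quick numerical check of this expansion (an illustrative sketch with a random Hamiltonian and state, hbar set to 1 as above): for small delta t the exact survival probability agrees with 1 - (Delta H)^2 (delta t)^2.

        # Check |<psi| exp(-iH dt) |psi>|^2  ~  1 - (Delta H)^2 dt^2  for small dt.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 8

        M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        H = (M + M.conj().T) / 2                      # random Hermitian Hamiltonian

        psi = rng.normal(size=n) + 1j * rng.normal(size=n)
        psi /= np.linalg.norm(psi)                    # normalized state

        E, V = np.linalg.eigh(H)
        var_H = ((psi.conj() @ (H @ H) @ psi) - (psi.conj() @ H @ psi) ** 2).real

        for dt in (1e-3, 1e-2, 1e-1):
            U = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T
            exact = abs(psi.conj() @ U @ psi) ** 2
            approx = 1.0 - var_H * dt ** 2
            print(f"dt = {dt:g}:  exact = {exact:.8f},  1 - (dH)^2 dt^2 = {approx:.8f}")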

    As a digression this has some interesting properties, for this is the metric of the Fubini-Study space and the fibration over the projective Hilbert space.

    The quantum fluctuation is physically the result of sampling or measuring the system. Quantum mechanics is, contrary to popular belief, a completely deterministic physics. The Schrödinger wave equation is completely deterministic. But since the wave is complex valued, and there is this fibration given by the unitary operator over the projective Hilbert space, what we measure in real variables has this range of possible outcomes.

    In thermodynamics there are also fluctuations, and a formula for them based on the de Broglie wavelength. It is analogous to this, but there one has to look at Fokker-Planck equations or Langevin processes. These are similar to quantum fluctuations, but are really different physics.

    Lawrence B. Crowell

  93. Pingback: Starts With A Bang! » Brain-damaged arguments and Boltzmann Brains

  94. Qubit says:

    Maybe there is a killer virus that lurks in the universe, one that makes the odds of a Boltzmann brain popping out of the ether at any point almost impossible. Viruses don’t have brains, but they do a great job of killing them, and are a hell of a lot more likely to pop into existence than a brain. It’s not inconceivable that a type of virus removes all non-real Boltzmann brains, leaving only evolved life forms possible. The nature of the universe is extremely violent and unforgiving; the universe simply has no time for things that are not real! Of course there is a chance a Boltzmann brain can get lucky, but you just have to look at conception in humans to realise that once one sperm is in the egg, no other sperm can get in; the odds of twins are statistically possible, but in universe terms very unlikely.

    You could tell yourself this in the past, if you found a way; but you would end up leading yourself into an unreal universe. Once you passed the point at which you sent the information to yourself in the past, you would not know what to do next. Unless you told yourself what to do, and that would be like creating a big polo mint in space and driving through the hole! Where would that get you?

  95. Pingback: Everyone's a Critic | Cosmic Variance

  96. Pingback: The lure of science pornography » Undress Me Robot

  97. Pingback: The Boltzmann Brain Controversy « In Other Words

  98. RedCharlie says:

    Anybody read Huw Price’s “Time’s Arrow and Archimedes’ Point”?

    To paraphrase Price, the mystery is not why entropy always increases but why entropy is always lower in the past. If we assume that physics is time-symmetric, then not only can cream spontaneously jump out of a cup of coffee and pebbles leap out of ponds, but they do, regularly; to see this, all one has to do is run the tape backwards.

    Price’s argument against Boltzmann goes something like this:

    1) We are not now in thermal equilibrium.

    2) The statistical basis of the 2nd law dictates all matter moves toward thermal equilibrium.

    3) So, we have to explain why we are not in equilibrium now, in the present.

    4) Boltzmann’s solution was to allow random fluctuations from equilibrium, which are certainly possible, even inevitable, given enough time.

    5) So, maybe we are in a non-equilibrium state because of such a random fluctuation in a universe that is otherwise in thermal equilibrium (most of the time).

    6) The past, however, appears to us to have been even further from equilibrium; in fact, the further back in time one goes, the further from equilibrium the (observable) universe appears to have been. I.e., entropy paradoxically decreases going back in time. If this doesn’t seem like a problem, please understand that we want a physics that is time-symmetric, i.e. has rules that work going forward or backward in time. So it’s no good to just say the 2nd law only applies going forward in time.

    7) If it is the case, however, that our current departure from thermal equilibrium is due to a random fluctuation, then it is quite likely that our past is “fake”, that is, it only appears to be lower in entropy (further from thermal equilibrium) than it is now. This is because the more a possible fluctuation moves away from equilibrium, the less likely it is to happen. Assuming our current state is due to such a fluctuation, it will always be a “cheaper” or a more likely solution, statistically, to say the past is fake than to say the past really was lower in entropy than now. If one admits that the past is fake, then the furthest departure from equilibrium one must account for is now. But the further you push the point of “reality” into the past, the further the explanatory fluctuation must have departed from equilibrium, and thus the less likely it is to have actually happened that way.

    This really isn’t to say that I believe our brains spontaneously appeared out of the muck a nanosecond ago. It’s really a reductio ad absurdum argument. It just shows that Boltzmann’s proposed fluctuation is really not a satisfactory explanation of why the (observable) universe has a low entropy in the past.
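
    In equation form (a standard Boltzmann-style estimate, not something Price states in exactly these terms), the “cheapness” in step 7 comes from the relation between entropy and probability: the relative likelihood of a fluctuation down to entropy S, starting from equilibrium at entropy S_eq, is roughly

    \[
      \frac{P(S)}{P(S_{\mathrm{eq}})} \;\sim\; e^{(S - S_{\mathrm{eq}})/k_B},
    \]

    so a fluctuation that only has to produce the present macrostate, with entropy S_now, is favored over one that produces a genuinely lower-entropy past, with S_past < S_now, by the enormous factor e^{(S_now - S_past)/k_B}.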

    And I think Carroll is wrong about this too:
    “Note that this scenario makes no assumptions about our typicality — it assumes, to the contrary, that we are exactly who we (presently) perceive ourselves to be, no more and no less.”

    If statistically the past is more likely to be “fake” than “real” (i.e. only appearing to be lower in entropy than now), then why not the present also? It’s more likely that the monkeys only typed out the first page of Hamlet rather than waiting around to finally produce the whole thing. If we just popped out of the uniform muck a nanosecond ago, it’s more likely that what popped out was just an empty, ephemeral stage-set, and less likely to be the “real thing”, no?
