Richard Feynman on Boltzmann Brains

The Boltzmann Brain paradox is an argument against the idea that the universe around us, with its incredibly low-entropy early conditions and consequent arrow of time, is simply a statistical fluctuation within some eternal system that spends most of its time in thermal equilibrium. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)
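
To get a rough feel for the numbers, here is a minimal back-of-the-envelope sketch in Python. The Boltzmann weight of a downward entropy fluctuation of size ΔS goes roughly as exp(−ΔS/k), and the two entropy deficits below are purely illustrative orders of magnitude chosen for the comparison, not measured values.

```python
import math

# Relative weight of a downward entropy fluctuation of size dS goes
# roughly as exp(-dS / k_B). Work in units where k_B = 1 and compare
# two illustrative, made-up orders of magnitude:
dS_brain = 1e23      # hypothetical: a brain-sized patch of order
dS_universe = 1e88   # hypothetical: a whole low-entropy early universe

# The ratio exp(dS_universe - dS_brain) overflows a float, so take log10:
log10_ratio = (dS_universe - dS_brain) / math.log(10)
print(f"log10 of the relative weight ~ {log10_ratio:.2e}")  # ~4.3e+87
```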

The first invocation of an argument along these lines, as far as I know, came from Sir Arthur Eddington in 1931. But it’s a fairly straightforward argument, once you grant the assumptions (although there remain critics). So I’m sure that any number of people have thought along similar lines, without making a big deal about it.

One of those people, I just noticed, was Richard Feynman. At the end of his chapter on entropy in the Feynman Lectures on Physics, he ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws.

So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?

Feynman, following the same logic as Boltzmann, contemplates the possibility that we’re all just a statistical fluctuation.

One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again.

Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past. In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life.

But, of course, it doesn’t really suffice as an explanation for the real universe in which we live, for the same reasons that Eddington gave — the Boltzmann Brain argument.

We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it.

After pointing out that we do, in fact, see order (low entropy) in new places all the time, he goes on to emphasize the cosmological origin of the Second Law and the arrow of time:

We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in the history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future.

And he closes by noting that our understanding of the early universe will have to improve before we can answer these questions.

This one-wayness is interrelated with the fact that the ratchet [a model irreversible system discussed earlier in the chapter] is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding.

We’re still working on that.


114 thoughts on “Richard Feynman on Boltzmann Brains”

  1. Sean:

    Scott Aaronson’s ‘axiom’ of the unique non-reusability of time (in comparison to the “reusability” of the space coordinates) is the kind of axiom to which I was alluding.

    If we begin with such an axiom, the Liouville measure, as well as most of the foundations of physics, will need some tweaking. Wouldn’t the reversibility of macro- and micro-physics then disappear, and would BBs and an ‘origin of time’ – or a time of near-zero entropy – then make any sense?

  2. Sean, when I reason probabilistically I don’t just condition on the observations I’ve made of the macrostates of various systems, I include half a dozen supplementary working assumptions. One of those assumptions is that we genuinely do live just a few billion years after a very low entropy Big Bang, as opposed to living in some kind of random fluctuation which merely keeps imitating that situation.

    I’m not just saying that for the sake of argument. I assume this for roughly the same reason that I assume the laws of physics will continue to hold, and that I am not actually Descartes trapped inside his own mind being deceived by the devil (or any of the tedious modern variants of that scenario). Not only is it psychologically more pleasant to assume these things, the payoff is enormous: making these assumptions is what lets us do science, instead of getting mired down in intractable philosophical issues.

    Obviously I’m going to make the same predictions as you about melting ice cubes and so on. But you seem to be suggesting that there’s something inconsistent, or intellectually flawed in some way, about failing to stick to the formula “condition on observed macrostates, and assume a random microstate using the Liouville measure” for everything from ice cubes to cosmology. That’s what I don’t accept. Probabilistic reasoning is a rigorous tool for making the best of uncertainty, but what “making the best” actually means is a matter of context. In gambling, in public health, etc., “making the best” of the indeterminacy we have to deal with isn’t hard to define; of course there are different political values that can be brought to bear in public health matters, but at least we can point to outcomes that make some reasonably well-defined group better off on average.

    So my complaint boils down to this: who is actually better off by reasoning in the way you suggest, and ruling out (or strongly weighting against) cosmological models that would allow Boltzmann brains in the future? Can you point to some tangible group who gets the benefit of this approach for dealing with uncertainty — in the way you could persuade medical researchers or quality control engineers to adopt their normal statistical methodologies?

    Maybe you feel this is just the intellectually correct thing to do, but apart from a certain quality of elegance and simplicity in assuming “we are in a random microstate” to the greatest degree possible, I honestly don’t see what compels this view.

    And the downside, from my point of view, is that people are publishing papers in which they claim to have put bounds on the cosmological constant, or to have deduced the “necessity” of various unobserved features of fundamental physics, based solely on the fact that we’ve so far found ourselves not to be Boltzmann brains. How far would you be happy to see this trend go? I’m not being snarky, I’m honestly curious — should someone share a Nobel prize for guesstimating the cosmological constant this way, if a group of observational astronomers later get a value in the same ballpark after a few decades of painstaking work?

  3. There is something I do not understand regarding entropy. It seems that P(galactic cluster) is greatly increased if you consider the prior, that is P(galactic cluster|universe), and further that P(earth|galactic cluster) is increased, and so is P(brain|earth). Now while many cosmologists reason backwards from here, wondering what the nature of order is, I can’t help but reason forward as well, and I do not see thermal equilibrium. All current evidence suggests that the probabilistic expansion I have described continues, that is, P(something much smarter than a brain|brain) is increased, and that we will in fact see something much smarter than a brain. Suppose that thing develops here on earth. One theory holds that a much smarter intelligence would not bother here on earth, which was just a catalyst, but would colonize another area of the universe in order to capitalize on its resources. Now consider the reductio of this argument, where most matter is optimally used and much time has elapsed. To me this is not a cold, dead, dark probabilistic flatness, but quite the opposite. To me this sounds like an optimization process, much like evolution, or any number of algorithms from computational learning theory. The end result that I see is that things like brains are mysterious for a reason not mentioned by Boltzmann: they are mysterious because they have the ability to take the matter in the universe and use it to create something that can take the matter in the universe and so on.

    Please point out any flaws in this reasoning! In particular, how is it possible to consider the evolution of the universe as a decrease in order when, since the dawn of life, all signs point to the (apparent, to me, perhaps naively) opposite?

  4. Greg Egan — asking “who benefits” is a strange way to do science. But if you want to do it that way: the benefit is that Sean’s approach forces us to try to *explain* the extremely non-generic character of the early universe, by deducing it from string theory or whatever. The drawback of Boltzmann’s explanation is that it doesn’t lead anywhere. OK, the entropy of the early universe was low because of a fluctuation, right, and that tells us…..what?

    Also, how would we rule out any theory if we allow appeals to extremely improbable events [such as *not* finding equilibrium when we look in new places]? “Yes, my theory predicts that there should be easily observed processes violating CPT invariance all over the place, but the thing is, every time that happens, just by my damned bad luck, a small black hole nucleates and swallows the particles, then it evaporates and nobody noticed, yes, I know that seems unlikely, but it’s not *impossible* right?…..”

    No, nobody should get the Nobel “simply” by following a well-known idea to its conclusion. They should get tenure though. On the other hand, someone who shows that string theory/loop quantum gravity/whatever leads unambiguously to low-entropy initial conditions, and that this entails a precise prediction of the amount of CMB non-gaussianity which is then confirmed to five sigma by observations, well, now you are talking…..

  5. Brian, evolution on Earth has involved some localised decreases in entropy, but only at the same time as a vastly greater increase in entropy in other systems.

    Ted Bunn has a nice recent analysis of this issue on his blog:

    http://blog.richmond.edu/physicsbunn/2008/12/07/entropy-and-evolution/

    http://blog.richmond.edu/physicsbunn/2008/12/11/more-on-evolution-and-entropy/

    If the universe becomes filled with complex lifeforms, that won’t in itself prevent it reaching thermal equilibrium eventually. As stars die out, and other energy sources are exhausted, life will find it increasingly difficult to continue. Of course, it’s conceivable that we’re missing some scientific insight that civilisations in the distant future might exploit to circumvent this fate, but I believe the current best guess is that in the long run, everything dies.

  6. Greg– To get the right answer within our observable universe, of course the right thing to do is not only to conditionalize on our macrostate, but also to impose some sort of past hypothesis in the form of a low-entropy early condition. The question is, why?? You have a theory that says the overwhelming majority of macrostates of this form make a certain prediction for what comes next, and that prediction doesn’t come true, and instead of rejecting the theory you just place extra conditions on the space of allowed trajectories until you fit the data?

    Again: this absolutely is the right thing to do in the real world. But I would like to have a theory that predicts it should be that way, rather than just imposing it as an extra condition. It would be one thing to impose a condition on the evolution of the universe (which seems more physical, if still kind of ad hoc), but you’re imposing a condition on which moment in the universe’s history we find ourselves, which just seems completely arbitrary to me. I can’t stop you from doing that (because it does fit the data), but it seems to just be avoiding a really useful clue about fundamental physics that nature is trying to give us. (I think Ben is on the same track, but I don’t want to put words in his mouth.)

    Brian– I don’t think that thinking about brains and biology is really useful in this context, despite the temptation. It’s entropy that is important, and its behavior is straightforwardly predicted by conventional statistical mechanics. On the other hand, it’s true that “complexity” (whatever that means) has gone up since the Big Bang. Understanding that is a very important problem, but a little tangential to this one.

  7. I’ll say it again but no more 😉

    Don’t confuse dynamics with initial conditions. It is so much easier to set up low entropy initial conditions than high ones. So that is what we should expect, and that is what we see.

  8. Ben Button, you’ve misunderstood me if you think I’m championing anything that requires less explanatory work from scientists than Sean’s approach. On the contrary, what I’m trying to do is rule out premature pseudo-explanations, which are predicated on the assumption that we can use typicality for extra leverage in entire domains where it has not been empirically tested.

    When someone disputes the usual assumptions of thermodynamics for, say, an Earth-bound system like a mixture of ice and water, it’s very easy to force them to see the error of their ways. If Alice thinks an ice cube is overwhelmingly likely to melt in ten times its mass of boiling water, while Bob just flips a coin and says “Heads the ice melts, tails the water freezes”, she can make him look foolish, or clean him out in a series of bets, very quickly. So the assumptions underlying her predictions aren’t just down to conceptual elegance: they’ve been tested, and found to work.

    All that I’m arguing is that we have no right to push this assumption of typicality far beyond the domain where it’s been established. It’s OK to keep it as a working hypothesis, but we need to be honest about that. It’s no good saying “use typicality everywhere, or you have no right to use it anywhere”. I don’t lose my Liouville-measure privileges for ice cubes simply by refraining from taking exactly the same approach to cosmology that I take to tabletop thermodynamics.

    I don’t know why you imagine any of this leads to less motivation to derive the low entropy initial conditions of the universe from first principles. Nothing I said championed the notion that we came from a meaningless low-entropy fluctuation; what I said was that I assumed that hypothesis was false, but I’m waiting for someone to do the real work required to turn my assumption into a well-founded belief, instead of waving the typicality wand and pretending that they already know, with near-certainty, that there never have been and never will be Boltzmann brains, anywhere, ever.

    Asking “who benefits” is not a strange way to do science, it’s asking for a precise statement of what we’re aiming to optimise with a particular strategy for dealing with a lack of certainty. It’s not about grubby materialistic concerns, it’s just asking for a tangible illustration of what someone is claiming, when they claim to have probabilistic knowledge. If you tell me “I’ve deduced X, with a high degree of certainty”, but you can’t actually demonstrate X directly, what does that mean? Maybe it just means you’re honestly satisfied with your own reasoning process … but why on Earth should anyone else believe you? With Alice vs Bob and the ice cubes, we can show why we should believe Alice, by showing her win bets against Bob. When someone says “This vaccine saves 100,000 lives for every life it costs in side effects”, we can look at the data and see if that’s really true. But when a cosmologist says “I’m 99.9999999% sure that there will never be Boltzmann brains, anywhere, ever” … if there is any actual content to that claim, it ought to be possible to point to a scenario where people who doubt it are shown to be spectacularly wrong.

    As I mentioned earlier, so far the only relevant scenario I can imagine is the tautological one: if every single being in the history of the universe assumes typicality, then whatever the actual majority is, they will have correctly guessed their own majority status. But that’s vacuous. Why would we play that game and mistake it for science?

    I don’t mind if people tentatively, and explicitly, assume typicality. What I do mind is people treating this assumption as a substitute for hard evidence. To be clear, I don’t think Sean has done that — I think he’s been very careful in stating his assumptions explicitly.

  9. Hypothetical situation. The universe kicks into reverse and starts going backwards. For some reason every particle in the universe instantaneously reverses course. And also space begins contracting instead of expanding. Everything in the universe hits a rubber wall and bounces back 180 degrees.

    So now instead of expanding, everything is on an exact “rewind” mode, and we’re headed back to the “Big Bang”.

    The laws of physics work the same in both directions…if you solve them forward in time, you can take your answers, reverse the equations and get your starting values, right?

    Okay, so everything has reversed direction. The actual reversal process is, of course, impossible. But after everything reverses, everything just plays out by the normal laws of physics. Only that one instant of reversal breaks the laws of physics.

    TIME is still moving forward in the same direction as before. We didn’t reverse time. We just reversed the direction of every particle.

    So, now photons and neutrinos no longer shoot away from the sun – instead now they shoot towards the sun, which when the photons and the neutrinos and gamma rays hit helium atoms, the helium atoms split back into individual hydrogen atoms, and absorb some energy in the process. Again, no physical laws are broken, and time is moving forward.

    Now, back on earth, everything is playing out in reverse as well. You breathe in carbon dioxide and absorb heat from your surroundings and use the heat to break the carbon dioxide into carbon and oxygen. You exhale the oxygen, and you turn the carbon into sugars, which you eventually return to your digestive tract where it’s reconstituted into food, which you regurgitate onto your fork and place it back onto your plate.

    Okay. So, still no physical laws broken. Entropy is decreasing, but that’s not impossible, no laws of physics are being broken.

    In this case, it must happen because we perfectly reversed the trajectory of every particle in the universe.

    NOW. Your brain is also working backwards. But exactly backwards from before. Every thought that you had yesterday, you will have again tomorrow, in reverse. You will unthink it.

    My question is, what would you experience in this case? What would it be like to live in this universe where time is still going forward, but where all particles are retracing their steps precisely?

    The laws of physics are still working exactly as before, but because all particle trajectories were perfectly reversed, everything is rolling back towards the big bang.

    In my opinion, we wouldn’t notice any difference. We would NOT experience the universe moving in reverse, we would still experience it moving forward exactly as we do now…we would still see the universe as expanding even though it was contracting, we would still see the sun giving off light and energy even though it was absorbing both. In other words, we would still see a universe with increasing entropy even though we actually would live in a universe with decreasing entropy.

    And why would that be the case? Because our mental states determine what is the past for us and what is the future. There is no “external arrow of time”. The arrow of time is internal. The past is the past because we remember it and because the neurons of our brains tell us that it has already happened to us. The future is the future because it’s unknown, and because the neurons of our brains tell us that it will happen to us soon.

    If there is an external arrow of time, it is irrelevant, because it doesn’t affect the way we perceive time. Our internal mental state at any given instant determines what is the future and what is the past for us.

    In fact, you could run the universe forwards and backwards as many times as you wanted like this. We would never notice anything. We would always perceive increasing entropy. For us, time would always move forward, never backwards.

    My point being, as always, that our experience of reality is always entirely dependent on our brain state. We can’t know ANYTHING about the universe that is not represented in the information of our brain state at any given instant.

    Forwards or backwards, it’s all just particles moving around, assuming various configurations, some of which give rise to consciousness.

  10. Ben Button wrote:

    “Yes, my theory predicts that there should be easily observed processes violating CPT invariance all over the place, but the thing is, every time that happens, just by my damned bad luck, a small black hole nucleates and swallows the particles, then it evaporates and nobody noticed, yes, I know that seems unlikely, but it’s not *impossible* right?…..”

    So you’re trying to compare (A) a completely arbitrary made-up theory for which there is no evidence whatsoever, with (B) the widely accepted proposal that the universe started with a low entropy Big Bang and will eventually undergo heat death. (B) is a plausible consequence of all of physics and cosmology as it’s presently understood. That might well change, but so far it hasn’t.

    Assuming theory (B), by the way, it’s close to inevitable that some life will arise prior to the heat death. Saying “we are that kind of life, rather than the later kind” is not a bizarre implausibility; it’s close to certain that it’s true for someone. If I win one lottery, I should not be amazed: someone had to win, and I have a perfectly good theory of lotteries, consistent with everything else I know about the world, that allows for one winner and millions of losers. If I win the lottery twice, that’s when my theory is massively demoted, because it does not predict that a double winner is inevitable. Claiming that we are pre-heat-death life in a universe with a massive amount of post-heat-death life is just one lottery win, not two.
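
    Just to make the lottery analogy concrete, here is a quick Python check. The per-ticket odds and the number of tickets sold are hypothetical values chosen purely for illustration:

    ```python
    # Hypothetical lottery: each ticket wins a given draw with probability p,
    # and 'tickets' of them are sold per draw.
    p = 1e-8
    tickets = 10**8

    # "Someone had to win": the chance that at least one ticket wins is sizeable...
    p_someone_wins = 1 - (1 - p) ** tickets
    # ...but the chance that one particular person wins two independent draws is p**2.
    print(f"P(someone wins a given draw) ~ {p_someone_wins:.2f}")  # ~0.63
    print(f"P(a given person wins twice) ~ {p**2:.0e}")            # 1e-16
    ```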

  11. Again, maybe I’m missing something really obvious, but after perusing some of the literature (Page, Hartle, Susskind and others) and reading the comments I still fail to see where the problem is.

    There’s something a little fishy about arguing that imposing a low entropy initial condition right after the beginning of the universe is somehow ‘extra’ baggage to a theory, and that it should be explained or derived instead. Well, this extra baggage is, as Feynman says, the reason that we perceive the human notion of time (past/present/cause/effect) in the first place, and usually we take that as an axiom when building final theories, and not a theorem. It’s conceivable that there’s a roundabout way of deriving it starting from different axioms, but I doubt it will teach us any new physics.

    I also fail to see why the anthropic principle is not clearly a valid argument here. If you had very high entropy in the beginning of the (assume for now finite and young) universe after inflation, stars would be disfavored from forming, and so on. The only way out of that is to argue like Boltzmann did, but then I think everyone agrees that leads to inconsistent observations. So again, where’s the problem?

  12. Sean, thanks very much for the reply. However, I am still having difficulty with how we get from entropy to probability. There have been many comments since and perhaps that has been answered, but from a brief skimming I don’t see one that I find compelling. The issue I still have (which has since been raised in the comments also) is that, while I agree higher entropy fluctuations are more probable than lower entropy fluctuations, most of the former will not give rise to sentient observers capable of assessing the Boltzmann paradox, so it does not seem fair to me to count all of them when assessing relative probabilities of how we ended up in the sort of universe we see. (I am not even sure a Boltzmann Brain would count – can it build telescopes and make observations, or does it only have the illusion of doing so? Nor would a single solar system be likely to do so, based on current observations.)

    So it still feels somewhat like the Lottery-winner Fallacy (a standard creationist argument) to me. I must agree that getting here from a statistical fluctuation is an unlikely event, so it would be neater to find a more likely way, but still do not feel we have enough data to assess how big the lottery is or how long it has been running, so as to rule out the possibility of our winning it by chance (if this is winning) by some sort of “explanatory filter”. Your invocation of Occam’s Razor carries some weight as a way to guide our thinking and further research, but as you know it is just a methodological guide, not a rule of logic.

    (I of course am not expecting a further reply at this late point, but just wanted to clarify my position. )

  13. Low Math, Meekly Interacting

    I’m just wondering if my reasoning is wrong-headed here. I’ll be pretty darn happy if it is, at any rate.

    Anyway, say the universe keeps expanding to the point of heat death, as currently expected, and no matter where you are, there’s nothing to see within any causal horizon but an ultra-cool bath of Hawking radiation. Forever. Except, occasionally, since we’ve got an infinite amount of time to wait for them, Boltzmann brains, even Boltzmann-brain-eating zombies, will spontaneously pop out of the vacuum. It’s absurdly unlikely, but, again, because we’ve got literally forever to consider, all events, no matter how improbable, eventually will happen. They must. Just as a universe like ours, with a low-entropy past, and a high-entropy future, as ridiculously stupendously meganormously unlikely as even THAT is, must happen in the end. And again, too. In fact, there must eventually be a universe that buds off of our universe that has the same history as our universe, in which my Doppelgänger is asking this very question on a blog just like this one. Actually, there must be an infinity of these.

    In sum: We’re talking about probabilities in a cosmic model that, if I’m not mistaken, allows for infinity, somewhere in the past or the future, or both. “Eternal” inflation, right? In which case, I’m not sure why Boltzmann-brainian scenarios of any kind that apply to the megaverse, using simply the rules of that model, don’t all happen if the proposed hypothesis does not blatantly violate the laws of that model. Unless ours is the first universe, or we can show somehow that there’s some manageably finite number of universes in our past from which ours has budded, is it not staggeringly likely that we ARE a statistical fluctuation? Mustn’t we be, because, with infinite time to wait for such things, our existence as the consequence of a statistical fluctuation approaches inevitability, no matter how small the probability?

    I just wonder if, with infinities lurking somewhere, questions of cosmological origins and so forth aren’t plagued with these “anything goes” consequences. How can they not be?

  14. There is a discussion of the same theme (seeking the foundations of statistical physics in the initial conditions of the universe) in Sec. 2.1 of The Feynman Lectures on Gravitation. This lecture was delivered in the fall of 1962. Remarkably, Feynman was teaching the sophomore year of The Feynman Lectures on Physics concurrently with this graduate-level course on gravitation. The freshman lecture on the foundations of thermodynamics, quoted by Sean, was just a few months earlier, in the spring of 1962. In the concluding paragraph of Sec. 2.1, Feynman says that “The question is how, in quantum mechanics, to describe the idea that the state of the universe in the past was something special.”

    I had several discussions with Feynman during the early 1980s about inflationary cosmology. He was interested in the topic, but always raised the same objection: How does one justify appealing to quantum field theory on a semi-classical background spacetime? His point was that one needs to explain why the initial state was special not just for the matter but also for the gravitational degrees of freedom.

    I suspect that recognizing that the justification of the second law is really a problem in quantum cosmology was an unusual insight in 1962.

    Incidentally, in a later lecture in the same course Feynman argues for Omega=1 based on naturalness: “the gravitational energy is of the same order as the kinetic energy of the expansion — this to me suggests that the average density must be very nearly the critical density everywhere.”

    Kip Thorne and I wrote a foreword for the Lectures on Gravitation in 1995, pointing out these and other insights from the lectures.

  15. Low Math, Meekly Interacting

    And I mean to say, I think I get what the BB paradox is about, I think I kind of get what Feynman is saying, I just don’t understand why infinity doesn’t trash that logic, or any logic, for that matter. Unless infinite time frames can be eliminated, I just don’t see how one can forbid anything, even if probability argues very strongly against it. Isn’t this the nature of singularities, that the rules break down? Are we assuming we know that the rules avoid this breakdown? Is the argument against the paradox truly so strong that it can suppress these consequences of infinity on its own? Because I don’t see myself how the incredibly huge probability that there is some better explanation for the low-entropy state we originated from “cancels out” the absurd improbability of the fluctuation hypothesis, given that infinity forbids nothing. We just happen to be an unlikely consequence in an infinite array of consequences, all of which presumably “exist”. So what, then?

  16. The argument put by people like Page is that we should conclude the universe will decay on a time scale rapid enough to prevent BBs from ever forming. But if we tabulate the consequences of different strategies for reasoning about our observations, I don’t see any significant advantage for that approach.

    To keep things simple, assume two possible universes. Both universes start with a low-entropy Big Bang and contain exactly 1 genuine, pre-heat-death, non-fluctuated region that matches everything we’ve ever observed.

    — Universe A decays before any Boltzmann brains form.
    — Universe B does not decay until it has experienced N thermal fluctuations that match everything we’ve ever observed, but which reveal their nature as fluctuations on the next observation, along with M fluctuations that also match everything we’ve ever observed, but continue to look like non-equilibrium systems even on the next observation. Assume that N is very large, while M will, of course, be vastly smaller than N.

    Consider two possible strategies for dealing with our next observation of part of our surroundings, the system P:

    — Strategy 1 says if P is in thermal equilibrium, conclude you’re in Universe B, and if P is not in thermal equilibrium, conclude you’re in Universe A.
    — Strategy 2 says if P is in thermal equilibrium, conclude you’re in Universe B, and if P is not in thermal equilibrium, remain agnostic about which universe you’re in.

    If the universe is Universe A:

    Strategy 1 leads to 1 civilisation correctly concluding they’re in Universe A.
    Strategy 2 leads to 1 civilisation remaining agnostic.

    If the universe is Universe B:

    Strategy 1 leads to:
    — N civilisations correctly concluding they’re in Universe B;
    — M+1 civilisations falsely concluding they’re in Universe A.

    Strategy 2 leads to:
    — N civilisations correctly concluding they’re in Universe B;
    — M+1 civilisations remaining agnostic.

    So there are pros and cons for both, but certainly no spectacular advantage for Strategy 1.

    N being large is a red herring; it makes no difference to the relative advantages. The fact that 1/N is tiny, and that the pre-heat-death civilisation in Universe B is hugely atypical, is beside the point.
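
    A toy tally of that bookkeeping, just to make the comparison explicit (a Python sketch of the hypothetical universes and strategies described above; the specific values of N and M are placeholders, since only N >> M matters):

    ```python
    def tally(universe, strategy):
        """Return (correct, wrong, agnostic) civilisation counts for the toy model."""
        N, M = 10**6, 10  # placeholder values; only N >> M matters
        if universe == "A":
            # One genuine pre-heat-death civilisation sees disequilibrium next.
            return (1, 0, 0) if strategy == 1 else (0, 0, 1)
        # Universe B: N fluctuations see equilibrium next (both strategies call
        # this correctly); the 1 genuine civilisation plus M persistent
        # fluctuations see disequilibrium next.
        if strategy == 1:
            return (N, M + 1, 0)   # the M+1 wrongly conclude they're in Universe A
        return (N, 0, M + 1)       # Strategy 2: the M+1 remain agnostic

    for u in ("A", "B"):
        for s in (1, 2):
            print(f"Universe {u}, Strategy {s}: {tally(u, s)}")
    ```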

  17. I just took delivery of a truckload of 1 million quarters [American 25 cent coins]. They were just poured out of the truck any old way. To my surprise, it turned out that *every single one of them* was lying there with heads up.

    Question: should I have been surprised?

    I ordered another truckload to be delivered tomorrow, again consisting of one million quarters.

    Any predictions as to the number of heads that will turn up? If I get one million heads again, should I be surprised? Should I seek an explanation, or just accept it as one of those things that are bound to happen now and then?
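
    For what it’s worth, here is the arithmetic behind the surprise, as a quick Python check (the only assumption is fair, independent coins):

    ```python
    import math

    n = 1_000_000  # quarters per truckload

    # Chance of all heads from fair, independent coins, expressed in log10:
    log10_p_all_heads = -n * math.log10(2)
    print(f"P(all heads) ~ 10^{log10_p_all_heads:.0f}")  # roughly 10^-301030

    # Tomorrow's truck: expected heads and the typical spread around it.
    mean = n / 2                   # 500,000
    sigma = math.sqrt(n * 0.25)    # binomial standard deviation, about 500
    print(f"expect about {mean:.0f} heads, give or take {sigma:.0f}")
    ```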

  18. Prof Preskill:
    Thanks very much indeed for your comment. [By the way, I hope Prof Preskill won’t mind my pointing out that the full text of his and Thorne’s preface can be found easily by googling.]

    These may seem like strange questions, but just to be perfectly clear, would you agree with the following statements?

    [a] Feynman believed that all manifestations of the second law of thermodynamics are ultimately due to the special initial conditions at the beginning of time [ie, that all such manifestations are ultimately cosmological in nature].

    [b] Feynman believed that some as-yet-unknown physical law was responsible for the special initial conditions.

    Thanks!

  19. Ben, I don’t know what you imagine your truck of magic quarters is analogous to, so you might want to engage directly with something I’ve actually said if you want to dispute it.

  20. Sean wrote:

    The point is that we can conditionalize over absolutely everything we think we know about the current state of the universe — either, to be conservative, exactly over the condition of our individual brains and their purported memories, or, to be more liberal, over the complete macrostate of a universe with 100 billion galaxies etc. And then we can ask, given all of that, what are we likely to observe next, according to the Liouville measure on phase space compatible with that macrostate?

    But isn’t that implicitly assuming that there’s only one experimenter, in one particular (and hence arguably typical) microstate? In the Boltzmann brain scenario, though, the whole idea is that there are a vast number of observers who, at least initially, think they’re in broadly similar situations: it looks to them as if there was a low entropy Big Bang 14 billion years ago.

    So long as at least some of that vast number really are 14 billion years or so after the Big Bang, the proportion isn’t relevant: at the next observation, those who genuinely belong to the early universe will see a system far from equilibrium. Sure, the vast majority will see equilibrium instead, and they will correctly conclude that they arose from thermal fluctuations, but the existence of a non-zero minority who see disequilibrium remains a near certainty. (I say “a near certainty” only because it depends on what you think the probability is that life can evolve prior to heat death. I think most people would rate that as being quite high.)

    Conditioned on everything we think we know about the current state of the universe, the probability of a single typical observer in a universe containing Boltzmann brains seeing disequilibrium at their next observation is minuscule. But it’s that word “single” that smuggles in the selection fallacy that Hartle and Srednicki warned against.

  21. Hi Sean et al.,

    I appreciate this great discussion, but I’m hung up on an earlier point and I hope someone can help me out.

    When I read a phrase like “The problem is that the early universe looks very unnaturally ordered to us” (from a reply at 12:23 12/29 by Sean) I have trouble matching that to any actual picture I have of the early universe. In the standard hot big bang model the early Universe is filled with a nearly-relativistic, nearly-ideal gas of particles in very good thermal equilibrium; in what way can that state be called “unnaturally ordered”? To the contrary, if we pick any particular epoch when the Universe has some particular energy density, then it looks to me as though any fiducial volume is actually at _maximum_ entropy for its energy density.

    I don’t mean to dispute the general point made now by you, Penrose, Feynman and other luminaries, that the entropy of today’s Universe is increasing and so must have been lower in earlier times. But it’s also clear that the early thermal universe — which is actually most of the Universe’s history, if we count time logarithmically — was, given its global constraints, at maximum entropy. So this pushes the question of the “specialness” or “orderliness” of the early Universe back to why it had those constraints. As I count them there are basically two constraints which specify the early thermal universe in the standard model: (1) The chemical potentials for all massive species of particles are negligibly small, and (2) The metric is smooth, ie there are no black holes and not a lot of energy present in gravity waves.

    The second of these is mentioned explicitly by both Feynman and Penrose, for example, and involves many interesting questions: Why didn’t primordial black holes form at a very early epoch? What would a fully equilibrated gas of gravitons look like? etc. But I’m actually more interested in the first of these, which suggests a possibly deep connection between the arrow of time and baryogenesis. The entropy per co-moving volume is (nearly) constant during the Universe’s early thermal phase; it’s only after matter domination begins that entropy can increase through gravity and structure formation. But this Universe wouldn’t have gone over to matter domination without a finite chemical potential for a massive particle species — ie baryo-(and lepto-)genesis. So it seems to me that baryogenesis (or its equivalent in other Universes) is kind of a “gateway” for entropy growth. What do you think? Does this point ever come up in arrow of time discussions?

  22. Paul– I think you are doing exactly what, as John Preskill mentioned, Feynman warned us not to do — separating out the gravitational degrees of freedom from the matter degrees of freedom. That might be convenient for how we think about it, but there’s no law of nature which says “the entropy of the matter degrees of freedom in a closed system will tend to increase.” Nature doesn’t distinguish between matter and gravitation, as far as entropy is concerned.

    The “global constraint” of the universe being smooth (and also very dense) is not a constraint at all — it’s a fact about the configuration of the early universe, which needs to be explained. A constraint is something that stays constrained, not just a temporary feature of a configuration. It’s like saying “sure, the gas in that box is all on one side, but I’ll just call that a global constraint.” The gravitational state of the early universe had a tremendously low entropy, and that’s what we’re all trying to understand.

    I’m not sure about the chemical potential business. For one thing, in the real world, most of the matter is dark matter, so baryogenesis is not at all necessary for matter domination. For another, even if it weren’t for dark matter, baryons would eventually have dominated if it weren’t for the cosmological constant.

    Greg– Yes, I am certainly appealing to “typicality” in that extremely weak sense. Namely, that once I specify everything I know about the macrostate of the universe, I assume our state is typical according to the Liouville measure over microstates compatible with that macrostate. But again, that’s just what we do all the time in everyday stat mech, when we try to predict the future. (When we try to reconstruct the past it’s a different story, where the past hypothesis comes in.)

    I think Hartle and Srednicki were very correct to warn against granting ourselves too much leverage over what the universe is doing over very large scales by assuming that we are a “typical” kind of observer. However, I think it’s going too far to argue that we can’t assume our microstate is a typical element of our macrostate. There may be some justification for doing that, but I don’t think the H&S argument is enough. At face value, not allowing us to make that assumption prevents us from doing stat mech altogether, which I think is what Ben is getting at. Every time we observed what appeared to be a statistically unlikely event, we would be instructed to shrug and say “Well, it must have happened somewhere in the universe at some point in time,” rather than suspecting there were some dynamics behind it and using that as a clue to learn something new.

  23. “Nature doesn’t distinguish between matter and gravitation, as far as entropy is concerned.”

    How can you say that when, as best I can tell, there is no global definition of entropy that includes gravitation?

  24. The definition of entropy is S = k log W, just like it’s engraved on Boltzmann’s tombstone. It’s certainly true that we don’t know what the structure of the space of microstates is, so we have trouble *calculating* the entropy for some kind of spacetime, but the great thing about stat mech is that it doesn’t care about the details of the state space or the Hamiltonian.
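
    As a concrete toy illustration of that formula, here is a quick count of microstates for Feynman’s box of white and black molecules (a Python sketch; the 50-and-50 molecule numbers and the two-equal-halves model are assumptions made purely for illustration):

    ```python
    import math

    N = 50  # white molecules, with an equal number of black ones (toy value)

    # Macrostate: k white molecules (and N - k black ones) in the left half,
    # with each half of the box holding exactly N molecules in total.
    def multiplicity(k):
        return math.comb(N, k) * math.comb(N, N - k)

    W_separated = multiplicity(N)       # all white on the left: 1 arrangement
    W_mixed = multiplicity(N // 2)      # evenly mixed: ~1.6e28 arrangements

    # S = k log W, so the entropy difference in units of k is just the log ratio:
    print("Delta S / k =", math.log(W_mixed / W_separated))   # about 65
    ```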

