Richard Feynman on Boltzmann Brains

The Boltzmann Brain paradox is an argument against the idea that the universe around us, with its incredibly low-entropy early conditions and consequent arrow of time, is simply a statistical fluctuation within some eternal system that spends most of its time in thermal equilibrium. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)
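
A rough way to quantify "overwhelmingly more likely" (the standard back-of-the-envelope version of the argument, not anything from Feynman or Eddington directly): for a system fluctuating around thermal equilibrium, the probability of a spontaneous dip in entropy of size \Delta S scales roughly as

    P \sim e^{-\Delta S / k_B},

so a fluctuation just large enough to assemble a single brain is exponentially more probable than one large enough to produce an entire low-entropy universe.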

The first invocation of an argument along these lines, as far as I know, came from Sir Arthur Eddington in 1931. But it’s a fairly straightforward argument, once you grant the assumptions (although there remain critics). So I’m sure that any number of people have thought along similar lines, without making a big deal about it.

One of those people, I just noticed, was Richard Feynman. At the end of his chapter on entropy in the Feynman Lectures on Physics, he ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws.

So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?

Feynman, following the same logic as Boltzmann, contemplates the possibility that we’re all just a statistical fluctuation.

One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again.

Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past. In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life.

But, of course, it doesn’t really suffice as an explanation for the real universe in which we live, for the same reasons that Eddington gave — the Boltzmann Brain argument.

We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it.

After pointing out that we do, in fact, see order (low entropy) in new places all the time, he goes on to emphasize the cosmological origin of the Second Law and the arrow of time:

We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future.

And he closes by noting that our understanding of the early universe will have to improve before we can answer these questions.

This one-wayness is interrelated with the fact that the ratchet [a model irreversible system discussed earlier in the chapter] is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding.

We’re still working on that.

114 thoughts on “Richard Feynman on Boltzmann Brains”

  1. Sean said

    “So one of the assumptions of the model — bounded phase space, eternal evolution, microscopic irreversibility, time-independent Hamiltonian — must be false. It’s interesting to try to figure out which one.”

    Would simply falsifying one of those conditions be enough? I mean, let's say that we add a small irreversible term to the microscopic Hamiltonian. (It seems like it has to be small, since we haven't found it yet.) The problem is that a Boltzmann Brain universe isn't merely an approximation of our universe that fails to be exact; it is so spectacularly wrong that it seems odd that this could arise from a term that is so small.

  2. Right, there’s no reason to think that abandoning one of those assumptions would be sufficient, but it seems necessary. The trick is to abandon as little as possible while working toward a sensible theory that predicts the kind of universe we actually see; no mean feat.

  3. changcho: Ratchet & Pawl refers to two parts of a mechanism that enforce motion in a single direction. The ratchet has the directional teeth, and the pawl catches each tooth as it passes so that the ratchet cannot go backwards.

    capitalistimperialistpig: I think XKCD addressed your approach, though without reaching a conclusion, in http://xkcd.com/505/

    Also, being a computer guy, I liked Scott Aaronson's quantum-computation-oriented view of the problem in http://scottaaronson.com/blog/?p=368. He argues that the arrow of time has to do with space being reusable while time is not. This sounds circular, but he points out that if time were reusable we would have a much different universe, so perhaps the arrow of time is simply a conditional property of any multidimensional structure with one or more dimensions lacking reusability. That seems to push the problem up the tree a bit, but it might offer us a hint.

  4. In Tegmark's mathematical multiverse the problem is much more severe, because as long as some simple mathematical model generates observers as Boltzmann brains, you are forced to address the question of why we are not Boltzmann brains in the universe defined by that particular model.

    One could try to solve this problem by considering observers as universes in their own right.

    On the set of all mathematical models one needs to specify a measure. There exists a natural measure on the set of all algorithms (the Solomonoff-Levin distribution). This distribution decays exponentially for large algorithmic complexities.

    Now, an observer considered as a universe has a huge algorithmic complexity, unless the observer can be generated in a simple-to-specify universe. The information needed to specify an observer can then be provided by specifying the laws of physics, the initial conditions, and the location of the observer. The amount of information required is then much smaller, but not if the observer only arises as a Boltzmann brain.
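
    To make the measure explicit (a rough sketch of the standard formula, as I understand it): the Solomonoff-Levin prior weights an output x by all programs p that generate it on a universal machine U,

        m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|} \approx 2^{-K(x)},

    where K(x) is the Kolmogorov complexity of x. An observer specified directly, bit by bit, has enormous K and hence negligible weight; an observer specified as "these laws, these initial conditions, this location" inherits the much smaller complexity of that description, unless, as for a Boltzmann brain, no such short description picks it out.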

  5. SOV said: “And I repeat – I don’t know of anyone who seriously thinks that the very early universe was just ’some collection of random particles’.”

    Then you haven’t met Andrei Linde….but anyway, how *do* “serious” people think about the very early universe, in your experience?

  6. I am a biologist and therefore not as "qualified" as physicists (who've thought about this a LOT more than I have), but it seems to me there is a fundamental flaw in most of the arguments here. The underlying assumptions, if I've got it right, are that (1) the universe is made of particles (let's ignore the particle-versus-wave business for this discussion), and (2) we can shake 'em up and see how they should statistically sort out, thinking about entropy, etc. And so how do we get stuff as incredibly complex as what we observe today?

    And the problem with this thinking, in my view, is that it assumes the particles don't interact with one another to any extent except by random collision (largely inelastic) and maybe some weak gravitational stuff that's not significant on a particle-to-particle basis. But we know that the universe of today is composed of all kinds of "stuff" that is differentially sticky. It's called chemistry (and its underlying physics, of course). So, once particles came into existence and became different from one another (elements, for example), they would have differential attractions/repulsions, and these would favor "stuff" segregating from other "stuff". Voilà: asymmetries that are less likely to succumb to entropic falling apart. For example, rocks. Ignoring erosive forces, you can't tell me a rock is likely to spontaneously fall apart because entropy favors its randomization with the rest of the universe. Yes, physics says this is possible. But not probable. And it is probabilities we are focused on with these types of examples.

    I hope my thoughts here aren't way off base, as I know how it is to read someone's nutty ideas on a subject about which they ought not be putting in their two cents. If this is off base, read on.

    Me

  7. Speaking of Feynman on the arrow of time, I found in this interesting article:

    http://plato.stanford.edu/entries/time-thermo/

    the following statement:

    “But perhaps we were wrong in the first place to think of the Past Hypothesis as a contingent boundary condition. The question ‘why these special initial conditions?’ would be answered with ‘it’s physically impossible for them to be otherwise,’ which is always a conversation stopper. Indeed, Feynman (1965, 116) speaks this way when explaining the statistical version of the second law.”

    The reference is to “The Character of Physical Law”. Certainly it would be extremely satisfying if low initial entropy turned out to be a consequence of a demand that the laws of physics should be internally consistent…..

  8. Me– always happy to hear from biologists (or whomever). None of the arguments above really depend on the assumptions you are worried about; it’s just an easy short-hand way of speaking. When things like chemistry come into the game, all of our ideas about entropy and statistical mechanics work just as well as ever, but you have to be very careful to really keep track of everything in your system. In particular, when two atoms stick together, they enter a lower-energy state, and the only way they do that is by emitting one or more photons. The entropy of the system “molecule + photons” will be higher than that of the system “two separate atoms, no photons.” (At least, if the reaction is thermodynamically favored; more generally, there will be some equilibrium distribution of possibilities.)

    But the entropy would be even higher than that if the molecule collapsed into a very small black hole, then decayed into a number of photons. Basically, a truly high-entropy state wouldn’t look anything at all like the stuff we see around us, not by a long shot; so details about chemical reactions aren’t going to be very important to the discussion.
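
    (To put rough numbers on that, using the standard Bekenstein-Hawking formula rather than anything specific to this example: a black hole of horizon area A has entropy

        S_BH = k_B c^3 A / (4 G \hbar),

    which for one solar mass works out to roughly 10^77 k_B, compared to about 10^58 k_B for the Sun as it actually is. So gravitational collapse, followed by evaporation into radiation, beats any chemical rearrangement by many orders of magnitude.)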

  9. Roger Penrose has been working on this whole quandary from the perspective of the universe as a whole. At a public lecture I attended a few months back he argued for a cycling universe where, at the end of each phase, you have a state in which all matter has decayed entirely into energy (photons), leaving in essence only an energy potential in which locality would have no meaning, and this potential alone would lead into the following phase. He contended that there could be relics of past phases hidden within the CMB data. It would be interesting to learn whether recent analysis has strengthened or weakened his proposal, as it does seem to be a nifty way of getting around this low-entropy/high-energy initial-state problem.

  10. I still fail to see why there is a conceptual problem with having a low-entropy early universe (by that I mean, relative to the higher-entropy state we have now). Indeed, you can take the point of view that it's simply a boundary condition that's imposed by hand to match observation, or that it's simply the natural progression of the thermodynamic arrow of time. So what's the fuss?

    If the opposite were true (a high-entropy initial state), then we wouldn't be here, because life would have had immense difficulty forming, and the laws of thermodynamics would be in jeopardy of being falsified. So I'm missing something.

    I think the confusion arises because there seems to be a paradox when the operation t -> -t is performed too naively and not interpreted with due care. Feynman explains away this problem perfectly in his lectures.

  11. “So what's the fuss?”

    The fuss is that we have a feature of the universe that we don’t know how to explain. *Why* was the entropy so low? We don’t know. Nobody is claiming that there are conceptual problems with having low entropy; on the contrary, that is a fact accepted by all.

    The fact that we don’t know how to explain this aspect of the early universe means that there is something missing from our theories; probably something so important that neglecting it may mean that we are saying lots of wrong things.

  12. Fred, the problem is that your statement "then we wouldn't be here…" is not true, because you would be here as a Boltzmann brain. In fact, low-entropy initial conditions don't solve the problem, as you would still be more likely to find yourself in a Boltzmann brain state long after the heat death of the universe than existing in the way you do now.

    See also this article:

    http://arxiv.org/abs/hep-th/0612137

  13. How does a system that has evolved, or collapsed, to a higher-entropy, lower-energy state (in the extreme case, a black hole) fluctuate away from it?

    In the long run in an open universe Hawking radiation should evaporate black holes. We are talking very long times now, longer than the decay of matter itself.
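
    (For scale, using standard textbook estimates rather than anything specific to this thread: a black hole of mass M has Hawking temperature T_H = \hbar c^3 / (8 \pi G M k_B) and an evaporation time of order G^2 M^3 / (\hbar c^4), which for a solar-mass hole is around 10^67 years; and it only loses mass on net once the ambient radiation, e.g. the CMB, has cooled below T_H.)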

  14. Maybe there will be an infinite number of Boltzmann brains after the heat death of the universe … or maybe there won’t, because the universe will decay in some fashion that precludes that heat death. Eventually we might know the answer to that question — by directly uncovering the details of the fundamental physics and cosmology that will determine these things. But our own failure to be Boltzmann brains says nothing whatsoever about the relative frequencies of different kinds of observers across the entire history of the universe.

    Specifically, the fact that we are not BBs is not a valid way to rule out cosmological models in which most observers are BBs — so long as those models do not entirely preclude non-BB observers like us.

    Nobody plucked us at random from a bag containing every conscious being that ever lived or ever will live. We cannot infer anything, even probabilistically, about the number of Boltzmann brains across the whole of spacetime from our own failure to be one. The tiny grain of truth from which that fallacy springs is this: if every conscious being that ever lived, or ever will live, uniformly adopted the strategy of assuming that most observers resembled themselves — in other words, if absolutely every observer has a policy of assuming that they are in the majority class — then that will lead to the greatest number of observers being correct. Whatever the majority actually is — Boltzmann brains or normal brains — the majority will have guessed correctly what the majority is.

    But the tautological results of that victory for the majority don’t provide us with any information about other observers anywhere, least of all other observers in the distant future. We exist, and we’re probably not Boltzmann brains. That’s it, that’s all the data we actually have. Various averages computed over the set of all observers contain more information … but we don’t have access to those averages.

    What’s more, we have no rational reason for doing things — or believing things — solely because they optimise those kinds of averages. This is not a game where we get some share of the pay-off for being a good team player. I don’t know what the correct label is for saying “Hey, if we assume it’s likely that we are the majority class, then we will be adopting a brilliant strategy that — if adopted by all observers — will lead to a high expectation value for the proportion of all observers who were correct” … but I don’t know why anyone would mistake such a strategy for any of the goals of science.

  15. I think one can also make the argument that the "arrow of time" must always point in one direction simply because distance and time are intimately linked to one another. Since there is an upper bound on the propagation of information (the speed of light), non-local events will always be ordered as occurring in the past based on relative distance.

    There is a great unattributed quote that states:

    “There is simply no means in nature, other than energy transfer, by which information may be acquired across a distance.”

    It follows that there is always a certain amount of energy tied up in the transfer of information, and as the universe expands, more energy will be tied up in simple transfers of information.

    Since it takes time for that energy to transfer, it also means that the information is effectively stored until it finally interacts at its destination.

    Over time, more and more energy is simply tied up to facilitate the transfer of information (and thus is lost to the "environment").

    We begin to see that there are intimate linkages between time, distance, energy, information, memory, and entropy.

  16. It is the low entropy start that “allows” the universe to use time symmetric dynamics. One is doomed if one tries to explain it the other way around.

    The initial condition and the dynamics were created together in a self consistent way. One cannot assume that the dynamics could determine the initial condition.

  17. Thanks, Sean,
    excellent read.
    “It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe.”
    I totally agree with you.
    Keep up the good work; I will be a regular reader of your blog.

  18. Sean — Excuse me if I’m being dense, but I don’t understand how that restatement of the argument makes any difference.

    Your version: “the statistical-fluctuation hypothesis makes a very strong prediction: namely, that every time we look somewhere in the universe we haven’t looked yet, we should see thermal equilibrium. And we don’t, so that hypothesis is falsified.”

    But really, isn’t what the theory is saying (CAPS for emphasis) more precisely this: “namely, that every time we look somewhere in the universe we haven’t looked yet, we should PROBABLY see thermal equilibrium. And EVERY TIME we don’t, that hypothesis BECOMES LESS AND LESS LIKELY.”?

    (Of course, even this makes a big assumption about conditional probability, i.e. that whatever fluctuation caused the parts of the universe we already looked in to be out of equilibrium also caused the other parts we haven’t looked in yet to also be out of equilibrium. But I’ll let that go for now.)

    If that’s not a more precise restatement of the argument, then I’m being dense and don’t get it at all. But if it *is* a more precise restatement, here’s the central point that I think proponents of this argument are missing:

    IT DOESN’T MATTER how preposterously unlikely the hypothesis becomes WHEN YOU ONLY RUN ONE TRIAL. And looking in lots of places in one universe still only counts as one trial, since this is fundamentally an argument about how the *whole universe* came to be so disordered.

    To put it another way: We know that this universe, as unlikely as it might be, did happen. I don't know of any law of probability that allows one to work backwards from the outcome of ONE trial to inferences about the population that trial was drawn from… and, to my limited understanding, that's exactly what this argument is trying to do.

    Consider this analogy: suppose you are told that a large sack contains white balls and black balls, but you don’t know how many of each. You pull one ball out, and it’s black. What can you say about the contents of the sack? A lot of people might say that it’s “unlikely” the sack contains only one black ball and hundreds of white ones, because then it would be unlikely that you would draw the one black ball. But that would be completely wrong: you just can’t reason backwards like that from one drawing. By contrast, if you drew lots of balls, you could state the probability of any given proportion of balls. But one ball alone tells you only one thing: the possibility that all the balls are white is eliminated.

    In the same way, any observation about our own universe really only tells us one thing: the possibility that low-entropy states never happen is eliminated. It doesn't tell us anything about the relative likelihood of the states.

    Again, I apologize if I’m just not getting it… but I really want to understand this, and in particular understand why this point about arguing backwards from one universe to the population of possible universes is wrong.

  19. Well, I thought that since we "knew" that the universe began in a Bang (not "BB," to avoid confusion with Boltzmann Brains), we weren't supposed to worry about potential fluctuations over incredible time scales. But in any case, the "initial conditions" of the universe are obviously relevant, and no one has any idea of what they a priori ought to have been (as I have argued, any particular "that's just the way it is" violates the logical principle of sufficient reason, and leads many to modal-realist-type scenarios in which every possible way to be actually exists).

    Another factor though: if reality were fundamentally deterministic, the details of the outcome forever after would at least have to follow from those initial conditions. But they can't. Note, for example, a free muon "prepared" like any other. But we don't know when it will decay. Despite some pretensions involving things like "decoherence," that decay moment is inexplicable. You don't believe there is a "clockwork" inside the muon (unless maybe you consider strings), nor are there relevant environmental influences, right? If there were, that would mean we could in principle prepare a "five-nanosecond muon," etc., and there would be contrasting structure in the decay patterns of various populations, even if we couldn't design specific hard outcomes. That is "real randomness," whose particular outcomes pure math can't even model. Hence we figure that whatever has some tiny chance of happening will happen often enough, given enough time or spatial extent. The really worrisome thing is, if the universe is "infinite" in extent, then the time since the Bang is not the issue, but having all those places to try every conceivable outcome – which could include Boltzmann Brains. (BTW, I note that many of the readers and commenters/wannabe commenters of this wide-appeal (via "Discover" mag) blog are not professionals and cannot be expected to fit neatly within perfect boundaries of on-topic propriety, coherence, and pithiness; just saying.)

  20. Greg Egan– That’s exactly the argument given by Hartle and Srednicki, and in ordinary circumstances it would be compelling, but this is one case where it doesn’t hold. The point is that we can conditionalize over absolutely everything we think we know about the current state of the universe — either, to be conservative, exactly over the condition of our individual brains and their purported memories, or, to be more liberal, over the complete macrostate of a universe with 100 billion galaxies etc. And then we can ask, given all of that, what are we likely to observe next, according to the Liouville measure on phase space compatible with that macrostate? The answer, as I say here and Feynman says above, is that we should see thermal equilibrium around every corner, and we don’t.

    Note the crucial difference here — I am not assuming that we are picked randomly among conscious beings in the universe. All I am assuming is that we are picked randomly within the set of macrostates that are identical to our own (including my memories of what presents I received for Xmas, etc.). You might be tempted to argue that this is an unwarranted assumption, but I promise you it's not. For one thing, unlike in the uncontrolled multiverse case, here we know exactly what the measure is, just from conventional stat mech (or quantum mechanics, if you like). Second and more importantly, without being able to make that assumption, we deny ourselves the ability to make any probabilistic predictions whatsoever in physics. If we can't use the Liouville measure conditionalized on our macrostate, all of stat mech becomes completely useless. We are no longer allowed to say that ice cubes tend to melt in glasses of warm water, etc., because any such statement appeals to precisely that measure.

    CarlZ, I think the same reasoning should address your concerns. In particular, we do not know that “this universe actually happened.” What we know is that our current microstate is within some macrostate, defined either by just our brain or by the macrostate of the surrounding universe, depending on how much you want to grant (it really doesn’t matter). But, given the assumptions of the statistical-fluctuation hypothesis, it is overwhelmingly likely that our memories of the past and reconstructions of the previous history of the universe are completely false. We don’t simply have a universe and ask whether it would ever have occurred; we have some particular facts about the universe, and have to ask what the expectations are for other facts given that knowledge. In this hypothesis, those expectations are completely at odds with everything we see when we open our eyes.

    And when we say "less and less likely," we're talking overwhelming numbers — like the probability that all the air molecules in the room will spontaneously move to one side in the next second, but much smaller than that. There is no operational difference between that kind of unlikeliness and simply being "ruled out."
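
    (Written out schematically, the conditionalized measure I have in mind is just the Liouville measure restricted to the macrostate: if M is the set of microstates compatible with everything we currently know, \mu_L is the Liouville measure, and \phi_t is the Hamiltonian flow, then

        P(B at time t | M now) = \mu_L( M \cap \phi_{-t}(B) ) / \mu_L(M).

    Every ordinary stat-mech prediction, ice melting in warm water and so on, is a statement of exactly this form.)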

  21. “In the long run in an open universe Hawking radiation should evaporate black holes. We are talking very long times now, longer than the decay of matter itself.”

    Yes, and as in all heat transfer, the rate is governed by the magnitude of the temperature difference. That is to say, as the normal matter not contained in black holes decayed to energy, coupled with expansion, the relative temperature differential would increase, thereby hastening the process. The catch is that Hawking radiation is supposed to depend on the quantum consequence of spontaneously co-created matter and antimatter; so how would this compare with the classical thermodynamic model, which superficially doesn't appear to be much different? Which is to ask: would the rate (or potential) of such spontaneous creation stay the same, or does it also hasten as the average temperature diminishes?

  22. To tell you the truth, I'm not even sure that there is a problem here. The Boltzmann brain problem seems equivalent to asking why our universe has an entropy that is much lower than what we think its maximum possible value is. The problem is that some of the high-entropy states are not allowed for other reasons. Just because a particular reaction increases the entropy of the universe doesn't mean it must happen. So a state where the universe is a huge homogeneous collection of photons is not allowed because of quantum mechanical conservation laws, GR, etc.

  23. BTW CarlZ, you are wrong about the black ball pulled from the sack. If the sack has 1,000 balls, then the chance of the combined circumstance of having drawn a black ball, and all the others being white, is 1/1000 if I reckon the method correctly. Sure, “it could happen” but we can still come up with expectations of chance. Note however also, that if the universe is large and just about everything happens, some observers are looking at their circumstances and saying, “this is so absurd, it has such a tiny chance of happening that there must be something else behind it” etc!

    Maybe I wrote confusingly in my last comment: I mean that if there were a determinate (even if "just in principle") process behind muon decay, etc., we could see structure in (or learn to influence) the decay patterns, from preparation differences or environmental influences (which makes a hit against decoherence, BTW, IMHO). But we don't, so the actual situation instead is "fundamental randomness," not even the kind you can seemingly model by taking, e.g., digits of roots of some number; that still gives the same results each time, unlike "identical muons."
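
    To make that reckoning concrete, here is a quick sketch in Python; the uniform prior over compositions is my own illustrative assumption, not something CarlZ specified:

    # Sack of N balls, k of them black (k unknown). Put a uniform prior on
    # k = 0..N, then update on the single observation "one draw came up black".
    N = 1000
    prior = [1.0 / (N + 1)] * (N + 1)               # P(k): uniform over compositions
    likelihood = [k / N for k in range(N + 1)]      # P(draw is black | k black balls)
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    posterior = [u / total for u in unnorm]
    print(posterior[1])   # "exactly one black ball": about 2e-6
    print(posterior[N])   # "all balls black": about 2e-3, a thousand times larger

    So even a single draw shifts the odds between compositions by the likelihood ratio; it cannot pin the composition down, but it is far from uninformative.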

  24. capitalimperialistpig wrote “It is curious, though, that the cosmic billiard balls started out so neatly racked. Is it possible to understand this in any deep sense? Only if you know something about the rack – or the racker”

    Absolutely – All this Boltzmann Brain nonsense is a futile distraction. Perhaps what we need is more insight into how systems of *maximum* entropy can somehow combine or mix to produce a low entropy result, rather in the manner that fuel vapour and air create an explosive mixture, the more so the more uniformly (and thus with high entropy?) they are mixed.
