Richard Feynman on Boltzmann Brains

The Boltzmann Brain paradox is an argument against the idea that the universe around us, with its incredibly low-entropy early conditions and consequent arrow of time, is simply a statistical fluctuation within some eternal system that spends most of its time in thermal equilibrium. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment, with potentially profound consequences.

The first invocation of an argument along these lines, as far as I know, came from Sir Arthur Eddington in 1931. But it’s a fairly straightforward argument, once you grant the assumptions (although there remain critics). So I’m sure that any number of people have thought along similar lines, without making a big deal about it.

One of those people, I just noticed, was Richard Feynman. At the end of his chapter on entropy in the Feynman Lectures on Physics, he ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws.

So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?

Feynman, following the same logic as Boltzmann, contemplates the possibility that we’re all just a statistical fluctuation.

One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again.

Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past. In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life.
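
To put a number on “sheer, grossly improbable, but possible”: in an idealized box where each of N molecules is independently equally likely to be on either side, the chance of catching them all sorted at one moment is exponentially suppressed by the entropy drop (a back-of-the-envelope estimate, assuming perfect independence):

$$ P_{\rm sorted} = 2^{-N} = e^{-\Delta S/k_B}, \qquad \Delta S = N\,k_B \ln 2 . $$

Even N = 100 gives P ≈ 10^{-30}; for a mole of gas the probability is of order 10^{-10^{23}}. Improbable, but not zero, which is all Boltzmann’s scenario requires.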

But, of course, it doesn’t really suffice as an explanation for the real universe in which we live, for the same reasons that Eddington gave — the Boltzmann Brain argument.

We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it.
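
The prediction Feynman extracts here is easy to check in a toy model. Below is a minimal Monte Carlo sketch (the lattice size, window size, and notion of “order” are all invented for illustration): condition on finding a perfectly sorted patch, then look at the cells you haven’t examined yet.

```python
import random

random.seed(0)
N_CELLS = 20       # cells in the whole toy box, each white or black
WINDOW = 6         # the "little piece" we look at first
TRIALS = 2_000_000

window_hits = 0    # window found perfectly ordered (all white)
rest_hits = 0      # ...and the unexamined cells were all ordered too

for _ in range(TRIALS):
    box = [random.choice("WB") for _ in range(N_CELLS)]
    if all(c == "W" for c in box[:WINDOW]):
        window_hits += 1
        if all(c == "W" for c in box[WINDOW:]):
            rest_hits += 1

print(f"ordered window: {window_hits} of {TRIALS} trials (expect ~2^-6)")
print(f"rest ordered too: {rest_hits} of those {window_hits} (expect ~2^-14)")
```

Among the roughly thirty thousand trials that turn up a sorted window, essentially none have the remaining fourteen cells sorted as well (the expected count is about two): conditioning on local order does nothing to order the rest, exactly as the fluctuation hypothesis predicts.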

After pointing out that we do, in fact, see order (low entropy) in new places all the time, he goes on to emphasize the cosmological origin of the Second Law and the arrow of time:

We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future.

And he closes by noting that our understanding of the early universe will have to improve before we can answer these questions.

This one-wayness is interrelated with the fact that the ratchet [a model irreversible system discussed earlier in the chapter] is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding.

We’re still working on that.

Comments

  1. Very nice!

    I’d like to say that I think the Boltzmann brain argument is stronger than requiring that our universe is not a simple statistical fluctuation toward low entropy. IF we assume the dark energy is vacuum energy and IF we assume the universe is closed — or IF we assume the universe is open or flat but the statistics of its events can be deduced by studying a large-but-finite comoving volume — then I think we have a Boltzmann brain problem UNLESS our universe can decay, that is, unless there is a multiverse. (Page showed that a rapid decay rate solves the problem without resorting to a spacetime measure, meaning the “multiverse” could consist of only two states: our dS and an AdS. But that seems less probable than there being transitions to other dS vacua, with the proper spacetime measure being such that normal observers like us dominate over the Boltzmann brains.)

    Perhaps I’ve missed something, though. One thing I haven’t thought about in detail is the effect of upward jumps to classical slow-roll inflation — though I’m under the impression these can’t solve the problem. (Jumps to eternal inflation certainly can — assuming appropriate spacetime measure — but then we have a multiverse.)

    One thing that disturbs me about this line of reasoning is it seems that I can deduce very significant properties of the “universe” without really leaving my office (though I must know the dark energy is vacuum energy).

  2. Consider the development of the periodic table via stellar nucleosynthesis. This apparent decrease in entropy can only be accounted for by the much larger losses of energy due to thermonuclear fusion, all the way up to iron, followed by supernova explosion-powered synthesis of the heavy elements, past iron.

    If we then look at the use of these elements by living systems, we see far more complexity – carbon, sulfur, nitrogen, iron, etc. – all play fundamental roles in life, which persists by using energy (from solar radiation, mostly) to reduce local entropy, even though universal entropy continues to increase.

    This is a very different situation from a universe in which there are no local entropy decreases. Imagine if the global entropy situation were mirrored in all local entropy situations – the dying embers of an explosion, for example. Stars and living systems have the remarkable ability to decrease local entropy (per dG = dH – TdS) – for more fun with that, see:

    http://www.2ndlaw.com/gibbs.html
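
    (To spell out that bookkeeping: for a process at constant temperature and pressure the surroundings gain entropy −ΔH/T, so

    $$ \Delta S_{\rm total} = \Delta S_{\rm sys} - \frac{\Delta H_{\rm sys}}{T} = -\frac{\Delta G}{T} \ge 0 \quad\Longleftrightarrow\quad \Delta G \le 0 , $$

    and a local entropy decrease is perfectly legal whenever enough heat is exported to make ΔG negative.)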

    What I’m wondering is this: does the universe have surroundings, and can it exchange heat or mass with those surroundings?

  3. Mike– I think that it’s very plausible that the Boltzmann Brain argument has important consequences for eternally-accelerating cosmologies, but I think it’s important to distinguish that case from the original one considered by Boltzmann. In de Sitter, everything is infinite and you must make some assumptions about measures. The original Boltzmann case is finite — finite numbers of degrees of freedom, or finite dimensions in Hilbert space — and then the measure is perfectly unambiguous, so there is no room for argument. It’s important that everyone agree on this case if we’re to make any progress (and not everyone does).

    But basically I agree that the combination of positive vacuum energy + BB argument provides very good reason to believe in some sort of multiverse.

  4. I’ve never really bought into the argument, for the simple reason that there is no quantitative content. That is, within broad limits, the argument seems to apply no matter how small the ordered region is. Make it the size of the solar system, roughly what could be observed in the seventeenth century, and the argument applies. Make it galactic-sized, roughly what could be inferred in the nineteenth century, and the argument applies.

    In other words, the argument always applies… until it doesn’t. If there is some observational technique that reveals that the entire observable universe is just a tiny chip of order in a much larger sea of chaos, say, 10^2000 times larger, what happens then?

  5. It’s always helpful to see an idea expressed in a variety of ways. In fact, Feynman’s version makes me realize there’s one aspect of the Boltzmann Brain argument I don’t understand. We expect a thermal fluctuation to be the smallest possible fluctuation consistent with our observations — thus the brain that thinks it sees an ordered universe is more likely than an entire ordered universe. Yet a brain that thinks it sees an ordered universe in a sense has the same complexity as that universe: the subjective mind states are a model of a physical universe. When I ask myself whether I am a Boltzmann Brain, I am struck by the regularity that I perceive, and for such a perception of regularity to arise seems just as probable as the regularity itself.

    In other words, is it really true that the brain is overwhelmingly more probable than the entire universe? Sean, can you lead me out of the thicket I find myself in?

    George

  6. It is much simpler to arrange a low-entropy beginning than a messy high-entropy beginning. We should not be surprised to find a low-entropy beginning.

  7. George, it’s just that you’re not calculating entropy correctly. “The same complexity” is not something that is well-defined, while “the change in entropy” is. The change in entropy required to form a brain with an image of the universe is much smaller (and therefore much more likely) than the change in entropy required to actually make the corresponding macrostate of the universe.

    Think of Feynman’s example of a box of two different kinds of gas, which randomly fluctuates into a state where one gas is all on one side and the other is all on the other. The change in complexity (which we might roughly interpret as the change in the number of bits required to specify the state) is completely independent of the size of the box and the number of molecules of gas inside, but the required change in entropy is certainly larger for larger boxes.
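
    A throwaway numeric version of that point, in the same toy box model (a sketch assuming independent molecules; the sizes are made up): the log of the fluctuation probability scales with the number of molecules that must be sorted, so a small corner unmixing is exponentially more likely than the whole box unmixing.

    ```python
    import math

    def log10_p_sorted(n: int) -> float:
        """log10 of the chance that n independent molecules are all
        found on their 'correct' side at once: P = 2**-n."""
        return -n * math.log10(2)

    N_BOX = 10**6      # molecules in the whole (still tiny!) box
    N_CORNER = 10**3   # molecules in one small corner of it

    print(f"P(corner sorted)    ~ 10^{log10_p_sorted(N_CORNER):.0f}")
    print(f"P(whole box sorted) ~ 10^{log10_p_sorted(N_BOX):.0f}")
    print(f"corner favored by   ~ 10^{log10_p_sorted(N_CORNER) - log10_p_sorted(N_BOX):.0f}")
    ```

    The corner fluctuation wins by some three hundred thousand orders of magnitude; the same counting, scaled up, is what makes a lone brain overwhelmingly cheaper than a brain plus a planet plus a solar system.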

  8. I want to understand it all, I really do; but I think this made my eyes bleed just trying to form images and concepts from these chaotic strings of words spewed forth into my head.

    Are we a mistake, is that what this is getting at? A cosmic accident?

    I thought I was smarter than all this, but… Let me know when the ‘For Dummies’ version of these hypotheticals comes out. I would love to be able to follow along.

  9. But the argument that the change in entropy is much smaller for a smaller ensemble makes an assumption about the independence of events that is not warranted. Given a single improbable event, other improbable events become much more likely if in fact they are not independent. As an analogy, consider an empty box, partitioned into two halves and surrounded by a gas, with a valve that may or may not open on a probabilistic basis. Now, we can make the probability of opening as low as we like, say on the order of the probability of all the gas molecules being on one side of the box. But when the valve does open, gas will indeed enter one side of the box and will be, briefly, in a low-probability state. Now, in the traditional experiment, the probabilities are indeed independent, and it makes some sense to invoke a Boltzmann Brain type of argument. In the latter case, with the extra valve, it very obviously does not.

    Is there any way to distinguish the two cases? For the life of me, I don’t see how this can be done with the current state of the art. But the point is that, as larger and larger systems are considered, it becomes impossible to prove or disprove the existence of the latter setup.
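
    A tiny sketch of the distinction being drawn here (the valve probabilities are invented; only the structure matters): when two rare events are independent their probabilities multiply, but when one mechanically triggers the other, the joint probability is just that of the trigger.

    ```python
    import math

    P_VALVE = 2.0**-100   # chance the valve opens (made-up number)
    P_FLUCT = 2.0**-100   # chance of a spontaneous one-sided fluctuation

    # Traditional setup: two independent rarities must coincide.
    p_independent = P_VALVE * P_FLUCT             # ~ 2^-200

    # Valve setup: gas rushing into the emptied half *guarantees* a
    # briefly one-sided state, so the conditional probability is ~1.
    p_with_valve = P_VALVE * 1.0                  # ~ 2^-100

    print(f"independent: 2^{math.log2(p_independent):.0f}")
    print(f"with valve:  2^{math.log2(p_with_valve):.0f}")
    ```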

  10. Andrew, it’s not that we are a mistake — although that might be true. The problem is that the early universe looks very unnaturally ordered to us, and we’d like to understand why. One idea is that it was just a random fluctuation from a disordered collection of particles. But Feynman (and others) are pointing out that such an idea doesn’t really work once you examine it closely.

    So: to understand why the early universe was so ordered, we have to work harder. Nobody knows the final answer to that, although some of us have ideas.

  11. Sean, with respect to ‘order to disorder’, entropy and time’s arrow:

    Increase in entropy is characterized as typically being associated with an increase in ‘disorder’ and a smoothing out and ‘reducing’ of temperature differences within an isolated system which is allowed to proceed to a state of equilibrium. Such increases usually occur spontaneously. ‘Paradoxically’, an entropy increase can often be associated with large increases in local ordering, like the phase change of crystallization on cooling. This makes the ‘order to disorder’ characterization especially unsatisfying; it is not the most desirable way to describe an increase in entropy.

    Born interpreted the square of amplitudes of the waves described by Schroedinger’s equation as probabilities. Shannon described information in terms of the logarithm of probabilities – and information has been ‘equated’ with entropy.

    In statistical mechanics, entropy is expressed as being proportional to the logarithm of the number of possible ‘arrangements’ of the ensemble of ‘particles’ constituting a system. Increase in entropy is associated with the statistical tendency for a closed system (of an ensemble of more than a ‘few particles’) to spontaneously transition from less probable to more probable states. This formulation seems to eliminate the ‘paradoxes’.

    In models of a “Cycling” steady-state universe or of oscillating universes, entropy (non-paradoxically) increases both during collapses ‘towards singularities’ as well as during expansion phases, and no zero entropy ‘origin’ is required.

    Doesn’t this suggest that agreeing to start with a primitive axiom having to do with ‘causality’, removing the apparent ‘reversibility’ of classical and quantum physics, might unambiguously set the direction of time’s arrow?

    (I’ve put this question to you previously – without any response.)

  12. Actually, Sean, they aren’t pointing out any such thing, and, insofar as I know, I don’t think anyone proposes that the (very) early universe was just a random collection of disordered particles.

  13. Leonard, I don’t understand what you are saying, so I can’t sensibly comment. Using “disorder” as a gloss on “entropy” is by no means perfect, but it’s a convenient shorthand.

    Scent, all I can suggest is that you read the post carefully again.

  14. What about the role of gravity—more precisely, gravitational collapse—in all this? The classical thermodynamic arguments took no account of it. The canonical case of “heterogeneous gas in a box” always strikes me as suspiciously unrepresentative of the actual universe. How does a system that has evolved, by collapsing, to a higher-entropy, lower-energy state (in the extreme case, a black hole) fluctuate away from it?

    For that matter, classical statistical mechanics and condensed matter—considered as a low-energy, stable endpoint of a non-equilibrium process (which is fundamentally quantum mechanical)—seem to coexist uneasily.

    More generally, the actual universe is not in equilibrium. One can respond, “right, it’s the result of a fluctuation, and is returning to equilibrium”. Actually, that’s an odd remark; saying that it is a fluctuation presupposes a state of statistical equilibrium within which the fluctuation occurs. The appearance and relaxation of the fluctuation isn’t a return to equilibrium, it’s merely associated with its inherently statistical character.

    Do you see what I’m struggling with here?

  15. I have. I think you’re reading way too much into what is really a rather vapid – and anthropic – argument. And I repeat – I don’t know of anyone who seriously thinks that the very early universe was just ‘some collection of random particles’. And not because of some Boltzmann Brain type of argument, I might add. Saying this smacks of rhetoric more than a real argument.

  16. Either (a) I’m missing something or (b) a lot of very smart scientists are very confused about basic probability.

    The flaw in the Boltzmann brain paradox, as I understand it, is that the argument boils down to this: “The Random Fluctuations theory implies that there’s just a one in a gazillion chance that, out of all possible random fluctuations, one like ours would result because smaller fluctuations are much more likely. But here we are, and that’s incredibly unlikely — so the theory must be wrong, refuted by experiment.”

    But that doesn’t follow at all. The Boltzmann Brains paradox is an argument about *probabilities* of outcomes, and you can’t deduce anything from just one trial.

    I would understand the Boltzmann brain argument if we could run many experiments (or equivalently, observe many universes) and count the outcomes: if we got lots of Boltzmann Brain universes and very few like ours, we would lean towards believing our universe is indeed a lucky random fluctuation. If not, if instead we get many more like ours than predicted, we discard the random fluctuation theory *because it failed the experiment*. BUT… and it’s a really big BUT… we don’t have a lot of experiments. We have just one, i.e. the universe we observe. And in an experiment with probabilistically distributed outcomes, that doesn’t tell us anything. (By contrast, if we were talking about a test with a deterministic outcome, one failure to observe a Boltzmann universe would indeed be all the counterexample we need).

    Maybe cosmologists are so used to working with a sample size of one that they need to review some basic probability before they are let loose on the larger sample of the multiverse?

    Or am I misunderstanding something?

  17. If you are, Carl, then so am I. It also looks like there is some confusion regarding conditional probabilities.

  18. CarlZ– yes, you are misunderstanding, I’m afraid. The argument (which is right up there in the post, honestly) is *not* that we are here, which is unlikely. It’s that, given that we are here (and given any feature you think we know about the universe), the statistical-fluctuation hypothesis makes a very strong prediction: namely, that every time we look somewhere in the universe we haven’t looked yet, we should see thermal equilibrium. And we don’t, so that hypothesis is falsified.

    Chris W.– these are perfectly good things to worry about. We don’t understand how to calculate entropy in the presence of gravity, because we don’t understand the space of microstates. But the good news is that arguments from statistical mechanics don’t depend sensitively on the specific dynamics of the theory under consideration. Just on basic principles like unitarity and time-independence of the Hamiltonian. Those might not ultimately hold in the real world, but they are all you need to make these arguments.
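
    One way to make “falsified” quantitative (a sketch with purely illustrative numbers, cast as a Bayesian update): a single observation is plenty when the competing hypotheses assign it wildly different likelihoods.

    ```python
    # log10 likelihoods of the datum "we looked somewhere new and saw
    # low entropy" under each hypothesis. The fluctuation value is a
    # stand-in for an exp(-Delta S) suppression; only its scale matters.
    LOG10_L_FLUCTUATION = -1e30   # fluctuation hypothesis: essentially never
    LOG10_L_ORDERED_START = -1.0  # low-entropy beginning: unsurprising

    log10_prior_odds = 0.0  # start agnostic, 1:1

    log10_posterior_odds = (log10_prior_odds
                            + LOG10_L_ORDERED_START
                            - LOG10_L_FLUCTUATION)
    print(f"posterior odds against pure fluctuation: 10^{log10_posterior_odds:.2e}")
    ```

    A sample size of one is no obstacle when the likelihood ratio is of order 10^(10^30); that is why failing to find equilibrium in freshly observed regions counts as a decisive experiment.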

  19. Thanks, Sean. That also reminds one of what is at stake in the question of information loss in black holes.

  20. Sean…

    “It’s that, given that we are here (and given any feature you think we know about the universe), the statistical-fluctuation hypothesis makes a very strong prediction: namely, that every time we look somewhere in the universe we haven’t looked yet, we should see thermal equilibrium. And we don’t, so that hypothesis is falsified.”

    I’m not sure that really falsifies the hypothesis. The statistical hypothesis predicts that generically we should expect to see equilibrium, but that equilibrium won’t hold absolutely everywhere. Failing to see equilibrium everywhere doesn’t falsify the hypothesis. It’s certainly an issue that needs more consideration, but it doesn’t kill the statistical hypothesis stone dead. Maybe whatever mechanism it is that governs thermal fluctuations guarantees that regions of disequilibrium clump up?

    I was wondering, relatedly, how the odds stack up when you compare the probability of a Boltzmann Brain arising via random fluctuations–complete with all the time-dependent dynamical interconnections making up a whole lifetime’s worth of human mental processing–with that of not a whole specified universe like our own, but just some roughly-suitable mostly-undifferentiated Big Bang blob? That is, how much fluctuation do you really need to kick things off before other physical processes step in and start governing the time-evolution of your system in a non-thermal way? Could it be that suitably-large fluctuations generically lead to Big Bang states and that the laws of nature will then generically evolve these into interesting universes?

  21. The first time that I saw Sean’s argument on Boltzmann’s Brains, it went something like this (as I recall): by the statistical fluctuation hypothesis, we should be Boltzmann’s Brains – but we’re not, QED. A lot of the counter-arguments given above occurred to me then. I follow Feynman’s argument much better, but it still seems to me there is a conditional probability issue, as SoV says. How do we know it is easier (more likely) to create a Boltzmann’s Brain or a single Solar System by random fluctuation than it is to create the conditions under which solar systems and brains can evolve?

  22. I don’t see the logic that says we have experimental evidence against the idea of the Boltzmann Brain – if my brain is a BB existing as a statistical fluctuation for a moment, then all my memories of experiments are just even more ephemeral traces in that BB. The real argument against a BB is philosophical – if we believe it, science is impossible. The BB is essentially solipsist, and is sterile for the same reason.

    It is curious, though, that the cosmic billiard balls started out so neatly racked. Is it possible to understand this in any deep sense? Only if you know something about the rack – or the racker.

    Now if we wanted to study this experimentally, we might build a very detailed model of the universe, start the billiard balls off neatly racked, and watch it evolve. Perhaps any intelligent beings – or even any BBs – that evolved might have a better insight.

  23. JimV– we know it is more likely just by the standard arguments of statistical mechanics. A state of the form “brain + thermal equilibrium” is much higher entropy than a state of the form “brain + planet + solar system etc.” Which means that there are many more microstates of the former kind than the latter kind. Which means there are many more trajectories in phase space that pass through states of the former kind than pass through states of the latter kind. Which means, if conventional statistical mechanics is to be believed, that states of the former kind are much more likely to arise via random fluctuations.
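
    In symbols (just restating that chain of reasoning with S = k_B ln Ω): the ratio of microstate counts between the two macrostates is

    $$ \frac{\Omega_{\rm brain\,+\,equilibrium}}{\Omega_{\rm brain\,+\,solar\ system}} = e^{(S_1 - S_2)/k_B} \gg 1 , $$

    where S_1 − S_2 is an enormous positive number of k_B’s, so random trajectories pass through macrostates of the first kind vastly more often.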

    So one of the assumptions of the model — bounded phase space, eternal evolution, microscopic reversibility, time-independent Hamiltonian — must be false. It’s interesting to try to figure out which one.

  24. CIP, there is something to that. One could always argue that there is some probability to fluctuate into the state of a brain that has a complicated set of (completely false) memories consistent with being embedded in a large low-entropy universe. Of course there are many more likely things to fluctuate into, even if we restrict our attention to brains, but it is possible. Such a brain, of course, would have no reliable knowledge of the universe, so that scenario is cognitively unstable — even if it’s true, there would be no way of knowing it. And certainly nobody is going to behave as if it is true.

  25. The “Lectures on Physics” are supposed to be for undergraduates, but I always use them as a great reference. They really are a treasure trove, as that gem of Feynman’s arguments about the arrow of time indicates. All of these paragraphs are in the “Ratchet & Pawl” chapter in Vol. 1 (Ratchet & Pawl? Weird title, but, like almost all of Feynman’s stuff, very much worth reading).
