From Eternity to Book Club: Chapter Eight

Welcome to this week’s installment of the From Eternity to Here book club. Finally we dig into the guts of the matter, as we embark on Chapter Eight, “Entropy and Disorder.”

Excerpt:

Why is mixing easy and unmixing hard? When we mix two liquids, we see them swirl together and gradually blend into a uniform texture. By itself, that process doesn’t offer much clue into what is really going on. So instead let’s visualize what happens when we mix together two different kinds of colored sand. The important thing about sand is that it’s clearly made of discrete units, the individual grains. When we mix together, for example, blue sand and red sand, the mixture as a whole begins to look purple. But it’s not that the individual grains turn purple; they maintain their identities, while the blue grains and the red grains become jumbled together. It’s only when we look from afar (“macroscopically”) that it makes sense to think of the mixture as being purple; when we peer closely at the sand (“microscopically”) we see individual blue and red grains.

Okay cats and kittens, now we’re really cooking. We haven’t exactly been reluctant throughout the book to talk about entropy and the arrow of time, but now we get to be precise. Not only do we explain Boltzmann’s definition of entropy, but we give an example with numbers, and even use an equation. Scary, I know. (In fact I’d love to hear opinions about how worthwhile it was to get just a bit quantitative in this chapter. Does the book gain more by being more precise, or lose by intimidating people away just when it was getting good?)

In case you’re interested, here is a great simulation of the box-of-gas example discussed in the book. See entropy increase before your very eyes!
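
If you’d rather poke at the numbers yourself, here is a bare-bones version of the same idea in Python (my quick toy, not the simulation linked above): 2000 molecules in a divided box, one random molecule slipping through the hole each step, with the entropy computed as S = log W, where W = C(2000, k) counts the microstates that put k molecules on the left.

```python
import math
import random

n = 2000     # molecules, as in the book's worked example
left = n     # start with every molecule on the left: the lowest-entropy macrostate

def entropy(k):
    # Boltzmann's S = k log W with k_B = 1; W = C(n, k) is the number of
    # microstates with exactly k molecules in the left half of the box.
    return math.log(math.comb(n, k))

random.seed(1)
for step in range(1, 20001):
    # One randomly chosen molecule wanders through the hole each step.
    if random.random() < left / n:
        left -= 1
    else:
        left += 1
    if step % 4000 == 0:
        print(f"step {step:6d}   left = {left:4d}   S = {entropy(left):7.1f}")

# S climbs from 0 toward its maximum, log C(2000, 1000), about 1382, and then
# just rattles around it. Dips do happen, but large ones are absurdly improbable.
```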

Explaining Boltzmann’s definition of entropy is actually pretty quick work; the substantial majority of the chapter is devoted to digging into some of the conceptual issues raised by this definition. Who chooses the coarse graining? (It’s up to us, but Nature does provide a guide.) Is entropy objective, or does it depend on our subjective knowledge? (Depends, but it’s as objective as we want it to be.) Could entropy ever systematically decrease? (Not in a subsystem that interacts haphazardly with its environment.)
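
To make “who chooses the coarse graining?” concrete, here is a toy calculation (mine, not the book’s) that assigns two different entropies to the very same pile of mixed sand, depending on how coarsely we choose to describe it:

```python
import math

n, blues = 100, 50   # a row of 100 grains, half blue and half red

# Coarse graining 1: the macrostate records only the total number of blue grains.
W_coarse = math.comb(n, blues)

# Coarse graining 2: the macrostate records the blue count in each of ten
# cells of ten grains. For a well-mixed state with 5 blues in every cell:
W_fine = math.comb(10, 5) ** 10

print(f"total-count graining:  S = log W = {math.log(W_coarse):.1f}")   # ~66.8
print(f"per-cell graining:     S = log W = {math.log(W_fine):.1f}")     # ~55.3

# The finer description distinguishes more, so each macrostate contains fewer
# microstates and gets a lower entropy. Same grains, different S.
```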

We also get into the philosophical issues that are absolutely inevitable in sensible discussions of this subject. No matter what anyone tells you, we cannot prove the Second Law of Thermodynamics using only Boltzmann’s definition of entropy and the underlying dynamics of atoms. We need additional hypotheses from outside the formalism. In particular, the Principle of Indifference, which states that we assign equal probability to every microstate within any given macrostate; and the Past Hypothesis, which states that the universe began in a state of very low entropy. There’s just no getting around the need for these extra ingredients. While the Principle of Indifference seems fairly natural, the Past Hypothesis cries out for some sort of explanation.

Not everyone agrees. Craig Callender, a philosopher who has thought a lot about these issues, reviewed my book for New Scientist and expresses skepticism that there is anything to be explained. (A minority view in the philosophy community, for what it’s worth.) He certainly understands the need to assume that the early universe had a low entropy — as he says in a longer article, “By positing the Past State the puzzle of the time asymmetry of thermodynamics is solved, for all intents and purposes,” with which I agree. Callender is simply drawing a distinction between positing the past state, which he’s for, and trying to explain the past state, which he thinks is a waste of time. We should just take it as a brute fact, rather than seeking some underlying explanation — “Sometimes it is best not to scratch explanatory itches,” as he puts it.

While it is absolutely possible that the low entropy of the early universe is simply a brute fact, never to be explained by any dynamics or underlying principles, it seems crazy to me not to try. If we picked a state of the universe randomly out of a hat, the chances we would end up with something like our early universe are unimaginably small. To most of us, that’s a crucial clue to something deep about the universe: its early state was not picked randomly out of a hat! Something should explain it. We can’t be completely certain that such an explanation exists, but cosmology is hard enough without choosing to ignore the most blatant clues that nature is sticking under our noses.

This chapter and the next two are the heart and soul of the book. I hope that the first part of the book is interesting enough that people are drawn in this far, because this is really the payoff. It’s all interesting and fun, but these three chapters are crucial. Putting it into the context of cosmology, as we’ll do later in the book, is indispensable to the program we’re outlining, but the truth is that we don’t yet know the final answers. We do know the questions, however, and here is where they are being asked.


32 thoughts on “From Eternity to Book Club: Chapter Eight”

  1. Sean, suppose we did a log-normal graph of the values on your chart on page 152. The y axis is log W(k) and the x axis the 2000 arrangements of the system. That would give us a nice symmetrical distribution with a y maximum at 600.3. The entropy of the system for every molecular arrangement (p,q) is a point on this S-distribution. We know the entropy of the system will fluctuate but almost always stay close to the maximum. Over some intervals of time entropy will almost certainly decline temporarily. That can’t be what the 2nd Law forbids. What it forbids must be that the shape (“moments”) of the S-distributions will not change. In particular, that S maximum will never decrease. Is that right? So the entropy of a closed system refers to a distribution whose maximum (the 2nd Law says) can’t decline?


  2. It’s not that the 2nd Law prohibits fluctuations downward in entropy from equilibrium; Boltzmann’s picture predicts that such fluctuations will certainly happen, as will be discussed in Chapter Ten (p. 212). The 2nd Law is not any sort of statement about fluctuations around equilibrium; it’s the statement that if you start with entropy much lower than equilibrium, the entropy is overwhelmingly likely to increase, as illustrated in Fig. 43. Plus, of course, the extra ingredient that our universe actually has an entropy much lower than equilibrium, which follows from the Past Hypothesis.
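
To see just how sharply peaked the commenter’s S-distribution is, and why big downward fluctuations are so rare, here is log10 W(k) computed directly (a quick check of my own; it assumes the chart’s vertical axis is log base 10 of the number of microstates):

```python
import math

# W(k) = C(2000, k): microstates with k of the 2000 molecules on the left.
for k in (600, 800, 950, 1000, 1050, 1200, 1400):
    print(f"k = {k:4d}   log10 W(k) = {math.log10(math.comb(2000, k)):.1f}")

# The peak at k = 1000 comes out at 600.3, matching the maximum read off the
# book's chart in comment #1. Fifty molecules away from the peak the count is
# already down by a factor of ten, and hundreds away it is down by dozens of
# orders of magnitude, which is why fluctuations hug the maximum so tightly.
```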

  3. The mixed sand analogy is really good. One thing becomes another when we change our frame of reference. The purple color is a product of observing mixed sand from a distance…a very simple but profound idea.

    I think it is worth noting that everything in this illustration is couched in space and time, entropy included. Mixing requires time and the mixture must be observed in a certain way for it to be “purple” (it really is purple, of course…from that frame and observed electromagnetically).

    This is really quite a book! By the way, I don’t agree that attempting rational (and workable) explanations for what we find in our reality is quite the equivalent of “scratching an itch”.

    What is, is (not an insignificant tautology). Mankind has been coping or attempting to cope with “what is” since mankind became mankind. All life adapts.

    Striving to understand the universe and building technology on an increasingly complete understanding of “what is” is a challenging and amazing journey…a journey even the most “primitive” humans instinctively engage in.

  4. It is hard for me to accept that the fact that the universe started in a low entropy state has any implications for my memories. Your argument is a very good read; but with all due respect, intuitively something must be wrong. My brain seems far too tiny relative to the mass of the visible universe.

    Moreover, my mind supervenes on my brain which supervenes on neurons which supervene on chemistry which supervenes on physics. There are at least a few levels there. Each time we transition to another level, there is a different useful definition of entropy. I guess I am saying that for each level removed from fundamental physics, I would think that a robust fundamental definition of entropy becomes less and less significant, to the point of irrelevance.

  5. Aside from the conceptual difficulties with entropy there are a few mathematical issues.

    1. It is statistically tautological to say entropy will increase. In terms of the statistical definition of entropy one is saying that the most likely observation is the most likely observation.

    2. The standard definition of entropy does not have a well-defined continuum limit; one has to use the Kullback-Leibler divergence, which actually compares two distributions, leading to messy interpretations as to the meaning of the reference distribution. (A numeric illustration of this point follows this comment.)

    3. In the standard example of an ensemble of particles with a fixed energy, the time evolution leads to all the particles having nearly the same energy, which maximizes the entropy when counting quanta of energy assigned to each particle, but minimizes entropy when counting particles assigned to each energy state.

    These difficulties puzzle me.
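
On point 2 above, a quick numeric check (my sketch, using one Gaussian as the distribution in question and a wider one as the fixed reference): as the discretization gets finer, -sum p log p grows without bound, while the Kullback-Leibler divergence converges.

```python
import math

def gaussian(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

for bins in (100, 1000, 10000):
    dx = 20.0 / bins                                 # discretize [-10, 10]
    xs = [-10 + (i + 0.5) * dx for i in range(bins)]
    p = [gaussian(x, 1.0) * dx for x in xs]          # the distribution in question
    q = [gaussian(x, 2.0) * dx for x in xs]          # the fixed reference
    H = -sum(pi * math.log(pi) for pi in p if pi > 0)
    KL = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    print(f"{bins:6d} bins:   H = {H:7.3f}   KL = {KL:.4f}")

# H keeps growing like log(bins); KL settles near 0.3181 nats, the exact
# divergence between Gaussians of widths 1 and 2. Only the relative entropy
# survives the continuum limit.
```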

  6. Coming from biology, this reminds me of Origin of Life studies.

    Maybe it was an extraordinarily unlikely event.

    Maybe it involves specific (not yet known) steps that make it a little less unlikely.

    If you insist on the former, then you are done. No need to make any further hypotheses or do any science. If you insist on the latter, then you have a job trying to figure out what those steps might have been. Either might be true, but only the latter will (or might) lead to productive research.

    (Of course, spontaneous life now seems positively probable to me after reading about spontaneous universes.)

  7. *Surely* there must be more to Callender’s position than just “some itches don’t need to be scratched”? He may well be a respected philosopher, but this idea is just so ridiculous that I feel the need to search for a deeper explanation of his position. Or should I not try to scratch that itch, and simply accept the fact that philosophers sometimes just like to make fools of themselves?

  8. Pingback: From Eternity to Book Club: Chapter Eight « Thoughts About Changes In Time

  9. Clifford– If the universe hadn’t started out in a low entropy state, with overwhelming probability the universe would be close to equilibrium and you would be a random fluctuation. From that, everything else basically follows. We’ll talk a bit more in the next chapter.

    Aaron– It’s not a tautology to say that entropy will increase, because it’s not even true. I suspect that you are using a different definition of entropy than the one I’m using in the book (S = k log W). No definition is right or wrong, but this is the best one to use if you want to talk about the arrow of time in the real world.

    Charlie– You could imagine that the beginning of the universe was simply an unlikely event. The problem is, it’s far more unlikely than it would need to be for any anthropic (or other known) criterion. So some extra explanation seems to be called for.

    Timon– You can read the longer paper, linked to in the post, and decide for yourself. I agree with Callender that there are likely to be some brute facts about the universe that don’t have a “deeper” explanation. I just disagree that the low entropy of the Big Bang is one of them, as does almost every other philosopher who has written on the subject.

  10. I think I agree with Clifford that intuitions about memory need a more detailed explanation than “you couldn’t have an asymmetric memory if entropy wasn’t low in the past,” which is how I read Sean’s response. I’d like to see more along the lines of explaining how entropy is sufficient, not just necessary, to explain all the salient features of memory that we observe.

    Some subtleties that I think need to be addressed include:
    1) What do we really mean when we say that we remember the past but not the future? How is remembering the past different from using science to predict the future (weather forecasts, predicting eclipses, etc.)?

    2) It would be nice to think of memories as a degraded remnant of structured information about events in the past, and then to conclude that since there is more structure in an event than a memory of that event, turning the event into a memory would be an increase in entropy, but turning the memory into an event would not. However, we know that the cooling and expansion of the universe does allow structure (nonrandom physical information) to increase even as entropy (random physical information) increases.

    The closest thing I, personally, can come up with to an explanation: in an expanding universe, entropy considerations do not forbid a computer running a Laplace demon type program, computing the history of the universe as a whole, but they do forbid the computer from calculating the state of the universe before that state is realized by the universe itself. And, I guess you can sort of think of this computer as a super-memory since it keeps a *complete* record of the past down to the smallest atom.

    Oh well. I do agree with Sean’s general point that the reason the past is different from the future ultimately boils down via entropy to the cosmological arrow of time. But filling in the details is harder than it looks. I think a full explanation will need to give a description of what our memory is really doing, explain how that process looks different in reverse, explain why the reverse is physically impossible or improbable, and explain how the forward direction is both possible and a probable consequence of Darwinian evolution.

  11. The definition is the standard generalization: -sum p log p, which is additive over dimensions and is maximized when the variance is maximized, for countable distributions that have a finite second moment. If you are microstate counting then the value of the bulk property with the most microstates is the most likely, especially when all the microstates are equally weighted.

    One has to be exceedingly careful when making claims about measures (distributions) on infinite dimensional spaces, especially when one is making a claim based on a limit of finite dimensional spaces. In an infinite dimensional setting things like norms (distances, topologies) will not agree with things like measures (volumes, probabilities) the way they do in finite dimensional spaces.

    Were the other two points dismissed outright?

  12. What is the entropy of antimatter? I seem to remember Feynman saying that antimatter was like regular matter reversed in time. So is its entropy “arrow” consistent with time going forward or backward? What does the 2nd law say about it?

  13. The question as to why the universe started in a low-entropy state is probably the most important question in science right now. Many folks have picked up on it in the last 10 years or so, but Penrose was pointing this out more than 30 years ago.

  14. Ray– We should wait until the next chapter, when we talk about memory in a bit more detail. But not that much detail, I admit. However, the expansion of the universe doesn’t have anything to do with it; the space of allowed states doesn’t expand, and that’s what matters.

    Aaron– I’m not using that definition, since there’s no need to for any of what I’m discussing. We proceed by assuming that the universe is in some particular microstate, even if we don’t know what it is, and the entropy is a property of the macrostate to which it belongs. Also, none of the relevant spaces are infinite dimensional. (Even if we’re doing quantum mechanics, you can describe what happens within a comoving patch of space with a finite-dimensional Hilbert space.) So these issues just don’t apply.

    Graham– There’s not really any difference between antimatter and any particular species of matter, as far as entropy is concerned. Also, there’s very little antimatter compared to matter in the observable universe.

  15. Looking at your graph on page 177 with the low entropy spike, my immediate thought is that spike would correspond to the big bang being a statistical fluctuation. You write about how we can’t accept such a spike because our memories would be unreliable, but I don’t see how that’s a problem if our memories are only unreliable ‘before’ the big bang. We could still make sense of everything after the big bang.
    You allude to talking more about this in the next chapter, which I haven’t got to yet, so if the answer is ‘keep reading’ then that’s fine.

  16. “If we picked a state of the universe randomly out of a hat, the chances we would end up with something like our early universe are unimaginably small.”

    This has always puzzled me, as I don’t see why, given the big bang, such a state is seen as so unlikely. Certainly, if we leave the big bang out of it, and just consider all possible states for the early universe, the chances of something like our early universe being selected are extraordinarily small. But would it be possible for the big bang to not have had low entropy? To my mind, if it didn’t, the big bang would no longer resemble anything like a big bang. That is, it seems to me that given the big bang, the laws of physics, and the constants, such a low-entropy state is to be expected. The more pertinent question becomes: how did the big bang come about?

  17. Re Aaron Sheldon’s third point, isn’t it just a matter of there being many more quanta than particles? Or am I not understanding the problem.

  18. Some thoughts (questions?) on the “principle of indifference”. In a gas, the forces between particles are negligible, so the principle seems intuitively OK. In a system of gravitating particles, however, the attractive force seems to give preference to states with lower net potential – i.e. states that are more collapsed toward the center of mass. Hence it seems the principle does not apply to gravitating systems.

    Even in a gas, a spread-out configuration seems more probable than a concentrated one (all particles in one corner of the box), if for no other reason than that the mean free path is larger, so the number of collisions is reduced. When you try to concentrate the particles into one corner, the mfp becomes small and the number of collisions goes up, making such a configuration less likely than a more spread-out one?

  19. “the space of allowed states doesn’t expand, and that’s what matters.”

    I’m confused by this sentence. Did you mean to say the opposite, or are you accusing me of assuming some kind of non-unitarity?

    Anyway, I didn’t mean to imply that the entire space of states for the universe is expanding (this probably doesn’t even make sense, since the universe in the broadest sense is probably infinite.)

    I meant that in a given Hubble volume there are more and more allowed microstates whose macrostate is consistent with the general story: “the universe has been expanding since the big bang, and the initial fireball didn’t contain any large black holes.” (Large black holes in the early universe would foil the Laplace Demon program, since there’s no way for the demon to know what’s inside of them until they evaporate — which takes a long time.) The point is that this simplifying assumption restricts the space of allowed states more, the closer you get to the big bang.

    As far as the expansion of the universe being important, I suppose the Laplace demon argument would still work if the simplifying assumption was something else, but in our case it does seem that it ultimately derives from big-bang cosmology — so I don’t see why it isn’t relevant. Didn’t you have an entire chapter on it?

  20. I’ll see your finite Hilbert space, and raise you a trump card: the classical observables of momentum and position do not exist on finite Hilbert spaces. Actually, technically their Lie commutator cannot be unital (take the trace of the commutator: zero on one side, a constant on the other).

    So if you are not working with the entropy and energy of momentum and position, then what are you working with?
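
The trace argument above takes only a few lines to verify numerically (a toy check of mine, with hbar set to 1; any two finite matrices will do):

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
X = rng.standard_normal((n, n))   # a stand-in for position on a finite space
P = rng.standard_normal((n, n))   # a stand-in for momentum

# tr(XP - PX) = tr(XP) - tr(PX) = 0 for ANY finite matrices...
print(np.trace(X @ P - P @ X))    # ~0, up to floating-point roundoff

# ...but the canonical relation [x, p] = i*hbar*identity would need the
# trace to equal i*hbar*n, which is not zero. Hence no finite-dimensional
# representation of the canonical commutator exists.
print(1j * n)                     # what tr(i * identity) would have to be
```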

  21. Jason A.– Actually that issue is covered in great detail two chapters from now (Ch. 10). A very low spike could be the Big Bang, but the probability would be enormously greater that we would live in a much smaller spike.

    CF– You could very (very) easily have had a Big Bang with much higher entropy. It would have been extremely inhomogeneous, not at all smooth. More later on this, as well.

    drm– It’s hard to think of the physical system describing the universe as being “at fixed energy” when we take gravity into account. See previous post!

    Metre– The existence of gravity changes the way you would naively count states. When things are bunched together, there are actually more states of that form than if things were spread randomly. That’s not completely surprising; a similar thing happens in oil and water, where there are more states when the two liquids are separate than when they are fully mixed.

    Ray– Yes, I’m accusing you of non-unitarity. Of course “there are more and more allowed microstates whose macrostate is consistent with the general story,” if by “the general story” you mean the kind of evolution we actually observe — that’s just a restatement of the reality of the Second Law. But it’s not right to exclude highly inhomogeneous states by fiat, or just because information would be hidden behind horizons. This is Part Four kind of stuff, but the underlying assumption is that the full evolution is completely unitary, even when gravity is taken into account. So for every possible microstate in the current macrostate of the universe, there is exactly one microstate of a much denser (higher Hubble parameter) universe from which it could have evolved — that’s the content of “unitarity.” Most of them would have white holes and wild inhomogeneities. But even without such exotica, there are still a lot of very lumpy states that are inconsistent with the extreme smoothness of the early universe as we find it.

  22. Aaron– At this point I’m just working in a classical approximation, so it doesn’t matter. Of course behind that is some quantum model. If I used that language, obviously we wouldn’t talk about positions and momenta, but about wave functions.

  23. Like Ray above, I too was a bit confused by the statement:

    “the space of allowed states doesn’t expand, and that’s what matters.”

    Suppose I squeeze a gas into a small volume in a cylinder with a piston then let it come to equilibrium. Then I pull the piston out rapidly (rapidly expanding the volume). The gas is now concentrated at the bottom and no longer in equilibrium because the space of states (maximum allowable entropy) has increased. The gas will expand into the volume until it comes to a new equilibrium at a much higher entropy. By pulling the piston out rapidly, I did not change the entropy of the gas, but I changed the maximum allowable entropy, so the actual entropy was now low wrt the new maximum.

    The universe was initially squeezed up into a singularity (or something close) like the gas in the piston at maximum entropy for that configuration. The big bang acted like a sudden pulling out of the piston, rapidly increasing the maximum allowable entropy. The actual entropy of the universe didn’t change, but it was no longer at maximum; it was low wrt the new maximum. Obviously, your statement disagrees with this view. I haven’t read the final chapter of the book yet, so I don’t know what your answer is.
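
For scale, the piston picture comes with a standard ideal-gas number attached (a back-of-the-envelope sketch of mine, not a calculation from the book): sudden free expansion from V1 to V2 leaves the gas’s entropy unchanged at first, but raises the maximum allowed entropy by N k ln(V2/V1), and the gas then climbs toward the new maximum as it spreads out.

```python
import math

N = 6.022e23          # particles: one mole of gas
k = 1.381e-23         # Boltzmann's constant, in J/K
V1, V2 = 1.0, 10.0    # initial and final volumes; only the ratio matters

delta_S_max = N * k * math.log(V2 / V1)
print(f"entropy headroom created: {delta_S_max:.2f} J/K")   # ~19.14 J/K
```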

  24. Metre– That’s right about the piston, but only because it’s an external influence, not part of the system itself. The same logic doesn’t apply to the Big Bang, because the expansion of the universe is governed by the metric, which is itself a dynamical degree of freedom. You have to take gravity into account when counting the states.
