The Arrow of Time: Still a Puzzle

A paper just appeared in Physical Review Letters with a provocative title: “A Quantum Solution to the Arrow-of-Time Dilemma,” by Lorenzo Maccone. Actually just “Quantum…”, not “A Quantum…”, because among the various idiosyncrasies of PRL is that paper titles do not begin with articles. Don’t ask me why.

But a solution to the arrow-of-time dilemma would certainly be nice, quantum or otherwise, so the paper has received a bit of attention (Focus, Ars Technica). Unfortunately, I don’t think this paper qualifies.

The arrow-of-time dilemma, you will recall, arises from the tension between the apparent reversibility of the fundamental laws of physics (putting aside collapse of the wave function for the moment) and the obvious irreversibility of the macroscopic world. The latter is manifested by the growth of entropy with time, as codified in the Second Law of Thermodynamics. So a solution to this dilemma would be an explanation of how reversible laws on small scales can give rise to irreversible behavior on large scales.

The answer isn’t actually that mysterious, it’s just unsatisfying. Namely, the early universe was in a state of extremely low entropy. If you accept that, everything else follows from the nineteenth-century work of Boltzmann and others. The problem then is, why should the universe be like that? Why should the state of the universe be so different at one end of time than at the other? Why isn’t the universe just in a high-entropy state almost all the time, as we would expect if its state were chosen randomly? Some of us have ideas, but the problem is certainly unsolved.
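
To see how this works in miniature, here is a toy sketch in the spirit of Boltzmann’s argument, using the standard Kac ring model (my choice of illustration, not anything from Maccone’s paper). The dynamics are exactly reversible, and even exactly recurrent, yet a coarse-grained entropy climbs away from a specially prepared low-entropy state and then hovers near its maximum:

```python
# Kac ring: N balls on a ring, each white (0) or black (1), plus a fixed random set
# of "marked" edges. Each step, every ball hops one site clockwise and flips color
# when it crosses a marked edge. The rule is deterministic and reversible (and the
# system recurs exactly after at most 2N steps), yet starting from the all-white,
# low-entropy state, the coarse-grained entropy rises toward its maximum of 1 bit.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
markers = rng.random(N) < 0.05         # about 5% of the edges are marked
balls = np.zeros(N, dtype=int)         # low-entropy initial condition: all white

def coarse_entropy(balls):
    """Shannon entropy (bits) of the white/black fractions: a macroscopic
    description that forgets which individual ball has which color."""
    p = balls.mean()
    if p == 0 or p == 1:
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

for step in range(41):
    if step % 5 == 0:
        print(f"step {step:3d}   coarse-grained entropy = {coarse_entropy(balls):.3f} bits")
    balls = np.roll(balls, 1)                      # every ball hops one site clockwise
    balls = np.where(markers, 1 - balls, balls)    # ...and flips color at marked edges
```

Run the same rule backwards (hop counterclockwise, flipping at the same marked edges) and it is just as lawful; the work is all being done by the specially prepared low-entropy starting state, not by the dynamics.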

So you might like to do better, and that’s what Maccone tries to do in this paper. He forgets about cosmology, and tries to explain the arrow of time using nothing more than ordinary quantum mechanics, plus some ideas from information theory.

I don’t think that there’s anything wrong with the actual technical results in the paper — at a cursory glance, it looks fine to me. What I don’t agree with is the claim that it explains the arrow of time. Let’s just quote the abstract in full:

The arrow of time dilemma: the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e. entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.

So the claim is that entropy necessarily increases in “all phenomena which leave a trail of information behind” — i.e., any time something happens for which we can possibly have a memory of it happening. So if entropy decreases, we can have no recollection that it happened; therefore we always find that entropy seems to be increasing. Q.E.D.

But that doesn’t really address the problem. The fact that we “remember” the direction of time in which entropy is lower, if any such direction exists, is pretty well-established among people who think about these things, going all the way back to Boltzmann. (Chapter Nine.) But in the real world, we don’t simply see entropy increasing; we see it increase by a lot. The early universe has an entropy of 10^88 or less; the current universe has an entropy of 10^101 or more, for an increase of more than a factor of 10^13 — a giant number. And it increases in a consistent way throughout our observable universe. It’s not just that we have an arrow of time — it’s that we have an arrow of time that stretches coherently over an enormous region of space and time.
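
For orientation, a quick bit of bookkeeping on those numbers (entropies in units of Boltzmann’s constant; the values are just the rough bounds quoted above):

```python
import math

S_early = 1e88     # early universe: entropy of roughly 10^88 or less
S_now   = 1e101    # current observable universe: roughly 10^101 or more

# The entropy itself has grown by a factor of about 10^13...
print(f"entropy ratio    ~ 10^{math.log10(S_now / S_early):.0f}")

# ...but since S = log W, the number of available microstates W = exp(S) has grown
# by the far more staggering factor exp(S_now - S_early), roughly 10^(4 x 10^100).
print(f"microstate ratio ~ 10^(10^{math.log10((S_now - S_early) / math.log(10)):.1f})")
```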

This paper has nothing to say about that. If you don’t have some explanation for why the early universe had a low entropy, you would expect it to have a high entropy. Then you would expect to see small fluctuations around that high-entropy state. And, indeed, if any complex observers were to arise in the course of one of those fluctuations, they would “remember” the direction of time with lower entropy. The problem is that small fluctuations are much more likely than large ones, so you predict with overwhelming confidence that those observers should find themselves in the smallest fluctuations possible, freak observers surrounded by an otherwise high-entropy state. They would be, to coin a pithy phrase, Boltzmann brains. Back to square one.
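
To put a number on “small fluctuations are much more likely than large ones”: the standard equilibrium estimate is that the probability of fluctuating downward by an entropy ΔS scales as exp(-ΔS), with entropy in units of k_B. The ΔS values below are made-up, order-of-magnitude placeholders, purely to show how lopsided the comparison comes out:

```python
import math

# Hypothetical placeholders (in units of k_B), not measured values:
dS_brain    = 1e50    # entropy cost of fluctuating a single self-aware observer
dS_universe = 1e101   # entropy cost of fluctuating an entire low-entropy early universe

# Using P(fluctuation) ~ exp(-dS), work with the log10 of the probability ratio:
log10_ratio = (dS_universe - dS_brain) / math.log(10)
print(f"P(lone brain) / P(whole low-entropy universe) ~ 10^(10^{math.log10(log10_ratio):.1f})")
```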

Again, everything about Maccone’s paper seems right to me, except for the grand claims about the arrow of time. It looks like a perfectly reasonable and interesting result in quantum information theory. But if you assume a low-entropy initial condition for the universe, you don’t really need any such fancy results — everything follows the path set out by Boltzmann years ago. And if you don’t assume that, you don’t really explain our universe. So the dilemma lives on.


97 thoughts on “The Arrow of Time: Still a Puzzle”

  1. Have people used anthropic reasoning to claim that it explains why the early universe had a low entropy? It seems like if you say that it does, everything else falls into place, as you can calculate the rate at which entropy increases within the standard model + GR and show that Boltzmann brains are not likely. I’m not a fan of the anthropic principle for the same reasons as everybody else, but if you admit that it is a possibility, you might as well get as much mileage out of it as you can.

  2. Michael, I don’t know enough about what that article is describing to have an informed opinion. (E.g. I don’t know what “electromagnetic fields have a known handedness” means.) But on the larger issue, violation of time-reversal symmetry is well-understood, and an important part of the weak interactions in particle physics. That’s slightly different than a violation of reversibility, which is at the heart of the arrow of time.

  3. confused asker of stupid questions

    Can I ask three sets of stupid questions?

    1) If the boundary condition we call the early universe was very low entropy, why didn’t it *stay* low entropy? That is, what causes entropy to increase? Why is entropy increasing practically uniformly across huge volumes of space and over long periods of time?

    2) If the boundary condition we call the late universe will have very high entropy, right down to isolated black holes and stable elementary particles (whee, fun, which is their timelike dimension then?), what reasons do we have for it to stay that way?

    3) What is the macrostate:microstate relationship of the dark sector, especially dark energy, at each boundary condition compared to each other and the present local universe? How do we combine dark entropy with the entropy of visible matter?

  4. These aren’t stupid questions, but the first two were exactly what Boltzmann worked out long ago. The point about low-entropy states is that there aren’t many of them; the entropy is simply the logarithm of the number of states that are macroscopically indistinguishable. So time evolution naturally takes low-entropy states to high-entropy ones, as there are many more high-entropy states to evolve to. Conversely, high-entropy states tend to evolve to (other) high-entropy states, which look macroscopically the same. All that is true for the universe as well as for an egg or a box of gas.
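
    A toy version of that counting, in case it is useful (illustrative numbers of my own, nothing more): take n labelled gas molecules, each of which can sit in the left or right half of a box, and compare how many microstates correspond to an even split versus a lopsided one.

    ```python
    from math import lgamma, log

    def log_W(n, k):
        """Natural log of the number of microstates with k of n molecules on the left."""
        return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

    n = 10**6
    S_even  = log_W(n, n // 2)    # "spread out" macrostate: 50/50 split
    S_lumpy = log_W(n, n // 4)    # lower-entropy macrostate: 25/75 split

    # The even split corresponds to vastly more microstates -- by a factor of roughly
    # 10^56,800 for a mere million molecules -- so almost any microstate you evolve to
    # from the lumpy macrostate will look like the evenly spread one.
    print(f"W_even / W_lumpy ~ 10^{(S_even - S_lumpy) / log(10):.0f}")
    ```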

    We don’t know what dark energy is, but if it’s a cosmological constant, it doesn’t have any entropy at all. But spacetime itself does, and that entropy can be very large. Matter just goes along for the ride when its self-gravity is not important (as in the early universe), but things become complicated and ill-understood when gravity is important (as in the current universe), except when it takes over completely (as in black holes). Then we have a formula from Hawking that tells us the entropy exactly, and it’s a huge number.
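
    For concreteness, here is that formula from Hawking (the Bekenstein-Hawking entropy), S = k_B c^3 A / (4 G ħ) with horizon area A = 16π G² M² / c⁴, evaluated for a one-solar-mass black hole. The constants are rounded; only the scale of the answer matters.

    ```python
    import math

    G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    c    = 2.998e8      # speed of light, m/s
    hbar = 1.055e-34    # reduced Planck constant, J s
    M    = 1.989e30     # one solar mass, kg

    A = 16 * math.pi * G**2 * M**2 / c**4     # Schwarzschild horizon area, m^2
    S_over_kB = c**3 * A / (4 * G * hbar)     # Bekenstein-Hawking entropy, in units of k_B

    print(f"S/k_B ~ 10^{math.log10(S_over_kB):.1f}")   # about 10^77 for one solar mass
    ```

    Since S grows like M², a single 10^9-solar-mass black hole already carries around 10^95 k_B, which is why black holes are thought to dominate the entropy of the current universe.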

  5. “So time evolution naturally takes low-entropy states to high-entropy ones, as there are many more high-entropy states to evolve to.”

    But if the laws of physics are time-invariant, then shouldn’t that happen in *both* temporal directions? Given a certain amount of entropy at time t, shouldn’t there be more entropy at both t+1 AND t-1? Even if entropy were increasing, why would it have to be *monotonically* increasing, unless there was actually a law acting at every point in time, as opposed to merely an initial condition?

  6. A couple of quick thoughts:

    First, there is an interesting connection between the idea of a quantum trail of information and Schrödinger’s Cat, namely that the cat isn’t in a perfect superposition of alive and dead during the experiment, because after the box has been opened one can do an autopsy if the cat is dead and determine (roughly) the time of death. That is, there is a quantum trail of information encoded in the final state of the cat.

    Second, it is not true that quantum physics is in general invariant under time inversion; one needs to be more precise and state that quantum mechanics is inversion invariant only if the space-time manifold is unbounded (either closed or open is fine). On the other hand, if there is a boundary in the manifold then quantum mechanics (specifically the shift operators, which are generated by exponentiating the differential operators of the Lie algebra) is not invariant with respect to time inversion. Sure, it is a bit of a legal loophole, but it’s an important one.

  7. Yes, and that’s part of the puzzle. Given a low-entropy condition at some time, and no other information, you would expect entropy to grow to both the past and the future of that moment. Of course this is not a worry if that moment is truly an initial condition, as there is no “past” of that moment.

  8. I read the paper too and have a pretty elementary question but one that has been bugging me.

    The paper seems to say that if entropy were to decrease, our memories would be erased, so we couldn’t know about such processes. However, we can know about processes where entropy increases, hence we can “remember” the past. Fine.

    But why does entropy only increase in one direction? If there are “time-reversal” symmetries to nature, why is there a preferred direction to the increase in entropy? To me this paper makes the question go from “why does time only flow in one direction despite time reversal” to “why does entropy increase in only one direction despite time reversal.”

    What am I overlooking?

  9. If “phenomena where the entropy decreases must not leave any information of their having happened”, then entropy could be decreasing right now just as much as we perceive it to be increasing, and we wouldn’t know it.

  10. It appears others are posting similar comments that didn’t exist before I posted my comment, but your blog makes comments sit in the queue for several minutes so it looks like I am just repeating previously asked questions. Sorry for that. And by the way, great post! 🙂

  11. That entropy increases ‘in distribution’ is actually just one form of a Central Limit Theorem. And in fact entropy would increase ‘in distribution’ in any direction of translation as long as the number of quantum observables is sufficiently large.

    The interesting part is that the questions ‘why does time move in one direction?’ and ‘why is there a beginning to time?’ are equivalent, at least in quantum theory.

  12. @Sean [8] “The point about low-entropy states is that there aren’t many of them; the entropy is simply the logarithm of the number of states that are macroscopically indistinguishable.”

    Won’t that in a sense be the case for the universe in the distant future, assuming it doesn’t collapse and ends up comprising only extremal/eternal black holes? Of course one has to “zoom out”, so to speak, and somewhat fictitiously regard each black hole as a single “unit”, disregarding its intrinsic entropy in the usual sense, rather as the details of a fractal at a given scale fade into insignificance as the scale increases and a fresh pattern comes into view; and this can be continued indefinitely.

    Also, you mentioned the collapse problem in passing. Do you know if anyone has seriously considered turning this problem round, and positing that wave functions are highly dissipative and constantly collapsing but that narrow-width “spikes” constantly arise by some means and regenerate the wave function (unless it collapses in the conventional way), in other words assuming that the problem is not what causes collapse but what *prevents* it, and seeing where that might lead? May sound a bit kookish, but no end of major scientific advances have been attained by trying, usually reluctantly, the very opposite of long cherished assumptions.

  13. At the risk of sounding trite, the only thing that prevents wave function collapse is willful ignorance.

  14. It isn’t really true that he forgets about cosmology. He discusses it at the very end:

    “In a quantum cosmological setting, the above approach easily fits in the hypothesis that the quantum state of the whole Universe is a pure (i.e., zero entropy) state evolving unitarily (e.g., see [29,30]). One of the most puzzling aspects of our Universe is the fact that its initial state had entropy so much lower than we see today, making the initial state highly unlikely [4]. Joining the above hypothesis of a zero-entropy pure state of the universe with the second law considerations analyzed in this Letter, it is clear that such puzzle can be resolved. The universe may be in a zero-entropy state, even though it appears (to us, internal observers) to possess a higher entropy. However, it is clear that this approach does not require dealing with the quantum state of the whole Universe, but it applies also to arbitrary physical systems”

    I don’t really understand this “quantum state of the whole Universe” stuff, or how “internal observers” could think that the universe is not in a pure state, even though it really is. Leaving that aside, he seems to be trying to shift the question from “why did the early universe have low entropy” to “why is the universe in a pure state”. But even then, don’t you still have the question of “why is the particular pure state of the universe such that the early universe appeared (to internal observers) to have low entropy?” Shouldn’t all the usual arguments imply that such a pure state is very unlikely?

  15. Also, I’m no expert in this area, but my recollection is that the stat mech argument that entropy will always increase isn’t 100% rigorous. In particular, it relies on an ergodic hypothesis that all states are equally likely to be occupied. We have no evidence against this, but I don’t think it has been proven, and I don’t think anyone has ruled out the existence of hidden symmetries that would violate this assumption. So perhaps the interesting advance here is that he has an argument that says that even if there are some hidden symmetries that cause entropy to sometimes significantly decrease, no observations could ever demonstrate that this happens? In other words, the lack of observations of entropy-decreasing processes can’t be taken as evidence against the existence of such symmetries.

  16. John Ramsden– You’re not really allowed to “zoom out” and ignore the internal states of black holes, etc. You just have to sum all the states. The far-future universe will be *simple*, but it will have a high entropy.

    On quantum mechanics, what you’re asking about is very close to the GRW model.

    weichi– The rigorous argument is “of all the states corresponding to any low-entropy macrostate, the vast majority will evolve to higher-entropy states.” But *some* will evolve to lower-entropy states; just take the time-reversal of an ordinary configuration that has evolved from a low-entropy beginning. We normally assume that all states consistent with known constraints are equally likely, so entropy is very likely to go up, but that’s just an assumption. (But it’s a much weaker assumption than the ergodic hypothesis, which is a bit of a red herring.)
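
    To illustrate that last point numerically, here is a toy model of my own (just a sketch, nothing rigorous): free particles bouncing reversibly in a box, started bunched up in one corner. Let them spread for a while, then flip every velocity; the very same reversible law then marches the coarse-grained entropy back down, because the reversed state is exactly one of those special microstates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, L, dt, steps = 5000, 1.0, 0.01, 300
    x = rng.uniform(0.0, 0.05 * L, n)      # low-entropy start: everything in the left 5%
    v = rng.normal(0.0, 1.0, n)

    def coarse_entropy(x, bins=20):
        """Shannon entropy (bits) of the coarse-grained occupation of spatial bins."""
        p, _ = np.histogram(x, bins=bins, range=(0.0, L))
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def step(x, v):
        x = x + v * dt
        over, under = x > L, x < 0.0
        x = np.where(over, 2.0 * L - x, x)     # specular reflection at the walls keeps
        x = np.where(under, -x, x)             # the dynamics exactly reversible
        v = np.where(over | under, -v, v)
        return x, v

    for t in range(2 * steps + 1):
        if t == steps:
            v = -v                             # the time-reversal: flip every velocity
        if t % 100 == 0:
            print(f"t = {t:3d}   coarse-grained entropy = {coarse_entropy(x):.2f} bits")
        x, v = step(x, v)
    ```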

  17. We observe the universe now. We can rule out the BB hypothesis by waiting roughly 1 second and noting that everywhere we look galaxies don’t suddenly fly away or become crazy. Ergo we can conclude (even without actually looking back in time) that the universe was in the past at a lower entropy state than it is now.

    Why? B/c it couldn’t be any other way for that observation to be true and for us to be here (assuming the BB is false). The absolute magnitude of the past entropy (and our current entropy) is an interesting question, but I don’t see where the paradox is. This is a perfectly good use of the anthropic principle.

    That was Feynman’s argument, and I still fail to see why people are so enthralled by this observation.

  18. Hi Sean,

    I understood the point of the final paragraph to mean that if the Universe was in an initial pure state, and evolved unitarily, then the apparent growth in entropy is the product of what measurements can, in principle, be made.

    This would require that for the entropy we measure to have grown as much as we observe, there be an extraordinarily large number of entropy decreasing events happening concurrently. All of which are not measurable. Do you believe this is possible or likely?

  19. Dear Prof. Sean Carroll,

    I’m Lorenzo Maccone, the author of the paper you write about. First of all, let me thank you for your interest in my paper. I would like to reply to the arguments you give against my paper, if I may.

    Essentially, you have two objections: 1) “The fact that we remember the direction of time in which entropy is lower, if any such direction exists, is pretty well-established”. 2) In my paper I don’t give any explanation of why the universe is in an initial low-entropy state, so there’s no advance with respect to Boltzmann’s old idea of the universe’s initial state having appeared as a fluctuation.

    My reply:

    1) I agree that it is well established that we remember only the past, defined as the time direction in which entropy decreases. However, I haven’t seen any convincing EXPLANATION of why this is the case. In fact, if I restrict myself to Boltzmann’s physics (namely classical mechanics) I cannot think of any such explanation: nothing would prevent us from remembering a humongous fluctuation in which we see an egg previously dropped on the floor coming back together and flying back to our hand. In classical mechanics nothing prevents some of the correlations that the egg created with the kitchen’s degrees of freedom from remaining untouched. Classical information can be copied at will without affecting entropy… Then, we could also remember events where entropy is decreasing!

    In quantum mechanics this is not true. If we want to restore the initial state of the egg (namely remove all entanglement between egg and kitchen that was created when the egg broke), ALL correlations between egg and kitchen must be erased. NO information on the egg’s breaking can remain in the environment. Any information that remains will be due to entanglement between egg and kitchen and will determine an increase in the entropy.

    In conclusion, I give an EXPLANATION (based on quantum mechanics) on why our memories refer only to the direction in time where entropy increases. I think that (although formally very straightforward) this is not so well established. I have studied the literature quite carefully, and I haven’t encountered this idea anywhere.

    2) It’s true that in my paper I don’t give an explanation of why the initial entropy of the universe is so low (I give one below). However, Boltzmann’s main problem (you point it out yourself in your blog) is not that the initial entropy of the universe is low, but rather that it’s so much lower than today’s, and there’s no convincing explanation of that (certainly the idea that it arose as a fluctuation is unsatisfactory, as we all know).

    Now what I point out in my paper is that my explanation is fully consistent with the universe being ALWAYS (initially and NOW) in a zero-entropy pure state. The fact that it doesn’t appear so to us is just because we are subsystems of the universe, and (in quantum mechanics) a subsystem can have entropy higher than the whole system. Let’s not forget that entropy is a subjective quantity that depends on the observer’s information (I can definitely elaborate more on that, if you’re not convinced).

    One last thing: why should the universe be in a zero-entropy pure state? Since the universe by definition cannot be entangled with any other system, its state (to a hypothetical observer who has complete information on it) will be pure. Short of entanglement, there’s no FUNDAMENTAL reason why the state of a system cannot be pure (only the non-fundamental, subjective lack of knowledge of the observer). [A system in a non-pure state is in a mixed state, namely it is in a state |psi_i> with probability p_i<1: this comes about either because the system is entangled with another, or because the observer is missing some information.]
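
    A minimal two-qubit illustration of that subsystem point, for readers who want something concrete (a standard textbook example, sketched here for concreteness, not taken from the paper itself): the composite system is in a pure state with zero von Neumann entropy, while either qubit on its own is maximally mixed, carrying a full bit of entropy.

    ```python
    import numpy as np

    # Bell state |phi+> = (|00> + |11>)/sqrt(2): the two-qubit "whole" is pure.
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    rho = np.outer(psi, psi)             # density matrix of the composite system

    def von_neumann_entropy(rho):
        """S(rho) = -Tr[rho log2 rho], in bits."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-(evals * np.log2(evals)).sum())

    # Reduced state of the first qubit: trace out the second one
    rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

    print(von_neumann_entropy(rho))      # 0.0 bits: the whole is in a zero-entropy pure state
    print(von_neumann_entropy(rho_A))    # 1.0 bit:  the subsystem looks maximally mixed
    ```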

  20. Possible dumb question: OK, suppose the universe had started in a big bang with high entropy. How would it have looked different than it did? After all, it did start with a pretty random “gas” of very high energy particles.

  21. Lorenzo originally wrote to me in email, and I responded — here is his response to my response.

    ————————————

    Sean Carroll wrote:

    Hi Lorenzo– Thanks for writing. I should first say that it would be much better to comment at the blog, where everyone can learn something (and people chiming in might even teach us something)!

    Hi Sean, thank you for your quick answer. I’m sorry if I wrote to you privately: it didn’t occur to me to write you on the blog. I have no problem in making this debate public, so I’ll be posting my previous answer on your blog. If you post your answer, we can certainly continue discussing publicly.

    Regarding your email, I’ll try to clarify my position below…

    On your first point, I don’t have too much disagreement. As I said in the post, everyone agrees that we remember the direction of lower entropy. But actually proving it is harder, and will necessarily be context-dependent. As far as I know your paper does this for quantum mechanics, which I haven’t seen before.

    Ok, this was the main message of my paper. I’m glad you have no problem with it!!!! I hope you don’t really think it’s a trivial result.

    Regarding the rest, you write:

    But the second point is the important one, and I still don’t agree, and in fact your comments make me more confused. The statement “the universe is in a zero-entropy pure state” could plausibly be true for the von Neumann entropy (or for the analogous Gibbs entropy in the classical context), but isn’t especially relevant for the question of the Boltzmann entropy (S = k log W) or thermodynamic entropy, and it’s those that are responsible for the arrow of time.

    There is a lot of literature showing that von Neumann entropy and thermodynamic entropy are the same for quantum systems. Think of all the quantum Maxwell demon literature, or of the Szilard engines: it is clearly shown that one bit of thermodynamic entropy can be exchanged for one bit of von Neumann entropy and vice versa. Unless you restrict to classical systems, I don’t see any difference between von Neumann and thermodynamic entropy: they are indeed equivalent.

    In a nutshell, even if the universe is in a pure state, we don’t know what state it is, and it’s a state that is macroscopically indistinguishable from a very large number of states. In that sense, the entropy is high, and was lower in the past.

    I agree with this, but you have to distinguish between the different points of view. From OUR subjective point of view entropy is indeed higher now than in the past (this is because we are subsystems of the universe). From a SUPEROBSERVER point of view (someone who can keep track of the unitary quantum evolution of the whole universe), the entropy is CONSTANT (since unitary evolution preserves the entropy). In my last mail I gave an argument to say that it was initially zero and this means it is still zero.

    I’m not sure I follow you when you say that the state of the universe is pure but we don’t know what it is: this means that it is mixed (namely we are assigning a certain probability to each pure state). I agree that then the entropy is high (the entropy of a mixed state is always different from zero). However, that is just because of OUR ignorance (call it coarse-graining, if you prefer), whereas the superobserver would have no problem saying the state is pure.

    Please remember that since entropy is a subjective quantity, one must always specify WHO is the subject. (More on this below.)

    There are other ways of defining entropy, but the puzzle for cosmology is that the universe began in a low-entropy macrostate — one that was indistinguishable from a very tiny number of other microstates. That notion of entropy is not subjective, and it doesn’t depend on an observer’s information, it only depends on a coarse-graining. That’s at the heart of the arrow-of-time problem, and I don’t see how your paper addresses that issue.

    Ok: you say entropy is objective, I say it’s subjective. This is a fundamental difference between our views, and I’ll try to convince you of mine.

    Think of the following (classical) example (it can be easily extended to quantum mechanics). Consider two boxes of gas where the microscopic degrees of freedom are perfectly correlated, namely each gas molecule in one box is in the same position and moves exactly in the same way (same direction and same speed at each time) as a gas molecule in the other box. A person who doesn’t know this just sees two boxes at the same temperature and cannot extract ANY work from them: to him, they are at thermodynamic equilibrium. Instead, a person aware of that correlation can easily devise a system of pistons connected with pulleys that can extract some work from the two boxes! (Just put two pistons that lower a weight when they move in opposite directions and lift a weight when they move in the same direction.)

    Note, however, that the subjectivity of entropy is purely academic: in ANY practical circumstance all observers will basically have the same information on different systems (since the information involved in macroscopic systems is immense), so that IN PRACTICE thermodynamic entropy is basically an objective quantity, as any engineer would swear! However, when you look more carefully into it, you see that IN THEORY the thermodynamic entropy is indeed subjective, and the above example clearly illustrates this subjectivity.

    Maybe when you say that entropy is objective you mean “for all practical purposes” (FAPP). Then I totally agree with you! Its subjectivity is something that is basically impossible to take advantage of in ANY practical situations!!

    I hope this answers your concerns. Otherwise, I’ll be glad to continue this debate further, either via email or on your blog. Thank you again for your interest!
    Bye,

    Lorenzo
