Arrow of Time FAQ

The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.

So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)

Without further ado:

What is the arrow of time?

The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.

But entropy decreases all the time; we can freeze water to make ice cubes, after all.

Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.

So what’s the big deal?

In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.

And how do we reconcile them?

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.

Wasn’t this all figured out over a century ago?

Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
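
In symbols, Boltzmann’s definition is the famous formula carved on his tombstone,

$$ S = k \log W , $$

where W counts the microstates that are macroscopically indistinguishable from the state in question and k is Boltzmann’s constant. The omelet wins because its W is enormously larger than the egg’s.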

Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the “reversibility objections” of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the second law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.
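
If you want to see the reversibility objection in action, here is a minimal toy sketch (nothing from the original argument — the particle count, time step, and coarse-graining grid are all arbitrary choices) of the standard numerical demonstration: perfectly time-reversible dynamics, a coarse-grained entropy that grows from a special low-entropy start, and an exact momentum reversal that marches it right back down.

```python
import numpy as np

# Toy Loschmidt-reversal demo: free particles spreading from a small blob,
# with a coarse-grained entropy defined by binning positions on a grid.
# The microscopic dynamics (x -> x + v*dt) are exactly reversible, so
# flipping every velocity retraces the entropy increase.

rng = np.random.default_rng(0)
N, DT, STEPS = 5000, 0.05, 200

def coarse_entropy(pos, bins=20, extent=15.0):
    """Shannon entropy of the binned position distribution (the coarse-graining)."""
    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                                  bins=bins, range=[[-extent, extent]] * 2)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Special low-entropy start: every particle bunched near the origin.
pos = rng.normal(scale=0.5, size=(N, 2))
vel = rng.normal(scale=1.0, size=(N, 2))

forward = []
for _ in range(STEPS):            # entropy climbs as the gas spreads out
    pos += vel * DT
    forward.append(coarse_entropy(pos))

vel *= -1.0                       # Loschmidt's move: reverse all momenta exactly
backward = []
for _ in range(STEPS):            # entropy falls back toward its starting value
    pos += vel * DT
    backward.append(coarse_entropy(pos))

print("entropy after spreading :", round(forward[-1], 3))
print("entropy after reversal  :", round(backward[-1], 3))  # ~ the low initial value
```

The reversed trajectory is exactly as lawful as the forward one; what makes entropy increase in practice is that we only ever start the system in the special bunched-up state, never in its finely correlated time-reverse.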

Does inflation explain the low entropy of the early universe?

Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.

Does that mean that inflation is wrong?

Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.

My theory of (brane gases/loop quantum cosmology/ekpyrosis/Euclidean quantum gravity) provides a very natural and attractive initial condition for the universe. The arrow of time just pops out as a bonus.

I doubt it. We human beings are terrible temporal chauvinists — it’s very hard for us not to treat “initial” conditions differently than “final” conditions. But if the laws of physics are truly reversible, these should be on exactly the same footing — a requirement that philosopher Huw Price has dubbed the Double Standard Principle. If a set of initial conditions is purportedly “natural,” the final conditions should be equally natural. Any theory in which the far past is dramatically different from the far future is violating this principle in one way or another. In “bouncing” cosmologies, the past and future can be similar, but there tends to be a special point in the middle where the entropy is inexplicably low.

What is the entropy of the universe?

We’re not precisely sure. We do not understand quantum gravity well enough to write down a general formula for the entropy of a self-gravitating state. On the other hand, we can do well enough. In the early universe, when it was just a homogeneous plasma, the entropy was essentially the number of particles — within our current cosmological horizon, that’s about 10^88. Once black holes form, they tend to dominate; a single supermassive black hole, such as the one at the center of our galaxy, has an entropy of order 10^90, according to Hawking’s famous formula. If you took all of the matter in our observable universe and made one big black hole, the entropy would be about 10^120. The entropy of the universe might seem big, but it’s nowhere near as big as it could be.
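
For the black-hole numbers, a rough sanity check is easy to do yourself. Here is a back-of-the-envelope sketch using the Bekenstein–Hawking formula S = k c^3 A / (4 G ħ); the input masses (Sgr A* at roughly four million solar masses, and a very round 10^53 kg for all the matter in the observable universe) are loose assumptions, so expect only order-of-magnitude agreement with the figures quoted above.

```python
import math

# Bekenstein-Hawking entropy of a Schwarzschild black hole:
#   S = k * c^3 * A / (4 * G * hbar),  with horizon area A = 16 * pi * G^2 * M^2 / c^4,
# which simplifies to  S / k = 4 * pi * G * M^2 / (hbar * c).

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34    # reduced Planck constant, J s
C     = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def bh_entropy_over_k(mass_kg):
    """Black-hole entropy in units of Boltzmann's constant."""
    return 4.0 * math.pi * G * mass_kg**2 / (HBAR * C)

sgr_a_star = 4.0e6 * M_SUN   # rough mass of the black hole at the galactic center
one_big_bh = 1.0e53          # very rough mass of all matter in the observable universe, kg

print("Sgr A*      : S/k ~ 10^%.0f" % math.log10(bh_entropy_over_k(sgr_a_star)))
print("one giant BH: S/k ~ 10^%.0f" % math.log10(bh_entropy_over_k(one_big_bh)))
```

The first number comes out near 10^90, and the second lands within a couple of orders of magnitude of the 10^120 quoted above — the quadratic dependence on mass means the exact exponent is only as good as the mass you assume.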

If you don’t understand entropy that well, how can you even talk about the arrow of time?

We don’t need a rigorous formula to understand that there is a problem, and possibly even to solve it. One thing is for sure about entropy: low-entropy states tend to evolve into higher-entropy ones, not the other way around. So if state A naturally evolves into state B nearly all of the time, but almost never the other way around, it’s safe to say that the entropy of B is higher than the entropy of A.

Are black holes the highest-entropy states that exist?

No. Remember that black holes give off Hawking radiation, and thus evaporate; according to the principle just elucidated, the thin gruel of radiation into which the black hole evolves must have a higher entropy than the black hole itself. This is, in fact, borne out by explicit calculation.

So what does a high-entropy state look like?

Empty space. In a theory like general relativity, where energy and particle number and volume are not conserved, we can always expand space to give rise to more phase space for matter particles, thus allowing the entropy to increase. Note that our actual universe is evolving (under the influence of the cosmological constant) to an increasingly cold, empty state — exactly as we should expect if such a state were high entropy. The real cosmological puzzle, then, is why our universe ever found itself with so many particles packed into such a tiny volume.
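
To put a number on “empty space” (a standard result, not something from the FAQ itself): de Sitter space with cosmological constant Λ carries a horizon entropy

$$ S_{\rm dS} = \frac{3\pi k c^3}{G \hbar \Lambda} \sim 10^{122} $$

for the observed value of Λ — larger than the entropy of everything currently inside our horizon, which is one way of seeing that the cold, empty future really is the high-entropy direction.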

Could the universe just be a statistical fluctuation?

No. This was a suggestion of Boltzmann’s and Schuetz’s, but it doesn’t work in the real world. The idea is that, since the tendency of entropy to increase is statistical rather than absolute, starting from a state of maximal entropy we would (given world enough and time) witness downward fluctuations into lower-entropy states. That’s true, but large fluctuations are much less frequent than small fluctuations, and our universe would have to be an enormously large fluctuation. There is no reason, anthropic or otherwise, for the entropy to be as low as it is; we should be much closer to thermal equilibrium if this model were correct. The reductio ad absurdum of this argument leads us to Boltzmann Brains — random brain-sized fluctuations that stick around just long enough to perceive their own existence before dissolving back into the chaos.
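
The statement that large fluctuations are much less frequent than small ones can be made quantitative with the standard Boltzmann/Einstein fluctuation formula: in equilibrium, the probability of fluctuating into a macrostate whose entropy sits ΔS below the maximum goes like

$$ P \propto e^{-\Delta S / k} , $$

so a fluctuation deep enough to produce our entire low-entropy universe is exponentially more improbable than the far shallower dip needed to produce a lone brain and nothing else — which is exactly why the Boltzmann Brain reductio has teeth.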

Don’t the weak interactions violate time-reversal invariance?

Not exactly; more precisely, it depends on definitions, and the relevant fact is that the weak interactions have nothing to do with the arrow of time. They are not invariant under the T (time reversal) operation of quantum field theory, as has been experimentally verified in the decay of the neutral kaon. (The experiments found CP violation, which by the CPT theorem implies T violation.) But as far as thermodynamics is concerned, it’s CPT invariance that matters, not T invariance. For every solution to the equations of motion, there is exactly one time-reversed solution — it just happens to also involve a parity inversion and an exchange of particles with antiparticles. CP violation cannot explain the Second Law of Thermodynamics.

Doesn’t the collapse of the wavefunction in quantum mechanics violate time-reversal invariance?

It certainly appears to, but whether it “really” does depends (sadly) on one’s interpretation of quantum mechanics. If you believe something like the Copenhagen interpretation, then yes, there really is a stochastic and irreversible process of wavefunction collapse. Once again, however, it is unclear how this could help explain the arrow of time — whether or not wavefunctions collapse, we are left without an explanation of why the early universe had such a small entropy. If you believe in something like the Many-Worlds interpretation, then the evolution of the wavefunction is completely unitary and reversible; it just appears to be irreversible, since we don’t have access to the entire wavefunction. Rather, we belong in some particular semiclassical history, separated out from other histories by the process of decoherence. In that case, the fact that wavefunctions appear to collapse in one direction of time but not the other is not an explanation for the arrow of time, but in fact a consequence of it. The low-entropy early universe was in something close to a pure state, which enabled countless “branchings” as it evolved into the future.

This sounds like a hard problem. Is there any way the arrow of time can be explained dynamically?

I can think of two ways. One is to impose a boundary condition that enforces one end of time to be low-entropy, whether by fiat or via some higher principle; this is the strategy of Roger Penrose’s Weyl Curvature Hypothesis, and arguably that of most flavors of quantum cosmology. The other is to show that reversibility is violated spontaneously — even if the laws of physics are time-reversal invariant, the relevant solutions to those laws might not be. However, if there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve. The trick, of course, is to implement such a strategy in a well-founded theoretical framework, one in which the particular way in which the universe evolves is by creating regions of post-Big-Bang spacetime such as the one in which we find ourselves.

Why do we remember the past, but not the future?

Because of the arrow of time.

Why do we conceptualize the world in terms of cause and effect?

Because of the arrow of time.

Why is the universe hospitable to information-gathering-and-processing complex systems such as ourselves, capable of evolution and self-awareness and the ability to fall in love?

Because of the arrow of time.

Why do you work on this crazy stuff with no practical application?

I think it’s important to figure out a consistent story of how the universe works. Or, if not actually important, at least fun.

161 thoughts on “Arrow of Time FAQ”

  1. Folks, please don’t repost long comments from other threads.

    Bee, I’ll fix the venue of the conference.

    Low Math, the emergence of time from quantum gravity is certainly an interesting problem. It’s not clear whether it has important implications for the evolution of entropy — maybe, maybe not, one would have to make some explicit construction. It’s hard to see how, as it remains true that the “early” universe is in a very special state, nowhere near equilibrium, and rapidly evolves into something else.


  3. I have no doubt that the question “Why is the entropy of the universe what it is?” is interesting. But I don’t think it is correct to suggest that there is a deep mystery behind the second law.

    The Zermelo/Loschmidt type objections (“How can you get the T-violating Boltzmann equation from T-reversal invariant dynamics?”) were already correctly answered by Boltzmann himself. The Boltzmann equation involves a suitable limiting process (N -> infinity, etc.), and is a statistical statement, correct for “almost all” initial conditions.

    You can do a computer experiment: pick initial conditions for N billiard balls and evolve forward in time. You will find that the entropy increases with time. Then you stop the computer, reverse all momenta, and evolve forward. Now you find that entropy decreases. Why? The T-reversed initial conditions are very special; they involve subtle correlations that “remember” the low-entropy initial state.

  4. thomas, I’m afraid that’s just not right, or at least dramatically misleading. For “almost all” initial conditions, you are in thermal equilibrium, and the entropy doesn’t change at all. The number of initial conditions for which the entropy increases is exactly the same as the number for which it decreases. In fact they are in one-to-one correspondence, given by CPT conjugation.

    As you say, the “initial” conditions you get by starting with a low-entropy state, evolving it to high entropy, and taking the T-inverse are indeed very special. In fact, they are precisely as special as the low-entropy conditions you started with in the first place.

    The way you can get the T-violating Second Law from T-invariant dynamics is to have T-violating boundary conditions, in particular a low-entropy state near the Big Bang. We still don’t know why the universe is like that.

  5. I don’t understand this: “our universe would have to be an enormously large fluctuation”. I thought you guys were working to simplify everything to an equation or two, or a concept or so, something truly elemental. So why must the appearance of that fundamental thing, require an enormously large fluctuation? Why wouldn’t it require just an everyday (so to speak) burp? I know what I want to ask, but maybe didn’t succeed. Pardon me in advance.

  6. Nice post! One question: if causality were assumed to be a fundamental law of nature, would the arrow of time still be a problem? I’m thinking of things like Erik Zeeman’s paper “Causality Implies the Lorentz Group” and Ambjørn, Jurkiewicz, and Loll’s causal dynamical triangulation approach, both of which seem to ride pretty far on little more than the assumption of causality.

  7. PS: The connection to causal sets (Sorkin and collaborators) is also worth mentioning. I believe the essential content of Zeeman’s result plays a central role in causal set theory.

  8. “Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

    Trust me, it is…”

    I do trust you. In fact, I’m completely amazed that there are people who doubt this. But I think that this point is a major reason for the widespread failure to see how important all this is. Perhaps you could expand on this part of the FAQ? What is the reason behind this disastrous misunderstanding of the Second Law?

  9. thomas wrote:

    You can do a computer experiment: pick initial conditions for N billiard balls and evolve forward in time. You will find that the entropy increases with time. Then you stop the computer, reverse all momenta, and evolve forward. Now you find that entropy decreases. Why? The T-reversed initial conditions are very special; they involve subtle correlations that “remember” the low-entropy initial state.

    If your simulation is based on reversible classical laws which satisfy Liouville’s Theorem (which basically says that the dynamics conserve volume in phase space over time), then the T-reversed initial conditions are no more or less special than the original initial conditions which caused the entropy to increase. To put it another way, if you picked your initial conditions randomly using a uniform probability distribution on the entire phase space, then the probability that your initial condition would have some lower entropy S and then evolve to a state with a higher entropy S’ would be precisely equal to the probability that your initial condition would have the higher entropy S’ and evolve to a lower entropy S in the same amount of time.

  10. Thanks, Chris W.! The review paper you linked looks really interesting; I’m definitely saving it for future reference!

  11. Sean,

    I’ve been wondering about this for a little while, and I’m really confused as to why you state that it makes no sense for the universe to be a quantum fluctuation out of equilibrium. Now, granted, I certainly have not thought about this as much as you have, but I have yet to understand why. Here is my really basic picture:

    Consider two different systems. One is composed of many particles, the other few. The system composed of many particles will necessarily experience only minuscule departures from equilibrium, while the system of few particles will experience much larger departures. It’s not really unexpected at all to find a tiny region of the universe where the entropy is very small at any given time. So if a random fluctuation out of equilibrium is to produce a region of the universe like our own, then it makes the most sense that such a random fluctuation will be a small-scale fluctuation: it cannot require a large fluctuation out of equilibrium over a large volume. But from this small volume, a massive volume must be generated.

    This seems, at least on the surface, to perfectly describe inflation: inflation can be started when a particle field with the right properties obtains a nearly uniform value over a minuscule region of space, and from this minuscule region of space, a massively large region can be generated, with massively higher entropy than could have been in the original patch if it were in equilibrium before inflation began.

    But, unfortunately, I don’t see that this picture says anything at all about the arrow of time.

  12. Sean,

    Inflationary models say that our observable universe was in thermal equilibrium before inflation, which is why the universe is so isotropic and homogeneous. Thermal equilibrium is the highest entropy state given the constraints on the system. In this case the constraints include the universe’s size, which was very small before inflation. Is the question of why the universe started out in a low entropy state equivalent to the question of why the universe started out so small? If we could explain the initial smallness, would we be done?

  13. There is a preferred basis for calculating thermodynamic entropy, I suspect, and it’s the only one I’ve ever heard used – the eigenbasis of the Hamiltonian, or states of definite energy. What makes this basis special? Well, I can think of a couple of hand-wavy arguments for what would make this basis special. In no particular order: the study of thermodynamics centers on systems in some kind of equilibrium, and in quantum mechanics that means the eigenstates of the Hamiltonian, (for example the condition for zero entropy change during a process is that the system always be infinitesimally close to equilibrium so they are obviously related concepts); the other observables of which the entropy is a function (like volume) are usually not considered as quantum observables but as classical ones, even if the system is exchanging them with a bath (like when a weight sits atop a movable piston); and because systems that are in “thermal contact” are normally considered to be exchanging energy/entropy. Defining thermodynamic entropy this way also has the advantage that, at least for bound states, you’re working with a discrete basis so you don’t have oddities like negative information, even if it pops up only in theory.

    That is, as far as I can tell, the only thing that distinguishes thermodynamic entropy from Shannon style information entropy.

  14. Jason, what you describe is something like what Jennifer Chen and I proposed. A fluctuation leading to inflation is a promising way to get something like our universe. However, it can’t be in equilibrium. If it were, every process would happen just as frequently as its time-reversal, and fluctuations to slightly lower entropy would be vastly preferred over fluctuations to very low entropy.

    Gavin, the universe was certainly not in thermal equilibrium before inflation. If it were, it wouldn’t evolve into something else. At best, the matter degrees of freedom were close to equilibrium, but that’s not very relevant when gravity is so important.

    Note also that the small size of the universe is not an a priori constraint, it’s part of what needs to be explained. Why was the universe so small?

  15. Jasper vH asked:

    I was wondering if you might know if there is a quantum version of the Fluctuation Theorem.
    I know that in classical systems it can quantify (under certain assumptions) the probability of the entropy flowing opposite to the direction that is stated by the second law.

    I have skimmed the thread, and didn’t see any answer to this question. My apologies if I missed something.

    The answer (to the best of my knowledge anyway) is that this is an open question in current research in quantum thermodynamics and statistical physics. I know of at least one research group working on finding the quantum-mechanical corrections to the fluctuation theorem.

  16. Hi Sean,

    I really enjoyed the FAQ, so thanks for that. I was hoping that you might be able to answer a quick question for me:

    I’m a little concerned about how entropy is defined here. While the entropy of a pure state is 0, and for a mixed state non-zero, I’m not entirely convinced that’s a good measure for what we observe. In order to calculate the entropy of a state, we need information about the full state. If our measurements of entropy are in some sense local, then entanglement in the state of the universe will lead to a non-zero entropy being measured (despite the fact that the actual entropy is 0). Over sufficiently long time scales you could still see periodic behaviour, but you would certainly see extended periods when the entropy grows from 0.

    So I was wondering, how do you overcome the difference between some kind of local observation of ‘entropy’ and the actual entropy of the universe in this work?

    Thanks!

  17. Jason, what you describe is something like what Jennifer Chen and I proposed. A fluctuation leading to inflation is a promising way to get something like our universe. However, it can’t be in equilibrium. If it were, every process would happen just as frequently as its time-reversal, and fluctuations to slightly lower entropy would be vastly preferred over fluctuations to very low entropy.

    Okay, I went and found the two papers you two co-authored that are referenced on the arxiv and skimmed them. So it sounds like you are saying something very similar to my vague idea. But I’m still not understanding something. In gr-qc/0505037 you state:

    The entropy of the proto-inflationary patch, then, is fantastically smaller than the entropy of our current Hubble volume, or even than that of our comoving volume at early times before there were any black holes. This is in perfect accord with the Second Law of Thermodynamics, since the entropy is increasing. But it is hard to reconcile with the idea that we should find an appropriate proto-inflationary patch within the randomly fluctuating early universe. If we are randomly choosing conditions, it is much easier to choose high-entropy conditions than low-entropy ones; hence, it would be much more likely to simply find a patch that looks like our universe today, than to find one that was about to begin inflating.

    This point is somewhat counterintuitive, and worth emphasizing. Despite their vast differences in size, energy, and number of particles, the proto-inflationary patch and our current universe are two configurations of the same system, since one can evolve into the other. There are many more ways for that system to look like our current universe than to be in a proto-inflationary configuration.

    Later you invoke fluctuation out of de Sitter space to fix the problem, so that the low entropy density of de Sitter space means that the small, low-entropy fluctuation is favored over the large, high-entropy fluctuation.

    What I don’t understand is why you need to resort to the properties of what this region is fluctuating out of to make it depend upon the size of the eventual region? Intuitively I would expect that low-volume fluctuations would be pretty strongly preferred no matter the previous state.

  18. I am probably well out of date here but it was my understanding that it was found in the 1960s that CP was violated in weak interactions. If this is the case, then T must also be violated in order for CPT to be conserved.

  19. Sean,

    Folks, please don’t repost long comments from other threads.

    Sorry about that. I did it to clarify the question I asked in #16: whether time is caused by motion, or motion is caused by time. I realize the standard assumption is that motion is an effect of the dimension of time, but the only explanation I can get from anyone is Jason saying that’s how the equations are written.

    It seems the choice is between time as dimension being real and change being an illusion, or change being real and the dimension of time being an illusion. I realize I don’t have much in the way of complex mathematics to support my position, but out here in the reality I live in, change is real and the dimension of time is a chain of narrative to be distilled out of the general chaos, so it seems to me that change causes time. Instead of physical reality traveling along this dimension from past to future, it creates it and events go from future potential to past circumstance.

  20. ObsessiveMathsFreak

    Are these really the kinds of questions physicists should concern themselves with? The arrow of time sounds like a distinctly metaphysical argument. Shouldn’t science concern itself with observables?


  22. Is it possible to have a more or less T-symmetrical situation about a minimum of the entropy? I.e. could that piece of the universe that underwent inflation if you run it forward also undergo inflation if you run it backward?

    In the backward running universe the observers will, of course, experience time evolution in the opposite global direction as we do. So, you just have two sectors glued together by the low entropy state. Observers in both sectors will point to the same low entropy patch as the origin of their universe.

  23. You can turn an omelet into an egg if you feed it to a chicken. Isn’t the concept of a closed system artificial? Unless the universe is a closed system. A cup falling off the counter is not a closed system. In open systems there are both increases and decreases in entropy. When asking why the underlying laws of physics can be run forward and backward in time, but not macroscopic behavior, I am not sure what you are referring to. Some actions at the macroscopic level can be computed forward and backward in time without difficulty, though we don’t observe them that way. But, at the quantum level, we probably do observe them going both ways? Are you comparing observables to observables, or computations to computations?

    That interactions at the quantum scale can be run forward and backward in time without any problem indicates that relationships between quantum entities are outside time as time is experienced at the macroscopic level. That conclusion also applies to other activities at the quantum level, such as entanglement. So, to me that is the question: why do quantum relationships escape the arrow-of-time constraints the rest of us have? Using cosmology and initial conditions isn’t enough of an explanation, because those were also the initial conditions for the quantum entities.
