Arrow of Time FAQ

The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.

So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)

Without further ado:

What is the arrow of time?

The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.

But entropy decreases all the time; we can freeze water to make ice cubes, after all.

Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.

So what’s the big deal?

In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.

And how do we reconcile them?

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.

Wasn’t this all figured out over a century ago?

Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
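
To make the counting concrete, here is a minimal toy sketch in Python (my own illustration, not anything from Boltzmann or Gibbs): take 100 coins, call the macrostate “number of heads,” and count the microstates consistent with each. With Boltzmann’s constant set to 1, the entropy is S = ln W.

    from math import comb, log

    N = 100  # a toy system of 100 coins

    for heads in (0, 10, 50):
        W = comb(N, heads)  # microstates consistent with this macrostate
        S = log(W)          # Boltzmann entropy S = ln W, in units of k_B
        print(f"{heads:3d} heads: W = {W:.3e}, S = {S:.1f}")

The mixed-up macrostate (50 heads) contains about 10^29 microstates; the all-tails macrostate contains exactly one. There are simply more ways to be high entropy.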

Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the “reversibility objections” of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the second law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.

Does inflation explain the low entropy of the early universe?

Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.

Does that mean that inflation is wrong?

Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.

My theory of (brane gases/loop quantum cosmology/ekpyrosis/Euclidean quantum gravity) provides a very natural and attractive initial condition for the universe. The arrow of time just pops out as a bonus.

I doubt it. We human beings are terrible temporal chauvinists — it’s very hard for us not to treat “initial” conditions differently than “final” conditions. But if the laws of physics are truly reversible, these should be on exactly the same footing — a requirement that philosopher Huw Price has dubbed the Double Standard Principle. If a set of initial conditions is purportedly “natural,” the final conditions should be equally natural. Any theory in which the far past is dramatically different from the far future is violating this principle in one way or another. In “bouncing” cosmologies, the past and future can be similar, but there tends to be a special point in the middle where the entropy is inexplicably low.

What is the entropy of the universe?

We’re not precisely sure. We do not understand quantum gravity well enough to write down a general formula for the entropy of a self-gravitating state. On the other hand, we can estimate well enough. In the early universe, when it was just a homogeneous plasma, the entropy was essentially the number of particles — within our current cosmological horizon, that’s about 10^88. Once black holes form, they tend to dominate; a single supermassive black hole, such as the one at the center of our galaxy, has an entropy of order 10^90, according to Hawking’s famous formula. If you took all of the matter in our observable universe and made one big black hole, the entropy would be about 10^120. The entropy of the universe might seem big, but it’s nowhere near as big as it could be.
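
For the curious, here is a back-of-the-envelope sketch (mine, not part of the original argument) of where those numbers come from, using the Bekenstein-Hawking formula S = 4πGM²/(ħc) in units of k_B; the ~10^53 kg figure for the mass of the observable universe is a rough assumed value.

    import math

    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8  # SI units
    M_sun = 1.989e30                             # solar mass in kg

    def bh_entropy(M):
        """Bekenstein-Hawking entropy of a black hole of mass M (kg), in units of k_B."""
        return 4 * math.pi * G * M**2 / (hbar * c)

    # Sgr A*, the supermassive black hole at the galactic center (~4e6 solar masses):
    print(f"Sgr A*: S ~ 1e{math.log10(bh_entropy(4e6 * M_sun)):.0f} k_B")     # ~1e90

    # One black hole made from all the mass of the observable universe (~1e53 kg):
    print(f"universe-mass BH: S ~ 1e{math.log10(bh_entropy(1e53)):.0f} k_B")  # ~1e122

The second estimate lands within a couple of orders of magnitude of the 10^120 quoted above; the exact exponent depends on the mass you assume.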

If you don’t understand entropy that well, how can you even talk about the arrow of time?

We don’t need a rigorous formula to understand that there is a problem, and possibly even to solve it. One thing is for sure about entropy: low-entropy states tend to evolve into higher-entropy ones, not the other way around. So if state A naturally evolves into state B nearly all of the time, but almost never the other way around, it’s safe to say that the entropy of B is higher than the entropy of A.

Are black holes the highest-entropy states that exist?

No. Remember that black holes give off Hawking radiation, and thus evaporate; according to the principle just elucidated, the thin gruel of radiation into which the black hole evolves must have a higher entropy. This is, in fact, borne out by explicit calculation.

So what does a high-entropy state look like?

Empty space. In a theory like general relativity, where energy and particle number and volume are not conserved, we can always expand space to give rise to more phase space for matter particles, thus allowing the entropy to increase. Note that our actual universe is evolving (under the influence of the cosmological constant) to an increasingly cold, empty state — exactly as we should expect if such a state were high entropy. The real cosmological puzzle, then, is why our universe ever found itself with so many particles packed into such a tiny volume.

Could the universe just be a statistical fluctuation?

No. This was a suggestion of Boltzmann’s and Schuetz’s, but it doesn’t work in the real world. The idea is that, since the tendency of entropy to increase is statistical rather than absolute, starting from a state of maximal entropy we would (given world enough and time) witness downward fluctuations into lower-entropy states. That’s true, but large fluctuations are much less frequent than small fluctuations, and our universe would have to be an enormously large fluctuation. There is no reason, anthropic or otherwise, for the entropy to be as low as it is; we should be much closer to thermal equilibrium if this model were correct. The reductio ad absurdum of this argument leads us to Boltzmann Brains — random brain-sized fluctuations that stick around just long enough to perceive their own existence before dissolving back into the chaos.
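
The steep suppression of large fluctuations is easy to see in a toy Monte Carlo (again my own illustration, with arbitrary parameters): flip 100 fair coins over and over, and count how often the head count strays a given distance from the equilibrium value of 50.

    import random
    from collections import Counter

    random.seed(0)
    N, trials = 100, 200_000
    counts = Counter(sum(random.random() < 0.5 for _ in range(N))
                     for _ in range(trials))

    for excess in (5, 10, 15, 20):
        hits = sum(v for k, v in counts.items() if abs(k - N // 2) >= excess)
        print(f"|heads - 50| >= {excess:2d}: {hits / trials:.2e} of trials")

Deviations of 5 happen all the time; deviations of 20 almost never. Scale that exponential suppression up to 10^88 particles, and a fluctuation into a whole low-entropy universe becomes fantastically less likely than a fluctuation into something much smaller, such as a single disembodied brain.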

Don’t the weak interactions violate time-reversal invariance?

Not exactly; more precisely, it depends on definitions, and the relevant fact is that the weak interactions have nothing to do with the arrow of time. They are not invariant under the T (time reversal) operation of quantum field theory, as has been experimentally verified in the decay of the neutral kaon. (The experiments found CP violation, which by the CPT theorem implies T violation.) But as far as thermodynamics is concerned, it’s CPT invariance that matters, not T invariance. For every solution to the equations of motion, there is exactly one time-reversed solution — it just happens to also involve a parity inversion and an exchange of particles with antiparticles. CP violation cannot explain the Second Law of Thermodynamics.

Doesn’t the collapse of the wavefunction in quantum mechanics violate time-reversal invariance?

It certainly appears to, but whether it “really” does depends (sadly) on one’s interpretation of quantum mechanics. If you believe something like the Copenhagen interpretation, then yes, there really is a stochastic and irreversible process of wavefunction collapse. Once again, however, it is unclear how this could help explain the arrow of time — whether or not wavefunctions collapse, we are left without an explanation of why the early universe had such a small entropy. If you believe in something like the Many-Worlds interpretation, then the evolution of the wavefunction is completely unitary and reversible; it just appears to be irreversible, since we don’t have access to the entire wavefunction. Rather, we belong in some particular semiclassical history, separated out from other histories by the process of decoherence. In that case, the fact that wavefunctions appear to collapse in one direction of time but not the other is not an explanation for the arrow of time, but in fact a consequence of it. The low-entropy early universe was in something close to a pure state, which enabled countless “branchings” as it evolved into the future.

This sounds like a hard problem. Is there any way the arrow of time can be explained dynamically?

I can think of two ways. One is to impose a boundary condition that enforces one end of time to be low-entropy, whether by fiat or via some higher principle; this is the strategy of Roger Penrose’s Weyl Curvature Hypothesis, and arguably that of most flavors of quantum cosmology. The other is to show that reversibility is violated spontaneously — even if the laws of physics are time-reversal invariant, the relevant solutions to those laws might not be. However, if there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve. The trick, of course, is to implement such a strategy in a well-founded theoretical framework, one in which the particular way in which the universe evolves is by creating regions of post-Big-Bang spacetime such as the one in which we find ourselves.

Why do we remember the past, but not the future?

Because of the arrow of time.

Why do we conceptualize the world in terms of cause and effect?

Because of the arrow of time.

Why is the universe hospitable to information-gathering-and-processing complex systems such as ourselves, capable of evolution and self-awareness and the ability to fall in love?

Because of the arrow of time.

Why do you work on this crazy stuff with no practical application?

I think it’s important to figure out a consistent story of how the universe works. Or, if not actually important, at least fun.

161 thoughts on “Arrow of Time FAQ”

  1. It’s 9:06 am and Internet Explorer says you published this at 9:13 am, violating the Arrow of Time.

    Before causality re-asserts itself, I’ve got a question: does a wave function of one particle have an entropy? i.e., if a particle is in a superposition of states is there an entropy associated with it?

    And if it does, and the entropy gets eliminated when the particle is measured, where does it go?

  2. Thank you for that… I have been pondering over Entropy and the Second Law in a very layman role (my engineering degree notwithstanding) for a very long time now.

    You helped clear up several niggling doubts I had about the fundamental meaning and the macroscopic implications of the second law.

  3. I still don’t understand why CP violation (and hence T violation) can’t play a role in this. You present the problem as a problem with time reversal, then you mysteriously say that CPT (and not just T) is what matters. Why?

  4. Very nice post, indeed!

    I also have the same question as Anonymous: why does thermodynamics care about CPT, rather than T alone?

    To andy.s:

    “does a wave function of one particle have an entropy? i.e., if a particle is in a superposition of states is there an entropy associated with it?”

    Yes it does. For pure states (which include the coherent superpositions) the entropy is zero, while for mixed states “rho”, the (von Neumann) entropy S is given by:

    S(rho) = -tr[rho ln(rho)]

    The von Neumann entropy is a measure of our ignorance of the state rho. For pure states there exists a basis in which a measurement always gives the same outcome, which is why in this case S=0.
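
    A small numerical check of this (my addition, not PK’s): the entropy of a coherent superposition really is zero, while a 50/50 classical mixture of up and down has entropy ln 2.

        import numpy as np

        def von_neumann_entropy(rho):
            """S(rho) = -tr[rho ln rho], computed from the eigenvalues of rho."""
            p = np.linalg.eigvalsh(rho)
            p = p[p > 1e-12]  # drop numerical zeros; the limit of p*ln(p) is 0
            return float(-np.sum(p * np.log(p)))

        plus = np.array([1.0, 1.0]) / np.sqrt(2)  # the superposition (up + down)/sqrt(2)
        pure = np.outer(plus, plus)               # rho = |+><+|, a pure state
        mixed = np.eye(2) / 2                     # a 50/50 classical mixture

        print(von_neumann_entropy(pure))   # ~0.0
        print(von_neumann_entropy(mixed))  # ~0.693 = ln 2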

  5. Hi. Good post Sean!
    You write:

    1. The usual “derivations” of the second law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past.

    I can understand why entropy was actually lower in the past.
    But I don’t get how the second law “if taken at face value” implies the opposite, that entropy should be higher in the past?

    2. Also, what role does gravity play in the arrow of time?
    Does gravity decrease entropy into the future due to gravitational attraction/clumping of matter?

    Thank you

  6. Pingback: Arrow of time and origin of universe « Entertaining Research

  7. About T-violation and the arrow of time: the simple answer is that the weak interactions are perfectly unitary, even if they are not T-invariant. They don’t affect the entropy in any way, so they don’t help with the arrow of time.

    A bit more carefully: if you did want to explain the arrow of time using microscopic dynamics, you would have to argue that there exist more solutions to the equations of motion in which entropy grows than solutions in which entropy decreases. But CPT invariance is enough to guarantee that that’s not true. For any trajectory (or ensemble of trajectories, or evolution of a distribution function) in which the entropy changes in one way, there is another trajectory (or set…) in which the entropy changes in precisely the opposite way: the CPT conjugate. Such laws of physics do not in and of themselves pick out what we think of as the arrow of time.

    People talk about the “arrow of time of the weak interactions,” but ask yourself: in which direction does it point? There just isn’t any direct relationship to entropy.
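
    The unitarity point is easy to check numerically; here is a quick sketch (mine, reusing the entropy function from the comment above): evolve any mixed state by any unitary, T-violating or not, and the von Neumann entropy does not budge.

        import numpy as np

        def entropy(rho):
            p = np.linalg.eigvalsh(rho)
            p = p[p > 1e-12]
            return float(-np.sum(p * np.log(p)))

        rng = np.random.default_rng(1)
        A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        U, _ = np.linalg.qr(A)  # QR of a random complex matrix gives a random unitary
        rho = np.diag(rng.dirichlet(np.ones(4))).astype(complex)  # a random mixed state

        print(entropy(rho))                   # some value S
        print(entropy(U @ rho @ U.conj().T))  # the same S, to machine precision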

  8. Khurram, it’s not the Second Law that predicts the entropy was higher in the past — it’s the logic underlying attempts to derive the Second Law without explicit reference to a low-entropy boundary condition in the past.

  9. I was wondering if you might know if there is a quantum version of the Fluctuation Theorem.
    I know that in classical systems it can quantify (under certain assumptions) the probability of the entropy flowing opposite to the direction that is stated by the second law.

    Most Fluctuation Theorem articles are written by D.J. Evans if you want to look it up.

  10. Re: andy.s, PK:

    S(rho) = -tr[rho ln(rho)]

    (give or take a factor of k_B…)

    To andy.s – this is surprising if you think of a wavefunction represented in a basis – |psi> has lots of coefficients, which can take on many different values: so why isn’t there entropy in the coefficients? The resolution is that the basis representation is not meaningful by itself – you can easily change it without changing the system, by rotating the basis by U. In the absence of any preferred basis, there is no meaning to rotating the state, or equivalently rotating the observer; the thing stays the same no matter how you look at it.

    (In the Copenhagen picture) measurement breaks this symmetry; a measuring process involves a preferred basis, the eigenbasis of the observable. (Say you’re measuring a spin component – you *choose* which way is spin-up, and this breaks the symmetry of there being no preferred direction.) So in that context, the coefficients *relative to this one basis* suddenly become meaningful, informationful. In the density matrix formalism PK brought up, this is the mixed state ρ (rho) which goes in the von Neumann formula. It’s really nothing more than Shannon or Gibbs entropies applied to possible measurement outcomes.

    So yes, the coefficients do contain entropy – when you measure them!

    Thomas S.
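
    To put numbers on Thomas S.’s point (a sketch of my own): the Shannon entropy of the measurement outcomes depends on which basis you choose to measure in.

        import numpy as np

        def outcome_entropy(state, basis):
            """Shannon entropy (in nats) of the Born-rule outcome probabilities."""
            probs = [abs(np.vdot(b, state))**2 for b in basis]
            return float(-sum(p * np.log(p) for p in probs if p > 1e-12))

        up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        plus, minus = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)

        print(outcome_entropy(up, [up, down]))     # 0.0: definite outcome in the z-basis
        print(outcome_entropy(up, [plus, minus]))  # ln 2: maximally uncertain in the x-basis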

  11. The problem is with the CPT theorem, or rather its assumption that translation along timelike geodesics can be represented by a real parameterization of a group of complex unitary operators of the form exp(iHt). Only in a limited set of manifolds can time translation be represented by this unitary group.

    For more general sets of manifolds (curved spacetime) the representation of time translation is not a simple parameterized unitary group; in fact it neither has a simple parameterization nor does it contain unitary operators at all. However, each of these groups is dense in a characteristic group of unitary operators, corresponding to the fundamental Hamiltonian operator of the manifold, which can then be mapped to the stress-energy of the manifold.

    In more general manifolds the bijective nature of translations along timelike geodesics is lost, so that the operators are no longer invertible, but still have unit Banach norm. This is only possible for linear operators on infinite-dimensional Hilbert spaces.

  12. …if you think of a wavefunction represented in a basis – |psi> has lots of coefficients, which can take on many different values: so why isn’t there entropy in the coefficients?

    Yeah, that’s what I was wondering.

    The resolution is that the basis representation is not meaningful by itself – you can easily change it without changing the system, by rotating the basis by U. …

    If I measure a spin in the z-basis, it can be up or down, but in the x-basis it’s
    (up + down)/sqrt(2)

    In the density matrix formalism PK brought up, this is the mixed state ρ (rho) which goes in the von Neumann formula. It’s really nothing more than Shannon or Gibbs entropies applied to possible measurement outcomes.

    So yes, the coefficients do contain entropy – when you measure them!

    OK, I need to read up more on density matrices to sort that out. My question did actually relate to the topic of this thread, but I need to learn a bit more about the subject to ask it properly.

  13. This still doesn’t answer a question I keep raising about time: does time cause change/motion, or does change/motion cause time? If it is the former, then we are traveling along this dimension of time from past events to future ones; but if it is the latter, then as change/motion adjusts circumstances, former ones are replaced by current ones, so there is the illusion of a dimension going from future potential to past circumstance. Much as tomorrow becomes yesterday, as the earth rotates relative to the sun.

  14. Hi Sean. I find this arrow of time stuff really fascinating — kudos on the excellent FAQ. A few questions:

    (1) Weren’t you the one who talked in a previous post about how not everything has to “happen for a reason” when we’re talking about the universe as a whole (even though within the universe we have a notion of cause and effect)? So why would we think there would be a reason for the universe to start in a low entropy state, as opposed to that being “just the way it is”?

    (2) If you come up with an idea for why the universe starts in a low entropy state (and I know you’ve suggested some ideas in this vein), is there any hope for an empirical test? What could such a test possibly look like? Or are we just hoping to find a theory that’s so “elegant” that the scientific community accepts it without experimental proof?

    (3) One thing that’s always confused me about entropy: As you say, it’s the number of microstates that constitute a macrostate (or really the log of that number). But isn’t the definition of a macrostate somewhat dependent on us? I mean, if we developed a new kind of experiment that could distinguish two previously indistinguishable microstates, they wouldn’t be the same macrostate anymore, right? In that case, does the second law mean entropy will increase regardless of how we define the macrostates (as long as we keep those definitions consistent)– or only for some preferred assignment of microstates to macrostates?

  15. A superb and very useful post, much appreciated. I particularly applaud your nuanced explanation of questions for which different interpretations offer different answers. That’s always a good thing.

    Penrose…no matter where I turn in learning about physics, that guy is there, with some brilliant, difficult idea that’s radically different from the way other experts see it. An interesting character, to say the least.

  16. TimG, let’s take a stab at your questions:

    1) It’s certainly possible that a low-entropy initial condition is just the way the universe is, and I’m careful to emphasize that possibility in talks and papers. But it’s also possible that it’s a clue to a deeper explanation. The fact that some things “just are” doesn’t necessarily mean that it’s always clear which things those are.

    2) At the moment I don’t know of any experimental tests. But my aim is a bit lower: I just would like to have at least one model that is consistent both with what we know about the laws of physics and with what we observe about the universe. If we had more than one such model, experiment would be necessary to decide between them.

    3) How we do the coarse-graining to define which microstates are macroscopically equivalent is a classic question. My personal belief is that the choices we make to divide the space of states up into “equivalent” subspaces are not arbitrary, but are actually determined by features of the laws of physics. (For example, the fact that interactions are local in space.) The project of actually turning that belief into a set of rigorous results is far from complete, as far as I know.
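
    On point 3), here is a toy sketch of the graining-dependence (my own illustration): the very same equally-likely microstates carry different entropies under coarser and finer macrostate partitions.

        from math import comb, log

        N = 10  # microstates: all 2^10 head/tail strings of 10 coins

        # Coarse graining: macrostate = number of heads. The k = 5 macrostate
        # contains C(10, 5) = 252 microstates, so its entropy is ln 252.
        print("coarse, k = 5:", log(comb(N, 5)))  # ~5.53

        # Maximally fine graining: macrostate = the exact string. Each macrostate
        # holds a single microstate, so every state has entropy ln 1 = 0.
        print("fine, any string:", log(1))        # 0.0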

  17. As I argued before, “time” as we experience it is not mathematically modelable anyway. Sure, there are diagrams plotting things as a function of time, world lines, etc., but let’s get real (heh): that’s like a tinkertoy construction sitting on a table, not like something we “move” along, with a “present” that we live in. (Pls. don’t blithely reach for “illusion” talk, OK? Yeah, maybe, but don’t make it so easy for yourself…)

    One weird thing about time-reversibility: Suppose I could intervene in a time-reversed world W’. I could deflect a bullet that (to me) had popped out of a tree it “hit”, and then – instead of reentering the gun barrel – it would smack into maybe some other tree that it shouldn’t be “coming out of” from the point of view of W’. That would be weird, and it would ruin the whole “past” of W’. Well, we think our own past has already happened, so what if (if time flow really is relative) some Being did that to us — how could it possibly alter our past? Food for thought. I figure, worlds either can’t be intervened in from the outside, or time flow is absolute.

    Also, remember that if you believe the wave function is “real”, then time flow is preferred: a WF expands out from an emission point and then “vanishes” when absorbed – that would look wrong if emission and absorption were interchanged.

    “tyrannogenius”

  18. Low Math, Meekly Interacting

    Cool post! Thanks Dr. Carroll!

    I’ve read in several places that the “wavefunction of the universe”, one that satisfies the Wheeler-DeWitt equation, anyway, is essentially atemporal. The universe just “is”, and talk about “initial” and “final” and everything in between needn’t apply. I’ve also read that it somehow follows that time is an “emergent” property of this wavefunction.

    To put it very crudely, is it reasonable to conclude that we can only shed our “temporal chauvinism” by ditching time altogether? I’m completely dumbfounded by the notion of an “emergent” anything in the absence of a temporal measure by which I can determine something has “emerged” from something else. But there is this “emergent time” idea out there that I’ve encountered, and I wonder if you can comment on it!

  19. Why should Physics want to explain a contingent fact about a particular initial condition? Given an initial condition on a time-like hypersurface as a mathematical model for the universe, we can determine in which direction a dynamics (supposed here to be deterministic, whether applied to a quantum or to a classical state) causes the entropy to increase or to decrease. We can determine the answer to the same question on different time-scales and perhaps obtain different answers, but still we would get a graph of the evolution of entropy over time. We could also determine the answer to the same question for other dynamics and perhaps again obtain different answers.
    Put differently, the arrow of time is determined by initial conditions, together with other contingent facts; “explain” the initial conditions and you’ve explained the arrow of time, but “explain” the arrow of time and you have explained one bit of the total initial conditions.

    You may not get this second comment, but I worry about foundational arguments that invoke entropy extensively, partly because entropy is not a Lorentz invariant quantity. The Lorentz invariant quantum fluctuations of the vacuum make no contribution, for example. Entropy is the thermodynamic dual to thermal fluctuations, what is the thermodynamic dual to quantum fluctuations? Since an accelerating observer sees thermal fluctuations where an inertial observer sees only quantum fluctuations (the Unruh effect), presumably entropy is different also for relatively accelerating observers (and presumably also for observers in different gravitational environments).
    All the best with your metaphysics, nonetheless.

  20. Low Math, Meekly Interacting

    I guess I could pose my question more succinctly: If one has the explanation for the emergence of time per se, is it reasonable to expect one might get the arrow of time “for free”?
