Arrow of Time FAQ

The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.

So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)

Without further ado:

What is the arrow of time?

The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.

But entropy decreases all the time; we can freeze water to make ice cubes, after all.

Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.

So what’s the big deal?

In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.

And how do we reconcile them?

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.

Wasn’t this all figured out over a century ago?

Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
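To make the counting concrete, here is a toy sketch (coins standing in for atoms; the numbers are purely illustrative): take the number of heads among N tossed coins as the macrostate, and each particular sequence of heads and tails as a microstate.

```python
import math

# Toy model: the "macrostate" of N coins is the number of heads;
# a "microstate" is one particular sequence of heads and tails.
# Entropy (in units where Boltzmann's constant k_B = 1) is the log
# of the number of microstates compatible with the macrostate.
N = 100

def entropy(heads, n=N):
    """Log of the number of coin sequences with exactly `heads` heads."""
    return math.log(math.comb(n, heads))

print(entropy(0))    # "all tails": exactly one microstate, entropy 0
print(entropy(50))   # "half heads": ~1e29 microstates, entropy ~66.8
```

With 100 coins there is exactly one way to be “all tails” but about 10^29 ways to be “half heads” — which is the whole content of “entropy tends to increase”: random jostling overwhelmingly lands you in the macrostates with more microstates.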

Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the reversibility and recurrence objections of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the Second Law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.

Does inflation explain the low entropy of the early universe?

Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.

Does that mean that inflation is wrong?

Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.

My theory of (brane gases/loop quantum cosmology/ekpyrosis/Euclidean quantum gravity) provides a very natural and attractive initial condition for the universe. The arrow of time just pops out as a bonus.

I doubt it. We human beings are terrible temporal chauvinists — it’s very hard for us not to treat “initial” conditions differently than “final” conditions. But if the laws of physics are truly reversible, these should be on exactly the same footing — a requirement that philosopher Huw Price has dubbed the Double Standard Principle. If a set of initial conditions is purportedly “natural,” the final conditions should be equally natural. Any theory in which the far past is dramatically different from the far future is violating this principle in one way or another. In “bouncing” cosmologies, the past and future can be similar, but there tends to be a special point in the middle where the entropy is inexplicably low.

What is the entropy of the universe?

We’re not precisely sure. We do not understand quantum gravity well enough to write down a general formula for the entropy of a self-gravitating state. On the other hand, we can do well enough. In the early universe, when it was just a homogeneous plasma, the entropy was essentially the number of particles — within our current cosmological horizon, that’s about 10^88. Once black holes form, they tend to dominate; a single supermassive black hole, such as the one at the center of our galaxy, has an entropy of order 10^90, according to Hawking’s famous formula. If you took all of the matter in our observable universe and made one big black hole, the entropy would be about 10^120. The entropy of the universe might seem big, but it’s nowhere near as big as it could be.
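For readers who want to check the orders of magnitude, here is a quick back-of-the-envelope sketch using the Bekenstein-Hawking formula S/k_B = 4πGM²/ħc. The 10^52 kg figure for the matter within our horizon is a rough illustrative assumption, not a precise measurement:

```python
import math

# Bekenstein-Hawking entropy of a Schwarzschild black hole,
# in units of Boltzmann's constant:  S/k_B = 4*pi*G*M^2 / (hbar*c)
G     = 6.674e-11   # gravitational constant (SI)
hbar  = 1.055e-34   # reduced Planck constant
c     = 2.998e8     # speed of light
M_sun = 1.989e30    # solar mass in kg

def bh_entropy(mass_kg):
    return 4 * math.pi * G * mass_kg**2 / (hbar * c)

# Sgr A*, the supermassive hole at the galactic center (~4 million M_sun):
print(f"{bh_entropy(4.1e6 * M_sun):.1e}")   # order 10^90

# All the matter within our horizon (very roughly 1e52 kg, an
# illustrative figure) collapsed into a single black hole:
print(f"{bh_entropy(1e52):.1e}")            # order 10^120
```

Because the entropy scales as M², lumping matter into ever-bigger black holes wins enormously over keeping it as diffuse plasma, which is why the gravitational numbers dwarf the 10^88 of the early universe.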

If you don’t understand entropy that well, how can you even talk about the arrow of time?

We don’t need a rigorous formula to understand that there is a problem, and possibly even to solve it. One thing is for sure about entropy: low-entropy states tend to evolve into higher-entropy ones, not the other way around. So if state A naturally evolves into state B nearly all of the time, but almost never the other way around, it’s safe to say that the entropy of B is higher than the entropy of A.

Are black holes the highest-entropy states that exist?

No. Remember that black holes give off Hawking radiation, and thus evaporate; according to the principle just elucidated, the thin gruel of radiation into which the black hole evolves must have a higher entropy. This is, in fact, borne out by explicit calculation.

So what does a high-entropy state look like?

Empty space. In a theory like general relativity, where energy and particle number and volume are not conserved, we can always expand space to give rise to more phase space for matter particles, thus allowing the entropy to increase. Note that our actual universe is evolving (under the influence of the cosmological constant) to an increasingly cold, empty state — exactly as we should expect if such a state were high entropy. The real cosmological puzzle, then, is why our universe ever found itself with so many particles packed into such a tiny volume.

Could the universe just be a statistical fluctuation?

No. This was a suggestion of Boltzmann’s and Schuetz’s, but it doesn’t work in the real world. The idea is that, since the tendency of entropy to increase is statistical rather than absolute, starting from a state of maximal entropy we would (given world enough and time) witness downward fluctuations into lower-entropy states. That’s true, but large fluctuations are much less frequent than small fluctuations, and our universe would have to be an enormously large fluctuation. There is no reason, anthropic or otherwise, for the entropy to be as low as it is; we should be much closer to thermal equilibrium if this model were correct. The reductio ad absurdum of this argument leads us to Boltzmann Brains — random brain-sized fluctuations that stick around just long enough to perceive their own existence before dissolving back into the chaos.

Don’t the weak interactions violate time-reversal invariance?

Not exactly; more precisely, it depends on definitions, and the relevant fact is that the weak interactions have nothing to do with the arrow of time. They are not invariant under the T (time reversal) operation of quantum field theory, as has been experimentally verified in the decay of the neutral kaon. (The experiments found CP violation, which by the CPT theorem implies T violation.) But as far as thermodynamics is concerned, it’s CPT invariance that matters, not T invariance. For every solution to the equations of motion, there is exactly one time-reversed solution — it just happens to also involve a parity inversion and an exchange of particles with antiparticles. CP violation cannot explain the Second Law of Thermodynamics.

Doesn’t the collapse of the wavefunction in quantum mechanics violate time-reversal invariance?

It certainly appears to, but whether it “really” does depends (sadly) on one’s interpretation of quantum mechanics. If you believe something like the Copenhagen interpretation, then yes, there really is a stochastic and irreversible process of wavefunction collapse. Once again, however, it is unclear how this could help explain the arrow of time — whether or not wavefunctions collapse, we are left without an explanation of why the early universe had such a small entropy. If you believe in something like the Many-Worlds interpretation, then the evolution of the wavefunction is completely unitary and reversible; it just appears to be irreversible, since we don’t have access to the entire wavefunction. Rather, we belong in some particular semiclassical history, separated out from other histories by the process of decoherence. In that case, the fact that wavefunctions appear to collapse in one direction of time but not the other is not an explanation for the arrow of time, but in fact a consequence of it. The low-entropy early universe was in something close to a pure state, which enabled countless “branchings” as it evolved into the future.

This sounds like a hard problem. Is there any way the arrow of time can be explained dynamically?

I can think of two ways. One is to impose a boundary condition that enforces one end of time to be low-entropy, whether by fiat or via some higher principle; this is the strategy of Roger Penrose’s Weyl Curvature Hypothesis, and arguably that of most flavors of quantum cosmology. The other is to show that reversibility is violated spontaneously — even if the laws of physics are time-reversal invariant, the relevant solutions to those laws might not be. However, if there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve. The trick, of course, is to implement such a strategy in a well-founded theoretical framework, one in which the particular way in which the universe evolves is by creating regions of post-Big-Bang spacetime such as the one in which we find ourselves.

Why do we remember the past, but not the future?

Because of the arrow of time.

Why do we conceptualize the world in terms of cause and effect?

Because of the arrow of time.

Why is the universe hospitable to information-gathering-and-processing complex systems such as ourselves, capable of evolution and self-awareness and the ability to fall in love?

Because of the arrow of time.

Why do you work on this crazy stuff with no practical application?

I think it’s important to figure out a consistent story of how the universe works. Or, if not actually important, at least fun.

161 Comments

  1. About white holes: they are just the time-reversal of black holes, and the two are definitely not the same thing, since black holes are not symmetric under time reversal (even when we take Hawking radiation into account). It’s correct to say that the reasons we find black holes but not white holes are ultimately thermodynamic in origin.

    Think about a real astrophysical black hole. It is born and evolves by having matter dumped into it. Ultimately it evaporates away via Hawking radiation, a thermal bath with temperature inversely proportional to the hole’s mass. So a white hole would be formed from an *inward* flux of thermal radiation — you would see nothing if you looked at the forming white hole, but you would see thermal particles coming mysteriously from the outside universe in a spherically symmetric configuration. The radiation would start out high-temperature, and gradually cool. Then the white hole would start spitting out highly non-thermal matter. All along the entropy would be decreasing.
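For concreteness, the “thermal bath with temperature inversely proportional to the hole’s mass” is Hawking’s T = ħc³/8πGMk_B. A quick numerical sketch (a solar-mass hole is used purely for illustration):

```python
import math

# Hawking temperature of a Schwarzschild black hole:
#   T = hbar * c^3 / (8 * pi * G * M * k_B)
# Note that T is inversely proportional to the mass M.
G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23
M_sun = 1.989e30  # solar mass in kg

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

# A solar-mass hole is far colder than the CMB, and bigger holes are colder:
print(f"{hawking_temperature(M_sun):.1e} K")       # roughly 6e-8 K
print(f"{hawking_temperature(10 * M_sun):.1e} K")  # ten times colder
```

So for the time-reversed story above, the incoming radiation “cools” as the white hole grows, exactly mirroring a black hole heating up as it shrinks.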

  2. Neil

    In principle a universe might contain regions obeying different arrows of time, and still obey the same microscopic laws that we’re familiar with, but the bottom line is consistency: you can’t “change” anyone’s “past” if that really is their past, or you’re simply making contradictory claims about what happened at the relevant time and place. (Well, you could have a many-worlds structure that makes some kind of sense of that, but I’m talking classically.)

    I don’t know of any rigorous results on this, but I expect that regions obeying different arrows would necessarily be separated by borders that obeyed no arrow at all, and that people who were time-reversed with respect to each other couldn’t actually survive in each other’s environments. It’s fun to day-dream about scenarios where time-reversed people come into contact, and the kind of havoc that would play with their notion of free will … but like most time-travel scenarios, in reality you either have to “split” the universe and allow multiple histories, or simply accept that consistency rules and that crossing from one arrow to the other would most likely just be fatal. The one thing that’s certain is that a woman from Planet Clockwise couldn’t wander freely around Planet Anticlockwise like an actor blue-screened into a backwards-playing movie, watching eggs unscramble, while the locals witnessed her actions having the same comical effects. And even if you could find a physically possible history of the universe that looked like that, I suspect it would be incredibly rare and special among all universes with multiple arrows, most of which would instead have isolated pockets obeying their distinct arrows of time.

  3. Sean wrote:
    Think about a real astrophysical black hole. It is born and evolves by having matter dumped into it. Ultimately it evaporates away via Hawking radiation, a thermal bath with temperature inversely proportional to the hole’s mass. So a white hole would be formed from an *inward* flux of thermal radiation

    Would a white hole necessarily need to have such an inward flux of radiation? The original concept of a white hole was just a T-reversed black hole in a description based only on GR, right? So have there been any analyses of quantum field theory in the curved spacetime of a white hole to show that it would require such time-reversed Hawking radiation?

    As an analogy, you could in principle write a description of the orbits of planets in our solar system in GR terms, and the time-reversed version of this would also be a valid GR solution–but I think it would be completely permissible to have a solar system that looked like a time-reversed version of ours in terms of its GR description (all the moons and planets orbiting in opposite directions and so forth), yet which would have a “normal” arrow of time at the level of things like solar radiation. So I wonder if there are any rigorous physical arguments showing that you couldn’t have something that behaved like a time-reversed black hole in its own GR description (spitting matter and energy out of the event horizon, with nothing being able to enter) but which had a normal arrow of time in terms of Hawking radiation and other details.

    Along these lines, what do you think of the thought-experiment I suggested in post #71? In that post I imagined a giant supercomputer simulation of a black hole which is so detailed that it simulates every particle in its neighborhood, including all the photons of Hawking radiation (with the simulation’s rules perhaps based on some yet-undiscovered theory of quantum gravity), where we then take some later state of the simulation and reverse all the particle’s momenta as well as whatever else needs to be reversed in order to get a perfect time-reversed version of the original simulation’s run. This should result in a simulated white hole, but what if we now perturb the initial conditions of the simulation slightly, in a region outside the event horizon? Wouldn’t the perturbation eventually cause the arrow of time outside the hole to flip back to increasing-entropy (so that you would no longer see random photons from throughout space converging on the hole as time-reversed Hawking radiation), yet since the perturbation can’t affect anything inside the event horizon, wouldn’t the object continue to behave like a white hole, spitting matter and energy out rather than pulling it in? Shouldn’t this also be a valid solution to the equations of whatever fundamental theory is guiding the simulation?

  4. My theory? The missing link: you, the observer. 😉 Or wait, that’s called the anthropic principle, right? I see no reason why I couldn’t be a Boltzmann Brain. Or you, for that matter. But not really both…

  5. Jesse,

    Where else would the white hole come from? As Sean said, it’s a time-reversal of a black hole. Emission of Hawking radiation is how black holes end, so absorption of the reverse of Hawking radiation would be how white holes begin.

  6. “Think about a real astrophysical black hole. It is born and evolves by having matter dumped into it. Ultimately it evaporates away via Hawking radiation, a thermal bath with temperature inversely proportional to the hole’s mass. So a white hole would be formed from an *inward* flux of thermal radiation — you would see nothing if you looked at the forming white hole, but you would see thermal particles coming mysteriously from the outside universe in a spherically symmetric configuration. The radiation would start out high-temperature, and gradually cool. Then the white hole would start spitting out highly non-thermal matter. All along the entropy would be decreasing.”

    There is a lot of thought in Sean’s posts, and it seems to me this is an excellent summary of the astrophysical process you are discussing.

    Whenever people discuss “time reversal” it makes me nervous, because I think it is clear from field evidence that the universe (the one we observe, anyway!) has a single, one-directional process time dimension.

    Although there is no “outside” to a GR universe, and such an observing frame of reference is not possible, the analogy of the merry-go-round is appropriate. Viewed from the side, people closer to us move in one direction, while people on the other side of the ride (really DO) move in the opposite direction…but there is no inverse process.

    It is kind of like an old-fashioned 33 RPM record where each time the record completes a 360 degree turn the needle finds itself in almost, but not quite, the same location…hence the idea of the phylogenically developing quasi-static universe in which all information is inversely mapped and semi-permanent but subject to very gradual change.

    The Humpty Dumpty analogy is a good one. Humpty falls off the wall, and since all the king’s horses and all the king’s men can’t put Humpty together again, we make an omelet! However, when we feed the omelet which was “Humpty” to a chicken, it makes a perfect egg, just like Humpty Dumpty, with the same DNA and chemical structure…but just a few tiny differences. Since we couldn’t (in our universe anyway) compare the previous Humpty to his successor, it would be impossible to tell them apart. The egg, the chicken…all information continues perpetually even though the time process has an irreversible direction…

  7. Jason Dick wrote:
    Where else would the white hole come from? As Sean said, it’s a time-reversal of a black hole. Emission of Hawking Radiation is how black holes end, so absorption of the reverse of Hawking radiation would be how white holes begin.

    Well, a large black hole could also just be destroyed along with everything else in a Big Crunch, so shouldn’t it be possible in principle that a moderate-sized white hole would just have existed since the Big Bang? Also, the physics of the Planck scale probably isn’t well-enough understood to say exactly what happens to an evaporating black hole in its final moments, so presumably we also can’t say exactly how a Planck-scale white hole might form. But once we have the smallest possible object that could still be called a white hole, the same uncertainty applies: just as I’m not sure whether a macro-white hole would necessarily have to absorb time-reversed Hawking radiation, or whether there might be other valid white hole solutions once you incorporate quantum effects into general relativity, I’m similarly not sure whether a micro-white-hole would require time-reversed Hawking radiation to make it grow, or whether there might be other ways it could grow. (What if, instead of emitting normal matter and energy, it emitted exotic matter with negative energy? If you dump exotic matter into a black hole, does it grow or shrink?)

    In any case, my main question is about what is physically allowable behavior for an already-existing white hole, not how one would form in the first place. In GR you do have permanent black holes as an allowable solution, even if this is unrealistic in our universe. Of course GR alone does not include Hawking radiation which normally causes the black hole to have a finite lifetime, but I think if you confined a black hole to a finite mirrored box, there could be an “equilibrium” solution where the energy lost to Hawking radiation was balanced by the same radiation bouncing off the inside of the box and falling back into the black hole–I wonder, if one knew enough about quantum gravity to define the set of distinct “microstates” for this closed system, then if one picked a microstate randomly using a uniform probability distribution on the entire phase space, presumably you’d be equally likely to get a black-hole-at-equilibrium as a white-hole-at-equilibrium? Could you even distinguish the two at equilibrium?

  8. We usually assume that projecting a movie of Humpty Dumpty smashing on the kitchen floor in forward and reverse is a necessary indication of impossible time reversal with an equally impossible inverse process…but I’m not inclined to be overly quick in presuming that assumption is true…for reasons rooted in the merry-go-round analogy. The French have done a lot of work on the process of geometric inversion as it relates to a marginally closed geometry with a Schwarzschild metric in GR…and that work is very impressive.

  9. “Of course GR alone does not include Hawking radiation which normally causes the black hole to have a finite lifetime, but I think if you confined a black hole to a finite mirrored box, there could be an “equilibrium” solution where the energy lost to Hawking radiation was balanced by the same radiation bouncing off the inside of the box and falling back into the black hole–”

    Good thought! Hawking has done further work recently which indicates that there is no “information paradox”.

  10. Way to slip an allusion to Andrew Marvell in your explanation of Boltzmann’s and Schuetz’s suggestion. Nothing spices up science like a good literary reference.

  11. Pingback: it’s about time» Blog Archive » Time will tell…

  12. Sean said,

    How we do the coarse-graining to define which microstates are macroscopically equivalent is a classic question. My personal belief is that the choices we make to divide the space of states up into “equivalent” subspaces are not arbitrary, but are actually determined by features of the laws of physics. (For example, the fact that interactions are local in space.) The project of actually turning that belief into a set of rigorous results is far from complete, as far as I know.

    If I were a smarter person, I’d probably spend at least a little time trying to apply category theory to this problem (see this post by John Armstrong). It’s not hard to imagine a first step:

    Take a classical harmonic oscillator. It goes round and round in phase space, trading off position for momentum and vice versa. Build a category by taking the points in phase space as your objects and time-evolution operations as your morphisms. Ellipses in phase space — curves of constant energy — then become isomorphism classes, because the oscillator motion is periodic, and for any A and B connected by a morphism, you can find another time evolution which takes B back into A. Per Shang-Keng Ma, an entropy can be defined as the logarithm of the phase-space volume explored by the system over a given timescale; the states relevant for thermodynamics (mumble mumble microcanonical mumble mumble) would be the decategorification of the states used at the statistical-mechanical level.
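    A minimal numerical sketch of that phase-space picture (unit mass and frequency are illustrative choices, and this is just the classical oscillator, not the category-theoretic machinery):

```python
import math

# Classical harmonic oscillator with unit mass and unit frequency:
# exact solution x(t) = A cos(t), p(t) = -A sin(t), a circle in phase space.
m, omega, A = 1.0, 1.0, 2.0

def state(t):
    x = A * math.cos(omega * t)
    p = -m * omega * A * math.sin(omega * t)
    return x, p

def energy(x, p):
    return p**2 / (2 * m) + 0.5 * m * omega**2 * x**2

# The trajectory stays on a single constant-energy curve...
energies = [energy(*state(t)) for t in (0.0, 0.7, 1.9, 5.3)]
print(energies)  # each equals 0.5 * m * omega**2 * A**2 = 2.0, up to rounding

# ...and the motion is periodic, so for any two points on the curve there
# is a time evolution taking one to the other, and another taking it back.
period = 2 * math.pi / omega
x0, p0 = state(0.3)
x1, p1 = state(0.3 + period)
print(abs(x1 - x0) < 1e-9 and abs(p1 - p0) < 1e-9)  # True
```

    The periodicity is what makes every point on an ellipse reachable from every other in both directions, which is the isomorphism-class structure described above.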

    Coarse-graining might be represented as a functor, or something like that, establishing some kind of equivalence which lets you have a weaker notion of isomorphism. Locality and whatnot would then become conditions on the functors you can construct.

    Why is it I only think about category theory really late at night?

  13. Oh, and I’m proud to say that I recognized the Marvell allusion, too, although I had to get it second-hand, via Nicholas Meyer’s The Seven-Per-Cent Solution. Most of my “culture” is probably second- or third-hand, now that I think about it. . . .

  14. Well, a large black hole could also just be destroyed along with everything else in a Big Crunch, so shouldn’t it be possible in principle that a moderate-sized white hole would just have existed since the Big Bang?

    But since a white hole is the time reversal of a black hole, its entropy is continually decreasing. Now, presumably one could be constructed, if somebody so desired, though that would require an obscenely specific knowledge of the physics of black holes, as well as obscenely accurate methods of producing the input to the white hole. Even this may be impossible, however, if quantum decoherence messes things up.

    But I can’t imagine how a white hole could form naturally in a universe where globally entropy is increasing. The probabilities of it forming through random processes are just obscenely small (though, granted, a Planck-scale black hole may well be as likely to form as a Planck-scale white hole through vacuum fluctuations).

    presumably you’d be equally likely to get a black-hole-at-equilibrium as a white-hole-at-equilibrium? Could you even distinguish the two at equilibrium?

    No. This is the entire point of Sean’s argument that you have to resort to specific initial conditions to have a region of the universe where there exists a definite arrow of time: at equilibrium there is none. Any system in equilibrium is invariant under time reversal, and thus a black hole in equilibrium would be indistinguishable from its time reversal, a white hole in equilibrium (say, in an anti-de Sitter universe with no other matter, if I’m remembering correctly that the horizon of anti-de Sitter space acts much like a “mirror” for radiation).

  15. Jason wrote:

    But since a white hole is the time reversal of a black hole, its entropy is continually decreasing.

    I wonder if there’s a clear enough distinction being made here between two quite different scenarios:

    (1) You describe a universe containing a black hole, doing all the things a typical black hole does: being formed by a collapsing star, absorbing lots of incoming matter and gaining entropy, and then (eventually, over a very long time — assuming a cosmology such that the CMB becomes cooler than the black hole’s temperature) evaporating via Hawking radiation.

    You then time-reverse all of this together, and call it a universe with a white hole. But it’s not! Obviously if you time-reverse the whole universe, the cosmological arrow of time would be flipped along with everything else, and the “time reversal” would have no physical significance whatsoever. If we merely pretend that we’ve flipped the arrow of time while actually making no meaningful physical change, we just get a time-reversed description of our own universe with a black hole in it, which will obviously violate the Second Law and sound absurdly unlikely.

    (2) You describe the spacetime geometry of a black hole out to the point where spacetime becomes almost flat, and you time-reverse that region alone, without time-reversing the rest of the universe in which it is embedded. You can no longer make statements about the behaviour of the resulting white hole with regard to its environment merely by time-reversing the behaviour of a typical black hole, because you’ve changed the relationship between the black/white hole and its cosmological surroundings.

    For example, surely there is no compulsion for the white hole to emit low-entropy dust, or gas, or bits of companion stars, or even to undergo the reverse of a stellar collapse and disappear, just because that’s the time-reverse of what typical black holes do in collaboration with the rest of the universe. Nor, I think, should it be considered inevitable that a white hole could only be formed by an inverse of Hawking decay (admittedly it’s hard to account for its formation by any process at all, but like Jesse I’m still curious to know how a white hole might behave if we’re given one “for free” somehow, perhaps created in the Big Bang).

    And is it really true that Hawking radiation was derived without any reference to boundary conditions at infinity? I don’t know the answer to this (I’ve skimmed Hawking’s 1975 paper, “Particle Creation By Black Holes”, but I don’t have the background to follow it in detail), so I’m happy to be corrected — but if Hawking radiation actually relies on assumptions about the surrounding universe, then surely the white hole you get by flipping the black hole but not the surrounding universe need not be absorbing time-reversed Hawking radiation and violating the Second Law, it could instead be doing something much more sensible in the context of that surrounding universe.

  16. Jason Dick wrote:
    But since a white hole is the time reversal of a black hole, its entropy is continually decreasing.

    No, my whole argument is that you can’t simply assume that the only possible type of white hole is one that is a mirror of a normal black hole in every respect, including entropy, although obviously such a perfectly reversed black hole must be one physically allowable solution. It would be entirely consistent with T-symmetry if each of the following were allowed solutions to a theory incorporating both GR and quantum effects: black holes with increasing entropy, black holes with decreasing entropy, white holes with increasing entropy, and white holes with decreasing entropy.

    Think of my analogy of a solar system that is the gravitational time-reverse of our own from comment #78. Do you agree that all of the following are compatible with the laws of physics: a solar system with orbits just like ours and entropy increasing, a solar system with orbits just like ours and entropy decreasing, a solar system with orbits that look like the time-reverse of ours and entropy increasing, and finally a solar system with orbits that look like the time-reverse of ours and entropy decreasing? Isn’t it true that a description of a solar system using GR alone would ordinarily only deal with gravitational aspects of the solar system, not things like whether photons were streaming out of the sun or converging in on it, so that it would not distinguish between pairs of solar systems where all the orbits and bodies were identical but the thermodynamic arrow of time was different? (Obviously since all forms of energy curve spacetime you could incorporate solar radiation into a GR description of the solar system, but it’s such a minor contributor to the curvature that I’m pretty sure this isn’t ordinarily done, just like pure GR descriptions of black holes ordinarily don’t bother computing the effects of Hawking radiation on the spacetime curvature.)

    Well, if the notion of white holes is based solely on time-reversing the GR solution that we call a black hole, then unless someone has actually calculated what quantum field theory predicts is going on near the horizon of the white hole spacetime, as has been done with black holes, we can’t assume that the only possible solution is one where you have reverse Hawking radiation, although as I said before, T-symmetry does show that this must be one valid solution. I suppose if the original QFT analysis that showed Hawking radiation being emitted by a black hole was sufficient to prove that this was the only physically allowable thing that could go on near the horizon, that would show you must have reverse Hawking radiation near a white hole. But I doubt the physicists who were deriving Hawking radiation bothered to look for a QFT solution involving reversed Hawking radiation converging on the horizon of a black hole from outside, because probably the only way you could get this would be to impose a future low-entropy boundary condition, which would seem highly unnatural in a realistic cosmological context.

    Also, I think my thought-experiment involving taking a time-reversed simulation of a black hole and then slightly perturbing it shows that it’s unlikely to be true that a white hole must have reversed Hawking radiation converging on it; since getting a simulation to have a reversed thermodynamic arrow requires such precise coordination among all the particles in your initial state, any small perturbation is likely to spoil it and give you a simulation where entropy is increasing as usual. But the perturbation can’t affect anything inside the horizon of the time-reversed black hole, so shouldn’t it continue to behave like a white hole even though on the outside it no longer has time-reversed Hawking radiation converging on the horizon?

    presumably you’d be equally likely to get a black-hole-at-equilibrium as a white-hole-at-equilibrium? Could you even distinguish the two at equilibrium?

    No. This is the entire point of Sean’s argument that you have to resort to specific initial conditions to have a region of the universe where there exists a definite arrow of time: at equilibrium there is none. Any system in equilibrium is invariant under time reversal, and thus a black hole in equilibrium would be indistinguishable from its time reversal, a white hole in equilibrium (say, in an anti-de Sitter universe with no other matter, if I’m remembering correctly that the horizon of anti-de Sitter space acts much like a “mirror” for radiation).

    But neither a black hole at equilibrium nor a white hole at equilibrium shows a thermodynamic arrow of time, and I thought the point of Sean’s argument was just to show that any arrow of time that’s a consequence of thermodynamics must depend on special initial conditions. A white hole and a black hole at equilibrium might potentially be distinguishable in other ways, like the spacetime curvature: in pure GR there is no Hawking radiation, so you can have a solution that looks like a stable black hole with nothing going in and nothing coming out. Are you saying this spacetime is identical to a solution containing only a stable white hole with nothing coming out and nothing going in? I’d like to hear one of the resident GR experts on this site weigh in on this question…

  17. Jesse,

    I don’t qualify as a “GR expert”, but I know this much: on the event horizon of a Schwarzschild (eternal, uncharged, non-rotating) black hole, one half of the interior of the light cone pokes out from the event horizon, and the other half leads into the interior of the hole. The half that pokes out also leads backwards in time, by convention. If you reverse that convention — flip the sign of the t-coordinate — you get a white hole.

    So a black hole and a white hole embedded in the same universe are certainly distinguishable, because you can compare the directions in time of the outgoing light cones, and notice that they are different.

    However, if you have a black hole sitting in a static universe which contains nothing at all (or if you want to account for quantum effects, fill the universe with a heat bath of photons that match the black hole’s temperature), and a white hole sitting in a separate static universe which also contains nothing (or the same kind of heat bath), then those two universes and their contents are physically identical, and it’s meaningless to say that one contains a black hole and the other a white hole. Unless I’m utterly confused, the “black” or “white” label simply describes the relationship between the light cones and some externally defined arrow of time; if there is no such arrow, the label becomes meaningless.

  18. Thanks Greg. So it sounds like it’s plausible that for a closed system in a finite volume with enough mass to form a black hole, if we had a theory of quantum gravity to give us the set of distinct “microstates” making up the phase space, there might be a meaningful distinction between microstates whose macro-description would be something like “a black hole at thermal equilibrium with its surroundings” and microstates with the macro-description “a white hole at thermal equilibrium with its surroundings”. I remember someone mentioned earlier in the comments that the Wikipedia “white holes” article said that Hawking considered white holes and black holes to be the same in certain circumstances, and now that I look at that article it seems that his argument was also based on considering the two at equilibrium:

    In quantum mechanics, the black hole emits Hawking radiation, and so can come to thermal equilibrium with a gas of radiation. Since a thermal equilibrium state is time reversal invariant, Hawking argued that the time reverse of a black hole in thermal equilibrium is again a black hole in thermal equilibrium.[1] This implies that black holes and white holes are the same object. The Hawking radiation from an ordinary black hole is then identified with the white hole emission. Hawking’s semi-classical argument is reproduced in a quantum mechanical AdS/CFT treatment[2], where a black hole in Anti De Sitter space is described by a thermal gas in a gauge theory, whose time reversal is the same as itself.

    Of course, this still wouldn’t address the question of whether, in a closed system out of equilibrium, it is theoretically possible to have either an entropy-increasing white hole or an entropy-decreasing black hole (presumably the entropy-decreasing black hole would be very unlikely in a closed system that lacked a low-entropy future boundary condition, just like any other spontaneous decrease in entropy which is permitted theoretically).

  19. As Brett pointed out, the entropy of a completely specified state is exactly zero. If we assume that there is such a thing as the “wavefunction of the universe” – something I personally have my doubts about, but most quantum cosmologists seem to take for granted – then the entropy of the universe is zero, always: in the early universe, now, and in the far future. So what’s the problem?

  20. Robert (#96), as I understand it entropy is a property of macrostates, not of microstates. So when we talk about the entropy of some state, we mean the log of the number of microstates in the macrostate that contains that microstate. (As discussed above, this makes entropy dependent on how we partition the set of microstates into macrostates.)

    So in that sense, even though the universe is presumably in one particular microstate, the entropy of the universe is only zero if that microstate is the only microstate in its macrostate — that is, if the microstate is distinguishable from all other microstates by its macroscopic properties.
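    To make the macrostate-counting concrete, here is a minimal sketch (my own toy example, not something from the thread): take the microstates to be sequences of N coin faces, coarse-grain them into macrostates labeled by the number of heads, and compute the Boltzmann entropy of a macrostate as the log of the number of microstates it contains.

```python
import math

N = 100  # a microstate is one particular sequence of N coin faces

# Boltzmann entropy of the macrostate "exactly k heads":
# the log of the number of microstates that macrostate contains.
def S(k):
    return math.log(math.comb(N, k))

print(S(0))       # "all tails" contains exactly one microstate, so its entropy is 0
print(S(N // 2))  # the 50/50 macrostate contains about 1e29 microstates (entropy ~66.8)
```

    A macrostate containing a single microstate has entropy log(1) = 0, which is the sense in which a completely specified state has zero entropy; the nonzero entropies we ordinarily talk about only appear once many microstates share a macroscopic description.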

  21. Regarding Brett’s comments above, I’m not so sure he and Thomas are really saying the same thing. Certainly, as Brett said, the entropy is dependent on the partitioning of microstates into macrostates, so the question of “Why was the entropy of the early universe so low?” can be rephrased as “Why is the preferred partitioning of microstates into macrostates one that makes the entropy of the early universe so low?”

    But Thomas seemed to be saying something else. He seemed to say the 2nd law is to be expected, and gave the example of a configuration of billiard balls with random trajectories increasing in entropy. I think this is just the argument that with more high-entropy states than low-entropy states, you’re statistically more likely to move to a high-entropy state. However, you’d also be more likely to start in a state of high entropy.

    For simplicity, let’s pretend our system has only two macrostates, which we’ll call “low entropy” or “high entropy”. If we start in a random state and evolve for some time T, it seems there are four basic possibilities:
    (1) You start in a low entropy state (unlikely), and end up in another low entropy state (unlikely)
    (2) You start in a low entropy state (unlikely) and end up in a high entropy state (likely)
    (3) You start in a high entropy state (likely) and end up in a low entropy state (unlikely)
    (4) You start in a high entropy state (likely) and end up in a high entropy state (likely).

    (Here “likelihood” means the percentage of microstates in that macrostate which increase/decrease in entropy.)

    So this is consistent with the idea that there are equally many paths from low to high entropy (2) as from high to low entropy (3). Nevertheless, from any given initial state (whether low or high entropy), entropy decreasing (or staying at minimum) is less likely than entropy increasing (or staying at maximum).

    So in some sense both Thomas and Sean are correct. As per Thomas: For a given initial macrostate, we expect entropy to increase (or at least stay the same). As per Sean: For a randomly chosen initial microstate, increase and decrease are equally likely. The point is that we have a lot of states with a small “probability” of entropy decrease, and a few states with a large “probability” of entropy increase. (What I mean is there’s a small macrostate with a large fraction of its microstates increasing entropy, and there’s a large macrostate with a small fraction of its microstates decreasing entropy.)

    So if the microstate of the universe was chosen at random, we probably shouldn’t be surprised that it’s in a macrostate where over any particular choice of time T most microstates increase entropy. But before we knew our macrostate, we wouldn’t have expected our particular microstate to increase entropy over any given time T.
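    This counting argument can be checked in a toy model (my own illustration, not from the thread): take bit strings as microstates, coarse-grain by the number of 1-bits, and use a random involution as the reversible “dynamics” — a map that is its own inverse automatically pairs every entropy-increasing transition with its entropy-decreasing reverse, so the global counts match exactly, while the conditional probabilities given a macrostate do not.

```python
import math
import random

n = 10
random.seed(0)
states = list(range(2 ** n))

# Entropy of a microstate = log of the size of its macrostate,
# where the macrostate is "number of 1-bits" (the coarse-graining).
def entropy(s):
    return math.log(math.comb(n, bin(s).count("1")))

# Toy reversible dynamics: a random involution on the state space.
# Every state swaps with a partner, so the map is its own inverse,
# mimicking microscopic time-reversibility.
random.shuffle(states)
step = {}
for a, b in zip(states[::2], states[1::2]):
    step[a], step[b] = b, a

up = sum(1 for s in step if entropy(step[s]) > entropy(s))
down = sum(1 for s in step if entropy(step[s]) < entropy(s))
print(up == down)  # equally many entropy-increasing and entropy-decreasing paths

# But conditioned on starting in the lowest-entropy macrostates
# (all bits 0 or all bits 1), a step almost certainly raises entropy.
low = [s for s in step if bin(s).count("1") in (0, n)]
frac_up = sum(entropy(step[s]) > entropy(s) for s in low) / len(low)
print(frac_up)
```

    The equality up == down holds exactly for any involution, matching the claim that entropy-increasing and entropy-decreasing paths come in pairs; frac_up, by contrast, will almost certainly come out near 1, because the two minimum-entropy microstates nearly always get paired with partners in much larger macrostates.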

    So if by the Second Law of Thermodynamics we mean that entropy increase has a high (Bayesian) probability given that the universe is in some particular macrostate, then it’s not surprising. If by the Second Law of Thermodynamics we mean that entropy increase has a high a priori probability (i.e., for any microstate), then it is surprising.

    Either way, we should definitely be surprised that the initial state of the universe had such a low entropy, but I’m not sure anyone here is disputing this.

  23. Of course, that’s kind of off the top of my head. I’m not so sure my two-macrostate example really generalizes to many macrostates. Also maybe I’m blurring the line between “entropy increasing” and “entropy increasing or staying the same”. As you get higher up in entropy, you can’t really increase much, so even if less than 50% of the microstates in that macrostate decrease in entropy, maybe the expected entropy change is negative. In that case, I guess even for a given macrostate the second law really is a consequence of us being in a low-entropy macrostate. (That is, if by the Second Law we mean that the expected change in entropy is non-negative, rather than that the probability of an entropy decrease is less than 50%. These aren’t the same thing: one is a statement about the probability of an entropy decrease, the other about the entropy change weighted by that probability distribution.)

    That’s all assuming that there is a maximum entropy state of the universe. It seems to me that if there were an endless tower of higher entropy states, then every macrostate might have most of its states strictly increasing in entropy, despite there being a one-to-one correspondence between entropy-increasing microstates and entropy-decreasing microstates. Is there a maximum entropy state of the universe? I don’t have a clue — presumably, there are only so many configurations for the (fixed) amount of energy in the universe, but maybe not if the size of the universe isn’t fixed.
