Arrow of Time FAQ

The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.

So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)

Without further ado:

What is the arrow of time?

The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.

But entropy decreases all the time; we can freeze water to make ice cubes, after all.

Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.

So what’s the big deal?

In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.

And how do we reconcile them?

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.

Wasn’t this all figured out over a century ago?

Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
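Boltzmann’s counting is easy to make concrete. Here is a toy illustration of my own (the box-of-particles setup is an assumption standing in for the egg/omelet example): n particles that can each sit in the left or right half of a box, with the macrostate defined as the number on the left.

```python
# Toy version of S = log W: entropy as the log of the number of microstates
# compatible with a macrostate. (Illustrative numbers only.)
from math import comb, log

n = 100  # number of particles

def entropy(k):
    """S = log W, where W = number of ways to put k of n particles on the left."""
    return log(comb(n, k))

s_ordered = entropy(0)         # all particles on one side: W = 1, so S = 0
s_scrambled = entropy(n // 2)  # evenly mixed: W is astronomically large

print(s_ordered, s_scrambled)
```

The ordered ("egg-like") macrostate is realized by exactly one microstate, while the mixed ("omelet-like") one is realized by roughly 10^29 of them, which is why blindly rearranging atoms overwhelmingly tends to scramble rather than unscramble.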

Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the “reversibility objections” of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the second law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.
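The reversibility objection can be made vivid with a toy model of my own devising (nothing here is specific to real physics): let the microstates be n-bit strings, let the dynamics be a fixed random permutation of them (a bijection, hence exactly reversible), and coarse-grain by bit count. Entropy rises from a special low-entropy start, yet the time-reversed run is an equally valid solution.

```python
# Reversible "laws" with no built-in arrow of time: a fixed random permutation
# of microstates. The coarse-grained entropy is log of the number of
# microstates sharing the same bit count.
import random
from math import comb, log

n = 12                  # bits per microstate
N = 2 ** n              # number of microstates
rng = random.Random(0)
perm = list(range(N))
rng.shuffle(perm)       # forward dynamics: x -> perm[x]
inv = [0] * N
for i, p in enumerate(perm):
    inv[p] = i          # reversed dynamics: x -> inv[x]

def S(x):
    """Coarse-grained entropy of microstate x."""
    return log(comb(n, bin(x).count("1")))

x, history = 0, []      # start in the unique all-zeros (lowest-entropy) state
for _ in range(20):
    history.append(S(x))
    x = perm[x]

# Running the inverse dynamics from the endpoint retraces the whole history,
# entropy decreasing: the "laws" themselves prefer no direction of time.
y, back = x, []
for _ in range(20):
    y = inv[y]
    back.append(S(y))

assert back[::-1] == history
```

The asymmetry in `history` comes entirely from the special choice of initial state, not from the permutation: almost all microstates have close to n/2 bits set, so a generic start would show no entropy increase at all.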

Does inflation explain the low entropy of the early universe?

Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.

Does that mean that inflation is wrong?

Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.

My theory of (brane gases/loop quantum cosmology/ekpyrosis/Euclidean quantum gravity) provides a very natural and attractive initial condition for the universe. The arrow of time just pops out as a bonus.

I doubt it. We human beings are terrible temporal chauvinists — it’s very hard for us not to treat “initial” conditions differently than “final” conditions. But if the laws of physics are truly reversible, these should be on exactly the same footing — a requirement that philosopher Huw Price has dubbed the Double Standard Principle. If a set of initial conditions is purportedly “natural,” the final conditions should be equally natural. Any theory in which the far past is dramatically different from the far future is violating this principle in one way or another. In “bouncing” cosmologies, the past and future can be similar, but there tends to be a special point in the middle where the entropy is inexplicably low.

What is the entropy of the universe?

We’re not precisely sure. We do not understand quantum gravity well enough to write down a general formula for the entropy of a self-gravitating state. On the other hand, we can do well enough. In the early universe, when it was just a homogeneous plasma, the entropy was essentially the number of particles — within our current cosmological horizon, that’s about 10^88. Once black holes form, they tend to dominate; a single supermassive black hole, such as the one at the center of our galaxy, has an entropy of order 10^90, according to Hawking’s famous formula. If you took all of the matter in our observable universe and made one big black hole, the entropy would be about 10^120. The entropy of the universe might seem big, but it’s nowhere near as big as it could be.
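As a sanity check on those orders of magnitude, here is the arithmetic for the supermassive-black-hole case, using Hawking’s formula S/k = 4πGM²/ħc (my back-of-envelope numbers; the mass of roughly four million solar masses for the galactic-center black hole is an assumed input):

```python
import math

# Physical constants (SI units, rounded CODATA values)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def bh_entropy(M):
    """Hawking entropy of a black hole of mass M (kg), in units of Boltzmann's constant."""
    return 4 * math.pi * G * M**2 / (hbar * c)

# The supermassive black hole at the center of our galaxy (~4 million solar masses):
print(f"{bh_entropy(4e6 * M_sun):.1e}")  # of order 10^90, as quoted above
```

Note the M² scaling: merging matter into one big black hole raises the entropy enormously, which is why the one-big-black-hole configuration gives the ~10^120 figure.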

If you don’t understand entropy that well, how can you even talk about the arrow of time?

We don’t need a rigorous formula to understand that there is a problem, and possibly even to solve it. One thing is for sure about entropy: low-entropy states tend to evolve into higher-entropy ones, not the other way around. So if state A naturally evolves into state B nearly all of the time, but almost never the other way around, it’s safe to say that the entropy of B is higher than the entropy of A.

Are black holes the highest-entropy states that exist?

No. Remember that black holes give off Hawking radiation, and thus evaporate; according to the principle just elucidated, the thin gruel of radiation into which the black hole evolves must have a higher entropy. This is, in fact, borne out by explicit calculation.

So what does a high-entropy state look like?

Empty space. In a theory like general relativity, where energy and particle number and volume are not conserved, we can always expand space to give rise to more phase space for matter particles, thus allowing the entropy to increase. Note that our actual universe is evolving (under the influence of the cosmological constant) to an increasingly cold, empty state — exactly as we should expect if such a state were high entropy. The real cosmological puzzle, then, is why our universe ever found itself with so many particles packed into such a tiny volume.

Could the universe just be a statistical fluctuation?

No. This was a suggestion of Boltzmann’s and Schuetz’s, but it doesn’t work in the real world. The idea is that, since the tendency of entropy to increase is statistical rather than absolute, starting from a state of maximal entropy we would (given world enough and time) witness downward fluctuations into lower-entropy states. That’s true, but large fluctuations are much less frequent than small fluctuations, and our universe would have to be an enormously large fluctuation. There is no reason, anthropic or otherwise, for the entropy to be as low as it is; we should be much closer to thermal equilibrium if this model were correct. The reductio ad absurdum of this argument leads us to Boltzmann Brains — random brain-sized fluctuations that stick around just long enough to perceive their own existence before dissolving back into the chaos.
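The “large fluctuations are rarer” point is quantitative. In a toy equilibrium system of my own (100 fair coins standing in for the universe’s degrees of freedom), the probability of fluctuating into a macrostate is proportional to the number of microstates in it, and modest fluctuations beat large ones by enormous factors:

```python
from math import comb

N = 100  # number of coins; each heads or tails with probability 1/2

def prob(k):
    """Equilibrium probability of finding exactly k heads."""
    return comb(N, k) / 2 ** N

small = prob(40)  # a modest downward fluctuation from the 50/50 equilibrium
large = prob(5)   # a huge fluctuation: only 5 heads out of 100

print(small / large)  # the modest fluctuation wins by ~20 orders of magnitude
```

Scale this intuition up from 100 coins to the ~10^88 degrees of freedom in our horizon and the conclusion is the one in the text: anthropic selection would favor the smallest fluctuation compatible with observers, not a whole low-entropy universe.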

Don’t the weak interactions violate time-reversal invariance?

Not exactly; more precisely, it depends on definitions, and the relevant fact is that the weak interactions have nothing to do with the arrow of time. They are not invariant under the T (time reversal) operation of quantum field theory, as has been experimentally verified in the decay of the neutral kaon. (The experiments found CP violation, which by the CPT theorem implies T violation.) But as far as thermodynamics is concerned, it’s CPT invariance that matters, not T invariance. For every solution to the equations of motion, there is exactly one time-reversed solution — it just happens to also involve a parity inversion and an exchange of particles with antiparticles. CP violation cannot explain the Second Law of Thermodynamics.

Doesn’t the collapse of the wavefunction in quantum mechanics violate time-reversal invariance?

It certainly appears to, but whether it “really” does depends (sadly) on one’s interpretation of quantum mechanics. If you believe something like the Copenhagen interpretation, then yes, there really is a stochastic and irreversible process of wavefunction collapse. Once again, however, it is unclear how this could help explain the arrow of time — whether or not wavefunctions collapse, we are left without an explanation of why the early universe had such a small entropy. If you believe in something like the Many-Worlds interpretation, then the evolution of the wavefunction is completely unitary and reversible; it just appears to be irreversible, since we don’t have access to the entire wavefunction. Rather, we belong in some particular semiclassical history, separated out from other histories by the process of decoherence. In that case, the fact that wavefunctions appear to collapse in one direction of time but not the other is not an explanation for the arrow of time, but in fact a consequence of it. The low-entropy early universe was in something close to a pure state, which enabled countless “branchings” as it evolved into the future.

This sounds like a hard problem. Is there any way the arrow of time can be explained dynamically?

I can think of two ways. One is to impose a boundary condition that enforces one end of time to be low-entropy, whether by fiat or via some higher principle; this is the strategy of Roger Penrose’s Weyl Curvature Hypothesis, and arguably that of most flavors of quantum cosmology. The other is to show that reversibility is violated spontaneously — even if the laws of physics are time-reversal invariant, the relevant solutions to those laws might not be. However, if there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve. The trick, of course, is to implement such a strategy in a well-founded theoretical framework, one in which the particular way in which the universe evolves is by creating regions of post-Big-Bang spacetime such as the one in which we find ourselves.

Why do we remember the past, but not the future?

Because of the arrow of time.

Why do we conceptualize the world in terms of cause and effect?

Because of the arrow of time.

Why is the universe hospitable to information-gathering-and-processing complex systems such as ourselves, capable of evolution and self-awareness and the ability to fall in love?

Because of the arrow of time.

Why do you work on this crazy stuff with no practical application?

I think it’s important to figure out a consistent story of how the universe works. Or, if not actually important, at least fun.

161 Comments


  1. Excellent thread! See also:

    Philip Vos Fellman, Jonathan Vos Post, “Time and Classical and Quantum Mechanics and the Arrow of Time,” WP# 01-2005-01, Ongoing Research Papers and Conference Proceedings of the International Business Department, Southern New Hampshire University, IBML Working Papers Series.

    Paper presented at the annual meeting of the North American Association for Computation in the Social and Organizational Sciences, Carnegie Mellon University, June 27-29, 2004.

    Abstract: In thinking about information theory at the quantum mechanical level, our [the authors’] discussion, largely confined to Jonathan’s back yard, often centers about intriguing but rather abstract conjectures. My personal favorite, an oddball twist on some of the experiments connected to Bell’s theorem, is the question, “is the information contained by a pair of entangled particles conserved if one or both of the particles crosses the event horizon of a black hole? It is in this context, and in our related speculation about some of the characteristics of what might eventually become part of a quantum mechanical explanation of information theory that we first encountered the extraordinary work of Peter Lynds. This work has been reviewed elsewhere, and like all novel ideas, there are people who love it and people who hate it. One of the main purposes in having Peter here is to let this audience get acquainted with his theory first-hand rather than through an interpretation or argument made by someone else. In this regard, I’m not going to be either summarizing his arguments or providing a treatment based upon the close reading of his text. Rather, I will mention some areas of physics where, to borrow a phrase from Conan-Doyle, it may be an error to theorize in advance of the facts. In particular, I should like to bring the discussion to bear upon various arguments concerning “the arrow of time.” In so doing, I will play the skeptic, if not the downright “Devil’s Advocate” (perhaps Maxwell’s Demon’s advocate would be more precise) and simply question why we might not be convinced that there is an “arrow” of time at all.

  2. As to Time being an illusion, albeit a persistent one, before Einstein we had McTaggart.

    John McTaggart Ellis McTaggart [1866-1925] was a Fellow of Trinity College, Lecturer in Moral Sciences, and a Nonreductionist. He was the author of “Studies in Hegelian Cosmology. The Philosophy of Hegel” [Dissertation, 1898; 1901; Garland, 1984]. This work explored application of a priori conclusions derived from the investigation of pure thought to empirically-known subject matter; human immortality; the absolute; the supreme good and the moral criticism; punishment; sin; and the conception of society as an organism. McTaggart was controversial for claiming that time was unreal: “The Nature of Existence” [Cambridge University Press, 1921]; “The Unreality of Time” [Mind, vol. XVII].

  3. “Why was the entropy of the early universe so small?” Because it started from nothing (zero entropy). “Why was the size of the early universe so small?” Because it started from nothing.

    🙂

  4. Something I’ve been wondering: is there a relativistic definition of entropy, and of the second law (i.e., something that can be expressed in terms of invariants)? I’m having trouble even deciding what form it would take. It doesn’t even seem to me like the state of an extended system can be a frame-independent concept. References would be welcome.

  5. Pingback: A Waste-Book · My del.icio.us bookmarks for December 4th

  6. “The arrow of time is hot, baby”

    Too right!

    Hya Sean and others,

    Talking of time, well a while ago when I first posted here, I came out with something like, “There’s nothing like the real world” (my first post here) and you know what? That’s all I said!

    I am re-introducing myself to say: I have followed this blog for a while and I like what I see, if you know what I mean, and that’s all right with me, mate! I am not going to post an awful lot, just read.

    I am one of those armchair idiots who, having studied it at an elementary level a few years ago, ends up reading about physics and science as a hobby (but I am actually in love with it, really).

    One good thing to look at is the physics of the brain with regard to the arrow of time. Are there any time arrows in the brain? (I could use more complex wording but…) To what extent could SR be temporally oriented?

    So, now I am just wondering, where has the arrow of time gone while posting here…

    …ah, got it, it’s just here!

    Yours

    Claire

  7. You can turn an omelet into an egg if you feed it to a chicken.

    Nope, doesn’t work: the egg that the chicken makes won’t be made from the molecules that made up the original egg. Rather, some components of the omelet will be used to make the egg, and some will be used for other metabolic purposes. Many components of the omelet will pass through the chicken undigested (since a chicken’s digestive system didn’t evolve to digest eggs). Some components of the new egg will come from other food sources.

  8. Regarding Loschmidt (“How can you get the T-violating Boltzmann equation from T-reversal invariant dynamics?”), see comments 28, 29 (Sean), 35 (Jesse):

    The difference between low-entropy initial conditions and time-reversed initial conditions (evolve low S forward, then time-reverse) is that the former are robust (stable against small perturbations: noise, error, loss of information), while the latter are extremely sensitive to small perturbations. In terms of Liouville’s theorem, the former occupy a smooth volume in phase space (stable under coarse graining) while the latter live in a very highly filamented part of phase space (not stable under coarse graining).

    In his post, Sean allows for the fact that you can “derive it [the 2nd law] under some reasonable-sounding assumptions” but goes on to say that these “reasonable-sounding assumptions are typically not true of the real world”. I fail to understand what this means; in my kitchen these assumptions appear to be satisfied, and I think my kitchen is pretty typical of the real world.

  9. thomas, of course there is a difference between the two sets of states: after all, one is low-entropy, and one is high-entropy! But they are equally likely; they occupy precisely equal volumes in phase space. That’s just Liouville’s theorem. A randomly-chosen microstate is equally likely to be in either set.

    The thing that is untrue in your kitchen is the set of assumptions used to prove the H-theorem, not the Second Law itself. In particular, there certainly are correlations between momenta of the molecules in your kitchen — precisely those correlations that reflect the system’s lower-entropy past, as you yourself just explained. The reason why the Second Law works is not because molecular chaos is a valid description of the molecules in your kitchen, it’s because there is a low-entropy boundary condition in the past. It’s easy to “derive” the Second Law by making assumptions that aren’t true, even if the law itself is.

    Sorry to harp on you, but you are emphasizing exactly the mistakes that many people have been making for many decades, and we should have moved past them by now.

  10. Count Iblis:

    Yes, I think a model in which the entropy is a minimum at some ‘time’, then increases in both (coordinate) time directions away from this — so that observers see the AOT pointing away in both ‘directions’ — is a very interesting one. This is in fact part of the core of what Sean thinks (as I understand it; the other part being that the maximum possible entropy of the universe is infinite so that it can and does increase indefinitely without reaching equilibrium). For an extensive discussion of this idea and models that employ it, you may want to look at this review article that I just posted.

  11. efp:
    No, entropy is not Lorentz invariant, nor is temperature, to which it is the thermodynamic dual. What is Lorentz invariant, in quantum field theory, is the quantum vacuum; the question of whether there really are quantum fluctuations is problematic, but if we take it that there are quantum fluctuations as well as thermal fluctuations, we should be able to introduce a Lorentz invariant “quantum entropy” as a thermodynamic dual to Planck’s constant — which on the view I take in my (journal published, see my web-site) papers is a measure of quantum fluctuations, just as temperature is a measure of thermal fluctuations.

    If we introduce independent measures of Lorentz invariant and Lorentz non-invariant entropy, the ways in which they affect measurement of physical processes when both measures of entropy are non-zero are non-trivial.

    See also the second comment in my post 24 above, which did not lead to further discussion at the time.

    If you find references to a non-trivial Lorentz invariant definition of entropy, please let me know. I don’t know of any, and referees have not yet pointed out any either (though my papers have probably not yet talked about entropy explicitly enough to excite referee comments on the existing literature that I ought to have read).

  12. Sean-

    If Thomas’s argument is fallacious, then so is yours. Here’s why: Consider the specific state of our universe; what is its entropy? The answer is zero, because the entropy of any completely characterized state is zero. In this regard, our universe is exactly as likely as what you would call a “high entropy universe.” So our universe is no less likely than any other. You are identifying our particular universe as unlikely because it is part of a macroscopic ensemble that, when measurements are coarse-grained, has low entropy. But once you introduce coarse-graining, Thomas’ stability argument is absolutely correct. The coarse graining contains information about what basis you are using to characterize the entropy. It’s true that Thomas’ coarse graining presupposes that there is lower entropy in the past, but so does yours. The “correct” coarse graining to use is really determined by what uncertainties there are in our measurement procedures, and the existence of such procedures is crucially tied to the low entropy of the past.

    There is a deep question here about the difference between statistical/informational entropy and thermodynamic entropy. The subtle distinction between them has come to the fore recently in discussions of whether there is a fundamental upper bound on the entropy in terms of the viscosity for any substance. The answer to the question is, for the Boltzmann entropy, “no.” A system can have its entropy made arbitrarily high by adding new uncertainties in its composition. However, the statistical entropy derived from this is not the same as the thermodynamic entropy that Clausius would have used to characterize the system.

    I don’t think either Sean or Thomas is completely right or completely wrong, but both are prating too much about the elephant.

  13. Sean, thanks for your answers (post 21) to my questions (post 17). With regard to (1) and (2), I guess I would think that so long as “That’s just the way it is” is a possible answer, we’d need an experimental test to distinguish any model from the possibility that there is no underlying explanation.

    That is, it’d be nice if we could show some evidence that that model was a better explanation than just saying that’s the way it is. Otherwise, why believe in the model? As you point out in your posts on religion, saying “If A is true it would explain B” is only a good argument for believing A if we have a reason to believe B should have an explanation.

    Of course, I suppose you have to find the model before you can figure out how to test it — or at least find some general features such a model should have that are testable.

  14. Claire,

    One good thing to look at is the physics of the brain with regard to the arrow of time. Are there are any time arrows in the brain?

    From one neophyte to another, the arrow of time for the brain is from past events to future ones, while the arrow of time for the mind, since it records these events, is from future potential to past circumstance. Think in terms of how fast what we write recedes into the past….

  15. Brett and thomas– I think I must not be making myself clear, because the claim I am making is so obviously true that I can’t believe anyone can both understand it and disagree with it. I am not claiming that the Second Law isn’t true, or that entropy doesn’t increase in our kitchens. I am not claiming that Boltzmann’s assumption of molecular chaos (no correlations between particle momenta) doesn’t allow you to derive the H-theorem. I am not claiming that the Boltzmann equation, or the entire apparatus of kinetic theory, doesn’t do a good job at explaining real physical phenomena.

    What I am claiming is that Boltzmann’s assumption of molecular chaos, which is used in the proof of the H-theorem, is not true in the real world. (Or similar statements concerning other attempts to “derive” the second law.) There certainly are correlations in particle momenta, and everyone agrees that there are — if there weren’t, the entropy would have been higher in the past. Which it wasn’t.

    Boltzmann’s arguments “work” (in the sense that you derive equations that seem to correctly predict the behavior of real gasses) because there is no special boundary condition in the future. But that doesn’t mean that his arguments are “right,” in the sense of providing the actual reason why entropy increases.

    Entropy increases because it was very low in the early universe. Molecular chaos is completely beside the point.

  16. Sean-

    I specifically said that you weren’t completely wrong. It’s absolutely true that entropy is increasing because it was low to start with. But your argument that boundary conditions with entropy decreasing are just as natural as those with entropy increasing doesn’t parse. Entropy is entirely a product of coarse graining, and there is not just one possible coarse graining. Rather, how we coarse grain is a product of what measurements we can make–what information we can extract from a system.

    If we were Boltzmann brains, the very fact that we were extracting information would mean we would always coarse grain in a fashion to make it appear that entropy is increasing. Since we are not such ephemeral fluctuations, there really is a question of why entropy was so low to begin with–or why some coarse grainings are strongly preferred. But you can’t sweep the issue Thomas raises under the rug by claiming that your preferred coarse graining is natural while his preferred initial boundary condition is not.

  17. Pingback: Sean’s experimental science in a space he can’t access « Society with Jimmy Crankn

  18. Miller (#68):

    In the spacetime geometry of a black hole, at the event horizon the only timelike vectors that point radially outwards also point backwards in time (in the sense defined as “backwards” for the external universe). So to escape from a black hole, you either have to travel along a spacelike vector (i.e. travel faster than light), or you have to travel backwards in time — which doesn’t violate any physical laws, but is essentially impossible for thermodynamic reasons. (By “travel backwards in time”, I don’t mean jump in some magic machine and emerge in the past, I mean experience everything along your world line backwards, remembering what other people consider to be the future. This is physically possible in principle, but the environmental boundary conditions make it impossible in practice.)

    The equations of general relativity also permit a complete time-reversal of the black hole spacetime, known as a white hole, in which you would have no choice at the event horizon but to travel outwards. The reasons there are (very probably) only black holes rather than white holes in our universe are ultimately thermodynamic in nature, related to all the other aspects of the arrow of time discussed on this thread.

  19. Greg Egan wrote:
    The reasons there are (very probably) only black holes rather than white holes in our universe are ultimately thermodynamic in nature, related to all the other aspects of the arrow of time discussed on this thread.

    Is it guaranteed to be true that a “white hole” would have to look like the reverse of a black hole in all respects, including quantum phenomena like Hawking radiation? Obviously it must be possible to have such a completely time-reversed black hole just by T-symmetry, but I wonder if there are clear arguments in “white hole thermodynamics” that would rule out a different kind of white hole that was increasing the entropy of the region it was sitting in rather than decreasing it. If it was possible to have an entropy-increasing white hole then you’d need additional arguments to explain why we don’t see any, but if a white hole would require photons from its surroundings to converge on its event horizon as time-reversed Hawking radiation, then I suppose the absence of white holes could then be explained on thermodynamic grounds alone.

    Thinking along these lines, it’s interesting to consider the argument made by Neil B. in post #22 about “intervening” in a time-reversed world (to make this slightly less fantastical, consider a giant supercomputer simulation of a given universe in which we run it for a while, take some later state and then reverse the momenta of every particle, then evolve the simulation forward and see the arrow of time running backwards–what happens if you then perturb the simulation at some point during its evolution?) If we made such an intervention in the neighborhood of a white hole, then presumably the perturbation would cause the arrow of time outside the white hole to flip to the “normal” direction again, but what would happen to the white hole itself? Our perturbation couldn’t possibly have any effect on anything inside its event horizon, since nothing can enter a white hole event horizon, so everything inside the horizon would presumably carry on in its usual time-reversed fashion, and the white hole would continue to spit out matter rather than pull it in, yet outside the white hole we wouldn’t see time-reversed Hawking radiation. This would seem to be an argument in favor of the notion that you could have an entropy-increasing white hole, but obviously it’s not too rigorous so I’m not sure.

  20. Jesse (#71):

    My original comment about white holes being improbable was meant on a purely classical level. One problem for a classical white hole (in our own particular universe) is explaining where it’s supposed to have come from — did it just appear as part of the Big Bang?

    I know almost no QFT, so my guesses about Hawking radiation and white holes could be way off the mark, but for what it’s worth my hunch is that the external universe’s arrow of time is part of the reason a black hole emits Hawking radiation, and that a white hole in our universe would actually emit thermal radiation of the same temperature. Temperature is unchanged by time-reversal, so a time-reversed black hole should yield a white hole of the same temperature — and unless we’re time-reversing the whole universe, there’s no reason for the white hole to be lowering the entropy of its surroundings.

    The Wikipedia article on White holes attributes an argument that sounds a bit like this to Hawking, going even further and suggesting that (once you account for quantum effects) “black holes and white holes are the same object”. I haven’t read what Hawking actually wrote, but this seems to be implying that the whole distinction between emitted Hawking radiation and infalling objects is a thermodynamic one, tied to the arrow of time in the external universe.

    Hopefully an expert who actually knows about this stuff will comment …

  21. Greg’s comments are very interesting.

    From a conceptual viewpoint, and I believe in fact as well, Wikipedia’s comment that “black holes and white holes are the same object” is completely correct.

    The key is the geometry of the system, and the coordinates of observation one happens to select within that geometry. We measure particulate existence within 3-space; however, in an absolute sense, the observer feels himself/herself to be at the center of the geometry, and sees only “inward” and “outward”, both at 360 degrees…from the extreme macroscopic to the sub-microscopic.

    What is a “Big Bang” at the astronomical antipode, becomes “photons” at the sub-microscopic antipode. Supermassive “Black Holes” at the macroscopic antipode become massed singular space at the quantum Planck Realm level of scale, but the entire universal system is interlinked and quasi-static…it is permanently existing.

    What we observe as “Time” is probably a general, extremely gravitationally time-dilated proper-time pulse of the universe. Hawking said: “The universe just is”. Einstein said: “Time is an Illusion”. I don’t completely agree with Einstein…I think Hawking said it a little better, because time and existence, even if they are “illusory”, are VERY real…not philosophical at all…unless freezing to death or dying in an airplane crash are “philosophical”. From a quantum perspective, the universe’s existence depends on its observation.

    A last comment on entropy: the application of photons…electromagnetic energy…to the biosphere of the Earth has resulted in the development of organic, informational complexity. Thus we can observe the influence of sub-microscopic white holes right here on Earth, having a localized downward effect on entropy. However, in the part of the universe we observe, the general drift of thermal entropy is upward while informational entropy (complexity, both inorganic and organic) decreases.

    It is very important we NOT regard the universe as a “void”…devoid of complexity except at certain very limited coordinates. Particle groups, baryonic diversity and proportion, as well as the behavioral characteristics of baryonic matter, are kinds of informational complexity which uniformly pervade the universe from one side to the other…and make possible observational organic complexity’s very existence.

    A very interesting thread!

  22. Greg, Jesse, Sam, or anyone: What do you think of the thought experiments I put forth in #22? That sort of macroscopic what-if raises the question: if the flow of time could even be reversible in principle, how can we have definitely “been through” a real past? (Aside from the question of how exactly we could know it.)

  23. Sorry, I’ve been away from the internet for a while. Brett, I don’t know exactly what you mean by “the issue Thomas raises,” so I don’t know how to respond. Thomas seemed to object to the claim that the origin of the 2nd Law is to be found in low-entropy initial conditions rather than in a natural tendency of trajectories to increase in entropy, but you seem to agree with that, so I’m confused. There is no such natural tendency, since no matter what coarse-graining you choose, there is an equal number of trajectories that decrease their entropy and trajectories that increase their entropy. (Where the entropy of a state is defined by its macroscopic equivalence class under the coarse-graining, which is a perfectly sensible thing to do.)

    The issue of “who decides how we coarse-grain” is of course an interesting one, but I don’t think it’s directly relevant here. As a matter of practice, people do not choose weird coarse-grainings in which an ice cube melting in water decreases in entropy, although of course they could. In the coarse-graining that everyone actually uses, our early universe had a very low entropy — much lower than it needed to have, by any known criterion — and that’s a fact that needs to be explained by cosmology. I would personally bet that our notion of the “most useful” coarse-graining can be derived as a consequence of the Hamiltonian of the world, but I haven’t been following the research along those lines.
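    The counting claim above (that under reversible dynamics, for any fixed coarse-graining, entropy-increasing and entropy-decreasing transitions come in equal numbers) can be checked by brute force in a tiny toy model. The Python sketch below is purely illustrative: a two-particle integer lattice gas whose dynamics is time-reversal symmetric, coarse-grained by how many particles sit in the left half of the box, with the entropy of a microstate defined as the log of its macrostate’s multiplicity.

```python
from math import log
from itertools import product
from collections import Counter

L = 6
VELS = [-2, -1, 1, 2]  # closed under negation, so velocity reversal is a symmetry

# Microstate: a pair of distinguishable particles, each an (x, v) tuple.
single = [(x, v) for x in range(L) for v in VELS]
states = list(product(single, repeat=2))

def step(s):
    # One tick of exactly reversible dynamics on a periodic box.
    return tuple(((x + v) % L, v) for x, v in s)

def macro(s):
    # Coarse-graining: how many particles are in the left half of the box.
    return sum(1 for x, v in s if x < L // 2)

# Entropy of a microstate = log(multiplicity of its macrostate).
mult = Counter(macro(s) for s in states)
def S(s):
    return log(mult[macro(s)])

up   = sum(1 for s in states if S(step(s)) > S(s))
down = sum(1 for s in states if S(step(s)) < S(s))
assert up == down and up > 0
```

    The pairing works because mapping a state to its time-reverse after one step is a bijection on the state space that flips the sign of the entropy change; nothing here depends on which coarse-graining you pick, as long as it is insensitive to the direction of the velocities.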

Comments are closed.