Latest Declamations about the Arrow of Time

Here are the slides from the physics colloquium I gave at UC Santa Cruz last week, entitled “Why is the Past Different from the Future? The Origin of the Universe and the Arrow of Time.” (Also in pdf.)

Time Colloquium

The real reason I’m sharing this with you is because this talk provoked one of the best responses I’ve ever received, which the provokee felt moved to share with me:

Finally, the magnitude of the entropy of the universe as a function of time is a very interesting problem for cosmology, but to suggest that a law of physics depends on it is sheer nonsense. Carroll’s statement that the second law owes its existence to cosmology is one of the dummest [sic] remarks I heard in any of our physics colloquia, apart from [redacted]’s earlier remarks about consciousness in quantum mechanics. I am astounded that physicists in the audience always listen politely to such nonsense. Afterwards, I had dinner with some graduate students who readily understood my objections, but Carroll remained adamant.

My powers of persuasion are apparently not always fully efficacious.

Also, that marvelous illustration of entropy in the bottom right of the above slide? Alan Guth’s office.

Update: Originally added as a comment, but I’m moving it up here:

The point of the “objection” is extremely simple, as is the reason why it is irrelevant. Suppose we had a thermodynamic system, described by certain macroscopic variables, not quite in equilibrium. Suppose further that we chose a random microstate compatible with the macroscopic variables (as you do, for example, in a numerical simulation). Then, following the evolution of that microstate into the future, it is overwhelmingly likely that the entropy will increase. Voila, we have “derived” the Second Law.

However, it is also overwhelmingly likely that evolving that microstate into the past will lead to an increase in entropy. Which is not true of the universe in which we live. So the above exercise, while it gets the right answer for the future, is not actually “right,” if what we care about is describing the real world. Which I do. If we want to understand the distribution function on microstates that is actually true, we need to impose a low-entropy condition in the past; there is no way to get it from purely time-symmetric assumptions.
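To make the point concrete, here is a minimal numerical sketch (mine, not from the original post; it assumes non-interacting particles in a periodic box and a binned, coarse-grained spatial entropy, which is enough to show the effect): pick a random microstate compatible with the non-equilibrium macrostate “all particles in the left half,” then evolve that same microstate both forward and backward in time.

```python
import numpy as np

# Toy model (an illustration only): free streaming, no collisions, periodic box.
rng = np.random.default_rng(0)
N, L, NBINS = 100_000, 1.0, 20

# Macrostate: every particle in the left half of the box, thermal velocities.
# Pick one random microstate compatible with that macrostate.
x0 = rng.uniform(0.0, L / 2, N)
v = rng.normal(0.0, 1.0, N)

def coarse_entropy(x):
    """Shannon entropy of the binned (coarse-grained) position distribution."""
    counts, _ = np.histogram(x % L, bins=NBINS, range=(0.0, L))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Evolve the same microstate toward the future (+t) and toward the past (-t).
for t in (0.0, 0.2, 0.5, 1.0):
    print(f"|t| = {t:3.1f}:  S(+t) = {coarse_entropy(x0 + v * t):.3f}   "
          f"S(-t) = {coarse_entropy(x0 - v * t):.3f}")
```

The coarse-grained entropy climbs toward its maximum in both time directions, which is the whole point: a randomly chosen microstate “retrodicts” an entropy increase into the past just as readily as it predicts one into the future, so matching the real universe requires an extra low-entropy condition on the past.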

Boltzmann’s H-theorem, while interesting and important, is even worse. It makes an assumption that is not true (molecular chaos) to reach a conclusion that is not true (the entropy is certain, not just likely, to increase toward the future — and also to the past).

The nice thing about stat mech is that almost any distribution function will work to derive the Second Law, as long as you don’t put some constraints on the future state. That’s why textbook stat mech does a perfectly good job without talking about the Big Bang. But if you want to describe why the Second Law actually works in the real world in which we actually live, cosmology inevitably comes into play.

75 thoughts on “Latest Declamations about the Arrow of Time”

  1. Jim Graber wrote:

    The above Stosszahlansatz doesn’t look obviously time-asymmetric to me. In fact, changing t to -t and interchanging primed and unprimed p variables seems to give a symmetrical result. What am I missing?

    Hey, it’s a fun puzzle! I don’t want to spoil it.
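    For readers who want the standard form in front of them (a sketch in conventional notation; details vary by textbook), the Boltzmann collision term with the molecular-chaos factorization reads

    $$\left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}} = \int d^3p_2\, d\Omega\; \sigma(\Omega)\,|\mathbf{v}_1-\mathbf{v}_2|\left[\,f(\mathbf{p}_1')\,f(\mathbf{p}_2') - f(\mathbf{p}_1)\,f(\mathbf{p}_2)\,\right],$$

    where the Stosszahlansatz is the step of replacing the two-particle distribution $f_2(\mathbf{p}_1,\mathbf{p}_2)$ by the product $f(\mathbf{p}_1)f(\mathbf{p}_2)$. The bracket does look symmetric under the swap Jim describes; the puzzle is to locate where an asymmetry nevertheless sneaks in.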

  2. Confused, isn’t any particular microstate of fixed (zero) entropy? The only way to get entropy to increase when evolving over time is to coarse-grain. I know the point you’re trying to explain, but I don’t agree with the explanation.

  3. I’m really fascinated by the arrow of time and its relation to gravity and cosmology. In the future I plan to study this more. In the past, too!

    So do I… 🙂

    Gravity dominates on large scales. What is the correct statistical mechanics of gravitating systems? A tricky and fascinating issue. For those interested, see section 4.2 of my paper, where a brief review is offered.

    http://arxiv.org/abs/astro-ph/0604544

    Best,
    Christine

  4. Dear John Baez —

    Thanks very much for your attentive reply (#50) to my posted question (#44) about entropy in the early thermal universe. However, I can’t see how what you’ve written can be correct, and so I must still be missing something. You write:

    “Briefly: while most of the entropy of the early universe is in radiation and hot gases, and this stuff is close to thermal equilibrium if one neglects gravity, the early universe is very far from equilibrium if one takes gravity into account! Gravitating systems increase their entropy by clumping up and getting hotter! As the universe ages, this is what happens.”

    Of course, gravity was operating during the hot early universe; so if entropy can be increased in the presence of gravity by clumping, then why didn’t the hot (ie ultrarelativistic) early universe clump spontaneously? You might say that the thermal phase was so short that not much clumping could have happened, but I don’t think that gets at the essence of the question.

    Can radiation clump? I am but a Bear of Little Brain, but my intuition certainly says no. Without the complication of an expanding Universe, suppose we just look at a photon gas in a box and let it evolve for an arbitrarily long time. Will it ever clump gravitationally? Photons can have their trajectories bent gravitationally, but they are _never_ bound by any gravitational concentration (short of a black hole), and so they can’t “collect” in an over-density the way massive particles can. There are thermal fluctuations in density in the photon gas, and conceivably these could be amplified by gravity, but only — I would guess — to a very small degree. So I would expect that the long-term/final/equilibrium/maximum-entropy state of a photon gas is always quite close to being spread uniformly. What am I missing here?

    We can ask the question the other way: if we started with a clumpy photon gas would it, in the presence of gravity, spontaneously clump further or un-clump? If we just imagine the border between regions of different density, then since the photons are all unbound I would expect more to cross from high to low than vice versa, and so I would expect the spontaneous trend would be toward unclumping. And since anything that happens spontaneously should correspond to an increase in entropy, it seems to me that the photon gas should be at maximal entropy with minimal clumping. If I’m off the beam here, can you tell me how?

    Thanks; regards,

    Paul Stankus

  5. Paul– You’re largely on the right track. The clumping/non-clumping business does not actually have a cut-and-dried relationship to entropy. Some things (like matter) will clump in the right circumstances; others (like radiation) will not. It’s true that entropy increases as inhomogeneities grow in a matter-dominated universe, but the full story is richer than that. (And we don’t understand it as yet.)

    The secret is that, in addition to clumpy vs. uniform, general relativity also allows for the overall density of particles to change as space expands or contracts. (Something that can’t happen with a non-gravitational box of gas.) The very early universe is extremely low entropy when gravity is taken into account. You know that even if you don’t have a rigorous definition of the entropy, because you know that the early universe quickly evolves into a very different-looking state; truly high-entropy states are static. The entropy goes up as the universe expands, and the universe becomes clumpier. But the entropy continues to go up as the universe continues to expand, and eventually the tendency towards clumpiness reverses. Black holes, for example, eventually evaporate into the increasingly empty regions around them. The truly high-entropy configuration is just empty space, which is pretty stable.

  6. 57: ‘The truly high-entropy configuration is just empty space, which is pretty stable.’

    Which sort of begs the question, what is the mechanism by which quantum fluctuations in empty space can translate to the classical level and lead to a new space-time topology, pinched off from the proposed future of our universe?

  7. But if the energy density becomes low enough, and the universe falls into a “non-gravitational” state of uniform energy distribution, thermal equilibrium, and maximal entropy, then uncertainty is supposed to intervene, enabling the universe to regain low entropy due to the spontaneous formation of ordered matter that will inevitably occur as quantum mechanics lowers the random element in the behavior of matter.

    As the age of the universe approaches infinity, the probability increases that a cherry 1968 Pontiac Firebird will pop spontaneously into existence, and make mine a convertible, please… 😉

    Allegedly, all non-paradoxical states will be attained as the age of the universe approaches infinity… yeah, right, as another flawed theory gets extended to reveal its inherent absurdity.

  8. Hi Sean —
    Thanks very much for your keen reply (#57) to my questions (#44, #56). I can tell you that I am very much in sympathy with your statement

    “You know that even if you don’t have a rigorous definition of the entropy, because you know that the early universe quickly evolves into a very different-looking state; truly high-entropy states are static.”

    This is a principle I try to take advantage of all the time: maximum entropy is achieved at equilibrium, and equilibrium is when things look macroscopically static. The quick implication is that any kind of spontaneous macroscopic evolution probably indicates that entropy is increasing. But this is not strictly the case: entropy is increasing only when the evolution is both _spontaneous_ and _irreversible_.

    The evolution of the ultrarelativistic thermal phase of the early universe, however, certainly _is_ reversible. You can see this just by imagining an old-fashioned closed Friedmann universe which contains only a photon gas: it expands, and then recollapses, and the two mirror each other. Entropy per co-moving volume is conserved, and so is the total entropy in the closed universe. It’s like a massive, ideal piston falling into a cylinder filled with zero-viscosity gas, or onto a perfect spring: it goes down and then comes up, thermal equilibrium is always maintained and the motion then reverses itself.

    I don’t have to demonstrate this reversibility though, since you know we already assume it. The cornerstone assumption in describing the early thermal universe (cf. Kolb and Turner) is that entropy per co-moving volume is conserved, and this alone means that the evolution is reversible. So even though I welcome your viewpoint generally, I stand by my original claim that the evolution of the ultrarelativistic thermal phase, including the presence of gravity controlling the overall expansion, is _not_ associated with an increase in entropy even though it is a spontaneous evolution. It’s not until matter domination that the universe evolves into a “different-looking state” and entropy then starts to increase.
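    For concreteness, here is the bookkeeping behind that cornerstone assumption (a quick sketch using the standard radiation-era relations, nothing beyond what Kolb and Turner assume): the entropy density of a photon gas scales as $s \propto T^3$, and adiabatic expansion gives $T \propto 1/a$, so

    $$ S \propto s\,a^{3} \propto (aT)^{3} = \mathrm{const}. $$

    Nothing in that relation distinguishes expansion from contraction, which is the sense in which the smooth radiation-dominated evolution is reversible.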

    OK, over to you; regards,

    Paul

  9. Sure, I agree with all that. Of course it’s only approximate; more carefully, there are matter perturbations in the growing mode but not in the shrinking mode, violating reversibility. But it’s a pretty good approximation.

  10. Sean & Paul,

    The way I look at this is that the expansion opens up the phase space, so that what might have been equilibrium (in terms of the matter/radiation degrees of freedom) departs from equilibrium in the new, larger volume — even while the entropy increases. Thus we get non-equilibrium chemistry, for example.

    I think that if we were discussing *all* degrees of freedom — matter and gravitation — this would somehow correspond to ‘exciting’ new gravitational degrees of freedom that lay ‘dormant’ when the universe was smaller. Now consider the time-reverse. Certainly we can re-shrink the universe, but we can’t put the gravitational degree of freedom ‘back in the bottle’; rather, we find a very inhomogeneous matter and curvature distribution corresponding to higher entropy.

    Now all of these are just words because, as Sean noted, we don’t have any general understanding of the thermodynamics of gravity (nor, I suspect, is it possible to neatly partition the degrees of freedom between matter and gravity). But that’s the way I, at least, think about it.

  11. I keep seeing these bullhorn diagrams, like pages 6 and 7 of Sean’s presentation, and they give me a headache, so I seek clarification. Since the horns curve, we must be thinking of a dynamic situation. Here’s one:

    Consider a hunk of gas at time Tz recently subject to a disturbance, such as sudden removal of a membrane between two boxes of different gases at STP. Knowing classical physics, we compute the molecules’ spatial and momentum distributions at Tz. We run a computer simulation of the motions of N randomly selected molecules over time T, obtaining some resulting microdistribution. We can expect this to well represent the real-life macrostate at Tz + T, and we can therefore expect the entropy computed on the basis of our simulation results to fairly approximate actual entropy at Tz + T.

    Now, however, look at the computer run as a potential postdiction of the distribution at Tz – T. Given time symmetry of the laws of motion, this is a valid postdiction of a microdistribution at Tz – T that could have resulted in the Tz distribution. But, given that things are changing, it is unlikely to represent (be consistent with) the macrostate that really existed at Tz – T, and therefore it won’t give a correct entropy result.

    So why the heck would anybody take seriously such a flawed estimate of past entropy?

    Prediction POV: particles are as likely to follow simulation paths as they are to do anything else.

    Postdiction POV: particles can’t follow simulation paths from Tz because they can’t go backward in time. They are unlikely to follow the time-reversed paths from Tz – T because they are unlikely to start with the simulation result; we gave them a bad initial distribution.

    Maybe our assumptions can’t be time-symmetric because real particles (e.g. molecules) go only one direction in time.

  12. I’m having a serious problem grasping how we can dispense with nonlocality at will, like it is a sort of chimera you may append to the map as you wish. Having said that, the rest of you nerds have gone calculatingly over my head! I do know that gravity is the residue of time though….and all time IS ONE. I’ve experienced that, so take it from the horse’s mouth, duration is an illusion!

  13. I’d like to hear some commentary on the role of QM in reversibility, especially the non-reversible evolution of the wave function between “emission” and “detection.”

  14. Sean’s experience is a good example of why 99% of seminars are a waste of time. First, 98% are just boring. Sean’s talk belongs to the next 1% — interesting, and well-presented, but still there was the obligatory idiot in the audience who is not afraid to get up and expose his mind-boggling ignorance for all to see. I don’t think that Sean got anything at all from giving this seminar — the feedback consisted of sheer stupidity. The idiot in the audience is clearly beyond teaching. So what’s the point?

  15. Hi Sean —
    Picking up from #60 and #61, it seems that we can agree that for a radiation-only universe the smoothed-out state appears to be macroscopically static, and all its further evolution is basically reversible. So it appears, at first glance, to be an equilibrium/maximum-entropy state.

    The puzzle, then, is why the radiation-only universe has an entropy so enormously lower than that of a matter-dominated universe of the same overall energy density. For radiation-only, the maximum entropy seems to be just that of the classical photon gas; for matter-dominated, the maximum entropy state is when all the matter is stuffed into black holes (plus a cool residual atmosphere). It’s as though the radiation-only universe would _like_ to cross over into being full of black holes, but there’s no “gateway” for doing so [short of some kind of mondo-high density fluctuation; but I’m not sure that this can even happen with photons since EM fields don’t like to be compressed].

    You can see this “gateway” idea even in simple examples. Take half a solar mass of iron atoms in a one-light-year-square box; what’s its entropy? If we think ahead to the final state of its evolution, it will be a warm white dwarf in absorption/evaporation equilibrium with an atmosphere. With ten solar masses in the same box, though, the final state is very different: a large white dwarf collapses into a neutron star (supernovae, essentially; type Ic?), which accretes until it shrinks into a black hole; the black hole eventually swallows all the matter and comes to equilibrium with a cool gas of radiation and maybe the odd electron or two. These two final states are of _enormously_ different entropy, though their initial conditions are quite similar. The 0.5 M_Sun case is not “through the gate”, but the 10 M_Sun case is; so S(E,V) is highly discontinuous over E.
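    To put rough, standard numbers on how different those two endpoints are (an order-of-magnitude aside, not part of the original comment): ordinary matter carries very roughly a unit of entropy per baryon, while the Bekenstein–Hawking entropy of a black hole grows as the square of its mass,

    $$ S_{\mathrm{matter}} \sim k_B\,\frac{M}{m_p} \sim 10^{57}\,k_B\left(\frac{M}{M_\odot}\right), \qquad S_{\mathrm{BH}} = \frac{k_B\,c^{3}A}{4G\hbar} \sim 10^{77}\,k_B\left(\frac{M}{M_\odot}\right)^{2}, $$

    so a configuration that makes it “through the gate” into a black hole gains something like twenty orders of magnitude in entropy.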

    So this is my picture of our expanding universe: the initial radiation-dominated phase fills out to the maximum entropy that it can find, namely being spread very evenly. If there were no matter (or, more exactly, no finite chemical potential for a massive species) then this would be the equilibrium state for all time. But once the universe passes to being matter-dominated and matter can start to clump, the “gateway” is opened and entropy can start to increase, irreversibly, up to its “proper” black-hole-dominated value.

    With this in mind, the question that you and Sir Roger start with, namely “Why was the initial/early universe so low in entropy?” can be re-cast to “Why was the initial/early universe not ‘through the gate’?” or, why was it a radiation-dominated phase?

    Does this make sense? Let me know what you think.

    Best regards,

    Paul

    PS Related, but something of a side question: Non-crazy people talk about the possibility of real, live micro-black holes being created — soon! — in LHC collisions if the universe really has large extra dimensions. If this were true, then wouldn’t the early universe have been thick with black holes whenever the temperature was above the TeV scale? Does this have any interesting implications for, say, baryogenesis in the thermal phase?

  16. Just a thought here…

    I’m a layman and an idiot, but this thought makes some sense to me.

    Perhaps entropy increases toward the future due to the number of possibilities.

    There is only one possibility in the past; we know what it is, the choices are small, and randomness has no effect. Yet as we look to the future, it is so chaotic and unpredictable, due to the number of random decisions of individual entities and random motions of matter, that the entropy would surely increase.

    Am I making any sense?

  17. PS: just so you know, Aaron, “idiot” comes from the Greek, meaning “private person.”
    Me t00!
    The problem with quantum dynamics/physics is that there aren’t ENOUGH idiots… or maybe it’s just Feynman’s death, but I digress.

  18. Pingback: The Lopsided Universe | Cosmic Variance

  19. Sean wrote:

    The nice thing about stat mech is that almost any distribution function will work to derive the Second Law, as long as you don’t put some constraints on the future state. That’s why textbook stat mech does a perfectly good job without talking about the Big Bang. But if you want to describe why the Second Law actually works in the real world in which we actually live, cosmology inevitably comes into play.

    The explanation of entropy and the Second Law generally accepted by physicists and chemists relies on initial, not boundary conditions, and makes no assumptions about constraints on the future state, contrary to what you say–it is actually the other way around. It may not be self-contained, but it seems to be the most comprehensive one we have. It runs like this:

    1. The dynamical laws are indeed time symmetric, but they are to be supplemented by initial conditions. In real life, the initial conditions never correspond to pure states (e.g., states accurately described by a wave function); they always correspond to clusters of states and are described by well-behaved probabilistic measures and density matrices. Statistically, all ‘impure’ states that can be prepared result in the needed time arrow. It would be an improbable statistical fit to prepare a fine-tuned state resulting in the reduction of entropy for a closed system (reverse time arrow: “evolving that microstate into the past will lead to an increase in entropy”).

    So there is no need “to impose a low-entropy condition in the past”; what is imposed is any reasonable initial condition without any reference to its entropy.

    2. By its very nature, entropy can be well defined only locally and can encapsulate only short range interactions (like intermolecular forces). Gravity and all other long range and global (T symmetry violating) interactions cannot be made to fit completely into the entropy framework and should be treated at least partially dynamically as external forces, space-time curvature, or something like that.

    Why partially and not fully? This applies primarily to electromagnetic forces because they can induce considerable matter fields of statistical nature that can and should be treated thermodynamically. I can think of no comparable situation (perhaps black holes?) for gravity; however, I can recollect that Zel’dovich and Novikov in one of their books derive internal pressure caused by fluctuations of gravity in self-gravitating systems like dust clouds or large star clusters.

    P.S. I know that Hawking thinks that gravitational entropy is a global quantity and should not be localized, but I am not convinced.

    3. Equilibrium thermodynamics is not a good framework at all for the universe. Think in terms of irreversible thermodynamics (IT), entropy density rather than entropy, the local entropy production, processes, fluxes, etc. IT is a thoroughly local theory that does not need a deus ex machina in the form of cosmology, and it applies perfectly well to the universe.

    I don’t see any reasons why we have to revise these fundamentals. What I sense is a confusion caused by an illegitimate attempt to extend thermodynamics to the entire universe.

  20. Michael Cleveland

    An alternative interpretation of the Arrow:

    The arrow of time is implicit in the Lorentz-Fitzgerald-Einstein (LFE) contractions, and it’s curious no one seems to have noticed. Time and motion are complements. Any change of position in space corresponds to a change of position in time; the temporal change always in one direction only, from present to subsequent present, regardless of the direction of change in space, its variables subject only to the degree of motion (i.e., speed). The relationship between motion and time is given by the complementary ratios

    (v/c)^2 + (t_0/t)^2 = 1

    derived from the LFE time dilation equation

    t = t_0/√(1 - (v/c)^2)
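    (Filling in the one intermediate step, in the same notation: squaring the dilation formula gives (t_0/t)^2 = 1 - (v/c)^2, and adding (v/c)^2 to both sides yields the complementary form (v/c)^2 + (t_0/t)^2 = 1 quoted above.)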

    Definitions

    A relative inertial rest frame is implicit in any reference herein to speed, velocity or motion.

    If a traveler takes a long trip, departing and returning at high average relativistic speed, he returns at a time x years in the future relative to his starting frame of reference at a cost of less than x years subjective time. This variable subjective interval can be reasonably described as travel forward in time or accelerated movement through time. Because this subjective cost is variable with spatial velocity, we will use the term “speed through time” for the sake of descriptive simplicity. Also, for simplicity, we will use the term “orientation in time” to describe the source of this variable temporal speed.

    All motion can be described in terms of v/c. Every value for v/c has a corresponding specific orientation in time, and a calculable speed through time (in terms of subjective interval).

    The range of possible values for v/c from an asymptotic approach toward hypothetical absolute zero motion to the asymptotic approach to c encompasses all possible motion. The ratio v/c cannot have a negative value, so there is no possible negative value for t–no possible negative interval (so long as v/c cannot be greater than 1). The arrow of time, therefore, is implicit, and founded in the fundamental geometry of space-time. This is easier to see intuitively if we note that the same is true for length and mass: in the corresponding LFE transformations, the requisite positive value for v/c limits the dimensions for mass or length to positive values (which we already accept intuitively). Hence one could say that the arrow of time is implicit in the length of the telephone pole outside your window or the heft of the keys in your pocket.

    Duration

    Because motion and time form an ontological whole, and because all frames of reference are in motion in relation to some other frames, and all motion has a calculable positive value for t, then it is simpler and more reasonable to view duration as natural motion through time, as a complement to spatial motion, than to view past, present, and future as co-existent from some higher dimensional frame (which requires subsequently higher frames to provide for duration of structure–ad infinitum). Three-dimensional objects are just that, but they “endure” because they move with the observer through space and time. They don’t “have” a fourth dimension; they move through the fourth dimension.

    This has interesting implications. There is no past, and there is no pre-existing future, though the future can be treated as a destination, whereas the past, as a corporeal structure at least, does not exist at all. The Universe exists in a constantly moving local “present,” an infinitesimally thin 3-dimensional surface moving with a 4th dimensional vector. Memory, the physical state of the world, and all other record of the past exist as the effect of prior causative motion and interaction, which have both occurred at and led to the edge of the Universe, which is always “now.” Entropy is a marker, not a cause. And good-bye and good riddance to block time.

  21. Michael Cleveland

    I shouldn’t try to do these things in the wee hours… Disregard the reference to negative v/c. The idea is correct; the verbiage is wrong. The only way to generate a negative interval is to achieve v/c > 1.

  22. I’m having a painful time trying to edit (with more Sean Carroll citations) a refereed conference paper which is more like a meandering blog thread or bull session about “arrow of time” theories by Bell, Gribbin, Laflamme, Hawking, Gold, Gell-Mann, Hartle, Hoyle, Bondi than a real paper. If I put it as is on arXiv, I’ll get some feedback, and at the expense of trashing my own dubious reputation and dragging down that of my full professor co-author. The blog thread in which this comment is embedded is fascinating. I especially like what John Baez and Blake Stacey bring to the table.

    J. Loschmidt, Sitzungsber. Kais. Akad. Wiss. Wien, Math. Naturwiss. Classe 73, 128–142 (1876)

    Wikipedia (on Loschmidt’s paradox) mentions: “One approach to handling Loschmidt’s paradox is the fluctuation theorem, proved by Denis Evans and Debra Searles, which gives a numerical estimate of the probability that a system away from equilibrium will have a certain change in entropy over a certain amount of time. The theorem is proved with the exact time reversible dynamical equations of motion and the Axiom of Causality. The fluctuation theorem is proved utilizing the fact that dynamics is time reversible. Quantitative predictions of this theorem have been confirmed in laboratory experiments at the Australian National University conducted by Edith M. Sevick et al. using optical tweezers apparatus.”

    Time and Classical and Quantum Mechanics and the Arrow of Time

    Philip Vos Fellman, Southern New Hampshire University

    Jonathan Vos Post, [at the time] Woodbury University

    About four years ago, Jonathan Vos Post and I began working on an economics project regarding competitive intelligence and, more broadly, information theory. This led us into what we think are interesting and occasionally novel areas of research. One aspect of this research led us in the direction of game theory, particularly the evolving research on the Nash Equilibrium, polytope computation and non-linear and quantum computing architectures.1 Another area where we found a rich body of emerging theory was evolutionary economics, particularly those aspects of the discipline which make use of the NK Boolean rugged fitness landscape as developed by Stuart Kauffman and the application of statistical mechanics to problems of economic theory, particularly clustered volatility as developed by J. Doyne Farmer and his colleagues at the Santa Fe Institute.2 Some of you may have heard us speak on these subjects at the recent International Conference on Complex Systems held in Boston.3 Other dimensions of this work come out of Jonathan’s experience as a professor of physics, astronomy and mathematics and his long association with Richard Feynman.

    In thinking about information theory at the quantum mechanical level, our discussion, largely confined to Jonathan’s back yard, often centers about intriguing but rather abstract conjectures. My personal favorite, an oddball twist on some of the experiments connected to Bell’s theorem, is the question, “is the information contained by a pair of entangled particles conserved if one or both of the particles crosses the event horizon of a black hole?”

    It is in that last context, and in our related speculation about some of the characteristics of what might eventually become part of a quantum mechanical explanation of information theory that we first encountered the extraordinary work of Peter Lynds.4 This work has been reviewed elsewhere, and like all novel ideas, there are people who love it and people who hate it. One of the main purposes in having Peter here is to let this audience get acquainted with his theory first-hand rather than through an interpretation or argument made by someone else. In this regard, I’m not going to be either summarizing his arguments or providing a treatment based upon the close reading of his text. Rather, I will mention some areas of physics where, to borrow a phrase from Conan-Doyle, it may be an error to theorize in advance of the facts. In particular, I should like to bring the discussion to bear upon various arguments concerning “the arrow of time.” In so doing, I will play the skeptic, if not the downright “Devil’s Advocate” (perhaps Maxwell’s Demon’s advocate would be more precise) and simply question why we might not be convinced that there is an “arrow” of time at all.

    Before I do this, however, I am going to cheat a bit and give you Peter’s abstract, in order to differentiate between some of the conventional notions of time as consisting of instants, and Peter’s explanation of time as intervals:5

    Time enters mechanics as a measure of interval, relative to the clock completing the measurement. Conversely, although it is generally not realized, in all cases a time value indicates an interval of time, rather than a precise static instant in time at which the relative position of a body in relative motion or a specific physical magnitude would theoretically be precisely determined. For example, if two separate events are measured to take place at either 1 hour or 10.00 seconds, these two values indicate the events occurred during the time intervals of 1 and 1.99999…hours and 10.00 and 10.0099999…seconds, respectively. If a time measurement is made smaller and more accurate, the value comes closer to an accurate measure of an interval in time and the corresponding parameter and boundary of a specific physical magnitude’s potential measurement during that interval, whether it be relative position, momentum, energy or other. Regardless of how small and accurate the value is made however, it cannot indicate a precise static instant in time at which a value would theoretically be precisely determined, because there is not a precise static instant in time underlying a dynamical physical process. If there were, all physical continuity, including motion and variation in all physical magnitudes would not be possible, as they would be frozen static at that precise instant, remaining that way. Subsequently, at no time is the relative position of a body in relative motion or a physical magnitude precisely determined, whether during a measured time interval, however small, or at a precise static instant in time, as at no time is it not constantly changing and undetermined. Thus, it is exactly due to there not being a precise static instant in time underlying a dynamical physical process, and the relative motion of a body in relative motion or a physical magnitude not being precisely determined at any time, that motion and variation in physical magnitudes is possible: there is a necessary trade off of all precisely determined physical values at a time, for their continuity through time.

    Having said this, let us now turn to some familiar, and perhaps some not so familiar arguments about “the arrow of time”. The first idea which I’d like to review comes from an article by John Gribbin on time travel, “Quantum time waits for no cosmos”. In his opening statement, Gribbin cites Laflamme, a student of Stephen Hawking:

    The intriguing notion that time might run backwards when the Universe collapses has run into difficulties. Raymond Laflamme, of the Los Alamos National Laboratory in New Mexico, has carried out a new calculation which suggests that the Universe cannot start out uniform, go through a cycle of expansion and collapse, and end up in a uniform state. It could start out disordered, expand, and then collapse back into disorder. But, since the COBE data show that our Universe was born in a smooth and uniform state, this symmetric possibility cannot be applied to the real Universe.

    Gribbin summarizes the arrow of time concept by noting:

    Physicists have long puzzled over the fact that two distinct “arrows of time” both point in the same direction. In the everyday world, things wear out—cups fall from tables and break, but broken cups never re-assemble themselves spontaneously. In the expanding Universe at large, the future is the direction of time in which galaxies are further apart.

    Many years ago, Thomas Gold suggested that these two arrows might be linked. That would mean that if and when the expansion of the Universe were to reverse, then the everyday arrow of time would also reverse, with broken cups re-assembling themselves.

    He then goes on to briefly summarize the “big crunch” theory of universal expansion and contraction, citing a version presented by Murray Gell-Mann and James Hartle. It is here that his account gets into trouble on a theoretical basis, because if Peter Lynds is correct in asserting that time does not flow, then the “arrow of time” is a purely subjective quantity flowing from neurobiological activity (as he indeed argues in “Subjective Perception of Time and a Progressive Present Moment: The Neurobiological Key to Unlocking Consciousness”). Empirically, while recent evidence appears to indicate that local inhomogeneities would prevent the temporal symmetry suggested by Gell-Mann and Hartle, there are a number of deeper issues whose empirical resolution is simply far beyond our present grasp.

    One area which raises some fairly troubling questions for any theory of universal temporal symmetry is the question of whether in the universe we inhabit, presently known physical constants have always had the values which we recognize today. A particularly intriguing argument has been recently advanced in this regard, claiming that the speed of light must have been greater during the very early history of the universe. Unfortunately, the experimental findings associated with that claim have not been duplicated by any other investigators, which casts rather serious doubts upon the validity of the claim.6 With better data on the fine structure of the universe, it may be possible to say something more meaningful in this regard.

    Central to the entire stream of reasoning about temporal symmetry and an arrow of time is the question, “what is the early history of the universe?” Specifically this refers to the first 300,000 years or so after the big bang, which as an empirical matter, we will not be able to address until we have the technology necessary to build at least the next generation of gravity wave detectors. No EM detectors can tell us anything useful about the first 300,000 years of the universe’s history, because baryons and photons had not yet uncoupled and the universe was opaque to electromagnetic radiation. Ancillary questions, like “do stars precede galaxies or galaxies precede stars?” also have a significant bearing on the thermodynamic evolution of the universe.

    A better state of empirical knowledge about the time evolution of the system at the end of the universe would also be required to make any deeply meaningful arguments about an arrow of time and its reversal. For example, there’s what Jonathan refers to as the “Maxwell’s Demon Family Problem.” By this, we mean that at the end of the lifetime of the universe, there may be some unexpected emergent phenomena which would cause the time evolution of the system to behave in unexpected ways. Specifically, there might be emergent mechanisms for the transmission of information over increasing (rescaled) ranges of space-time. In this case, the entropy production at the event horizon of the universe (as a whole) becomes a significant contributor to the thermodynamic evolution of the system. As Freeman Dyson argues, even as the radius of the universe approaches infinity and the density approaches zero, its temperature does not approach zero, and therefore the nature of the “struggle” between entropy and order in a “big crunch” may be characterized by entirely different time evolutions of the system than those with which we are familiar.

    At the micro-level, if one looks at a small number of particles in a closed system, then there are other complications which challenge the validity of the entire “arrow of time” concept. As an analogy, we could think about randomly shuffling a deck of cards. The time evolution of the system is such that in far less time than one needs to exhaust even local sources of energy, any configuration of the cards can be duplicated with a probabilistic certainty of one. Over longer ranges, asymptotic limits and dimensionality become important. For example, with a random walk in one dimension, the state of the system returns to the origin with infinite frequency as t → ∞. There are no reflecting barriers; this is just a function of probability. In a random walk on a two dimensional lattice, the randomly walking point will return to the origin with a probability of 1 with an expected time of infinity. In three dimensions, the probability is roughly 1/3 that the random walk returns to the origin, and as the dimensionality increases, the probability of returning to the origin even over a random walk of infinite length converges on zero as a limit. What this kind of exercise tells us is that the nature of the statistical approach taken to model various dynamical processes will constrain the kinds of solutions which will appear.7
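    As an illustration of those recurrence claims, here is a rough numerical sketch of my own (not from the paper; finite-length walks necessarily understate the true 1D and 2D recurrence probabilities, which approach 1 only as the number of steps grows without bound):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def returned_to_origin(dim, n_steps, rng):
        """True if a simple random walk on the Z^dim lattice revisits the origin
        within n_steps (each step moves +/-1 along one randomly chosen axis)."""
        axes = rng.integers(dim, size=n_steps)
        signs = rng.choice((-1, 1), size=n_steps)
        steps = np.zeros((n_steps, dim), dtype=np.int64)
        steps[np.arange(n_steps), axes] = signs
        path = steps.cumsum(axis=0)
        return bool((~path.any(axis=1)).any())  # any position identically zero?

    n_walks, n_steps = 1000, 20000
    for dim in (1, 2, 3):
        frac = np.mean([returned_to_origin(dim, n_steps, rng) for _ in range(n_walks)])
        print(f"{dim}D: fraction returning within {n_steps} steps ~ {frac:.2f}")
    ```

    In 1D and 2D the estimated fraction creeps toward 1 as the number of steps grows (very slowly in 2D), while in 3D it saturates near Pólya’s value of roughly 0.34, matching the dimensionality point made above.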

    Another problem with attempting to pin down a definitive arrow of time arises from the intrinsic characteristics of certain fairly common time series distributions. In economics we see this problem in clustered volatility. In a more general sense, any over-arching statements about an arrow of time would have to be able to satisfactorily explain heteroskedastic behavior, particularly at those troublesome beginning and end-points of the universe as an evolving dynamical system. In order to encompass this kind of behavior, one must be able to incorporate non-extensive statistical mechanics, which may or may not allow recovery of the standard Boltzmann expression. At a deeper level, it is likely that one has to deal with a series of non-commuting quantum operators. In this context not only do we lack the data necessary and sufficient for the drawing of conclusions about the arrow of time, but it is unclear whether our present methodology would support the “arrow of time” concept even if we had the data, should that data prove to be heteroskedastic in its distribution.

    To better understand the implications of heteroskedasticity, we can look again at the very early history of the universe (although the actual problem is that, at the moment, we cannot look there, but for the sake of argument, we will posit a hypothetical early evolution of the system). If any universal constants have changed since that time (something which we cannot know at the present time), those changes may have been both non-linear and heteroskedastic. In such a case, we might wish to say that the arrow of time then becomes ill-defined. Systems may progress through time evolutions which no longer represent extensive thermodynamics and the directionality of any so-called arrow of time is no longer clear. This situation is then complicated by the fact that the universe has undergone several phase transitions by symmetry breaking. As a result additional forces have emerged in each of these transitions. First, gravity separated out of the other forces, and it is for that reason that gravity wave detectors will be able to probe farther back in time than any other technology. Subsequently electromagnetic weak and strong nuclear forces separated. Not only does the early history of the universe matter a great deal as to whether there is, in fact an “arrow of time” with attendant temporal symmetry, but in the context of emergent properties, we cannot say for certain that there is no additional, (i.e., a fifth) force which might separate out during the future evolution of the universe.

    In the same way, heteroskedastic behavior at the end of the lifetime of the universe might lead the system through what are presently considered to be extremely low probability states, where the concentration of “anti-entropic” behavior might be exceptionally high. While we have no a priori statistical basis for expecting such mechanics, the very existence of heteroskedasticity cautions us that we cannot rule such behavior out. With respect to phase transitions, the situation is even more complicated because with phase transitions, there are well known behavioral regularities associated with the state prior to the transition, and an entirely different set of behavioral regularities associated with the post transition state. In between, of course, are the critical phenomena. However, phase transitions are hardly ever mentioned in the context of an “arrow of time”, because once again from the “arrow of time” perspective, these distributions are extraordinarily ill-behaved and the so-called “arrow” itself becomes ill-defined.

    In closing, refinement of the big bang theory in recent decades (i.e., by Hoyle, Bondi and Gold) poses a number of deep challenges to the “arrow of time” metaphor. Present theory posits an initial state where the universe was very small and the constants very large with an expansion of several orders of magnitude taking place over a relatively brief period of time (less than a second). Over the next ten billion years, the universe expanded but with a slight deceleration due to gravity. Then, some one to two billion years ago the expansion of the universe began to accelerate again, and we do not know why. Is this heteroskedasticity? Is it a function of some kind of “arrow of time”? Nobody knows. Given our present state of cosmological ignorance, it would be, to say the least, premature, to accept any generalized arguments about “the arrow of time”.

  23. Michael Cleveland

    I’ve restated this more correctly and succinctly since I find that I’m better at translating ideas into words in the light of day than in the small hours of the night. My apologies for dragging this out, but it’s an idea that seems to have been overlooked in the discussions of time and the arrow of time, and it should at least be stated correctly so it can be evaluated on whatever merit or lack thereof it may have.

    The arrow of time is implicit in the Lorentz-Fitzgerald-Einstein (LFE) contractions. Time and motion are complements. Any change of position in space corresponds to a change of position in time; the temporal change always in one direction only, from present to subsequent present, regardless of the direction of motion in space, its variables subject only to the degree of motion (i.e., speed). The relationship between motion and time is given by the complementary ratios

    (v/c)^2 + (t_0/t)^2 = 1

    from the LFE time dilation equation

    t = t_0/√(1 - (v/c)^2)

    * * * * *

    0 < v/c < 1

    While it might sometimes be awkward, all motion can be expressed in terms of v/c and the range of possible values for v/c encompasses all possible motion.

    Every value for v/c is associated with a unique positive (present toward future) temporal interval t (in relation to a designated rest frame interval t_0). There is no possible t that is not defined by some value of v/c . Since v/c cannot be greater than 1, there is no possible negative interval, so no possible negative time. Hence the arrow of time and its asymmetry are defined by the fundamental relationship between motion and time, and the science fiction concept of backward travel in time is dead (alas–a fate far worse for some of us than the consequences of murdering one’s own Grandfather).

    Duration

    Because motion and time are inseparably connected, and because all frames of reference are in motion in relation to some other frames, and all motion creates a calculable positive value for the interval t, then it is simpler and more reasonable to view duration as natural motion through time, as a complement to spatial motion, than to view past, present, and future as co-existent from some higher dimensional frame which requires still higher frames to provide for duration of structure–ad infinitum. Three-dimensional objects are just that, but they “endure” because they move with the observer through space and time. They don’t “have” a fourth dimension; they move through (along?) the fourth dimension.

    This has interesting implications. There is no past, and there is no pre-existing future, though the future can be treated as a destination, whereas the past, as a corporeal structure, does not exist at all. The Universe exists in a constantly moving local “present,” an infinitesimally thin 3-dimensional surface with a 4th dimensional vector. All record of the past exists as the effect of prior causative motion and interaction which cumulatively form the present (and fleeting) state of the Universe. It might be valid to describe “now” as the edge of the universe, but it’s probably more accurate to state that “now” is the Universe (and there is nothing in that statement that is negated by issues of relativity and simultaneity–in fact the “now” Universe fits perfectly into those problems). If this is true, then Entropy is a motion-related marker, not a cause, and Block Time is a myth.

    When I wrote the original post the other night, I confess that I was deeply fascinated and distracted by the impossibility of negative values for length, mass, and motion (speed, to avoid confusion over negative vectors). I’m afraid that fascination, combined with a considerable degree of fatigue, translated into a rather illogical misstatement. I hope this puts it more clearly.
