A recent post of Jen-Luc’s reminded me of Huw Price and his work on temporal asymmetry. The problem of the arrow of time — why is the past different from the future, or equivalently, why was the entropy in the early universe so much smaller than it could have been? — has attracted physicists’ attention (although not as much as it might have) ever since Boltzmann explained the statistical origin of entropy over a hundred years ago. It’s a deceptively easy problem to state, and correspondingly difficult to address, largely because the difference between the past and the future is so deeply ingrained in our understanding of the world that it’s too easy to beg the question by somehow assuming temporal asymmetry in one’s purported explanation thereof. Price, an Australian philosopher of science, has made a specialty of uncovering the hidden assumptions in the work of numerous cosmologists on the problem. Boltzmann himself managed to avoid such pitfalls, proposing an origin for the arrow of time that did not secretly assume any sort of temporal asymmetry. He did, however, invoke the anthropic principle — probably one of the earliest examples of the use of anthropic reasoning to help explain a purportedly-finely-tuned feature of our observable universe. But Boltzmann’s anthropic explanation for the arrow of time does not, as it turns out, actually work, and it provides an interesting cautionary tale for modern physicists who are tempted to travel down that same road.

The Second Law of Thermodynamics — the entropy of a closed system will not spontaneously decrease — was understood well before Boltzmann. But it was a phenomenological statement about the behavior of gases, lacking a deeper interpretation in terms of the microscopic behavior of matter. That’s what Boltzmann provided. Pre-Boltzmann, entropy was thought of as a measure of the *uselessness* of arrangements of energy. If all of the gas in a certain box happens to be located in one half of the box, we can extract useful work from it by letting it leak into the other half — that’s low entropy. If the gas is already spread uniformly throughout the box, anything we could do to it would cost us energy — that’s high entropy. The Second Law tells us that the universe is winding down to a state of maximum uselessness.

Boltzmann suggested that the entropy was really counting the number of ways we could arrange the components of a system (atoms or whatever) so that it really didn’t matter. That is, the number of different microscopic states that were macroscopically indistinguishable. (If you’re worried that “indistinguishable” is in the eye of the beholder, you have every right to be, but that’s a separate puzzle.) There are far fewer ways for the molecules of air in a box to arrange themselves exclusively on one side than there are for the molecules to spread out throughout the entire volume; the entropy is therefore much higher in the latter case than the former. With this understanding, Boltzmann was able to “derive” the Second Law in a statistical sense — roughly, there are simply far more ways to be high-entropy than to be low-entropy, so it’s no surprise that low-entropy states will spontaneously evolve into high-entropy ones, but not vice-versa. (Promoting this sensible statement into a rigorous result is a lot harder than it looks, and debates about Boltzmann’s *H*-theorem continue merrily to this day.)
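Boltzmann’s counting can be made concrete with a toy model (a sketch only; the molecule count `N = 50` and the left/right partition are illustrative choices, not anything physical):

```python
from math import comb, log

# Boltzmann's S = k log W: entropy counts the microstates per macrostate.
# Toy model: N labeled molecules, each in either the left or right half
# of a box. A macrostate is "k molecules on the left"; a microstate is
# the specific assignment of molecules to halves.
N = 50

def microstates(k_left):
    # number of ways to choose which k molecules sit on the left
    return comb(N, k_left)

def entropy(k_left):
    # entropy in units of Boltzmann's constant k_B
    return log(microstates(k_left))

print(entropy(N))       # all molecules on one side: log(1) = 0
print(entropy(N // 2))  # spread evenly: the maximum
```

Running this shows the lopsided macrostate has exactly one microstate (entropy zero), while the evenly spread macrostate has about 10^14 of them — which is the whole content of the statistical Second Law in miniature.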

Boltzmann’s understanding led to both a deep puzzle and an unexpected consequence. The microscopic definition explained why entropy would tend to increase, but didn’t offer any insight into why it was so low in the first place. Suddenly, a thermodynamics problem became a puzzle for cosmology: why did the early universe have such a low entropy? Over and over, physicists have proposed one or another argument for why a low-entropy initial condition is somehow “natural” at early times. Of course, the *definition* of “early” is “low-entropy”! That is, given a change in entropy from one end of time to the other, we would always define the direction of lower entropy to be the past, and higher entropy to be the future. (Another fascinating but separate issue — the process of “remembering” involves establishing correlations that inevitably increase the entropy, so the direction of time that we remember [and therefore label “the past”] is always the lower-entropy direction.) The real puzzle is why there is such a change — why are conditions at one end of time so dramatically different from those at the other? If we do not assume temporal asymmetry *a priori*, it is impossible in principle to answer this question by suggesting why a certain initial condition is “natural” — without temporal asymmetry, the same condition would be equally natural at *late* times. Nevertheless, very smart people make this mistake over and over, leading Price to emphasize what he calls the Double Standard Principle: any purportedly natural initial condition for the universe would be equally natural as a final condition.

The unexpected consequence of Boltzmann’s microscopic definition of entropy is that the Second Law is not iron-clad — it only holds statistically. In a box filled with uniformly-distributed air molecules, random motions will occasionally (although very rarely) bring them all to one side of the box. It is a traditional undergraduate physics problem to calculate how often this is likely to happen in a typical classroom-sized box; reassuringly, the air is likely to be nice and uniform for a period much much much longer than the age of the observable universe.
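The back-of-the-envelope version of that undergraduate problem is easy to sketch (the molecule count below is a round illustrative number for a classroom-sized box, not a measured value):

```python
import math

# Probability that N independent gas molecules all happen to be in the
# left half of a box at a given instant: (1/2)**N. For realistic N this
# underflows any float, so we report log10 of the probability instead.
def log10_prob_all_in_half(n_molecules):
    return -n_molecules * math.log10(2)

# Illustrative round number: a classroom-sized box holds very roughly
# 10**26 air molecules.
n = 10**26
print(f"log10(probability) ~ {log10_prob_all_in_half(n):.3e}")

# For a handful of molecules the fluctuation is perfectly ordinary:
print(2.0 ** -10)  # ten molecules: about one part in a thousand
```

Ten molecules spontaneously gather on one side about once per thousand snapshots; 10^26 molecules do it with probability around 10^(-3×10^25), which is why nobody waits around for it.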

Faced with the deep puzzle of why the early universe had a low entropy, Boltzmann hit on the bright idea of taking advantage of the statistical nature of the Second Law. Instead of a box of gas, think of the whole universe. Imagine that it is in thermal equilibrium, the state in which the entropy is as large as possible. By construction the entropy can’t possibly increase, but it will tend to fluctuate, every so often diminishing just a bit and then returning to its maximum. We can even calculate how likely the fluctuations are; larger downward fluctuations of the entropy are much (exponentially) less likely than smaller ones. But eventually every kind of fluctuation will happen.

You can see where this is going: maybe our universe is in the midst of a fluctuation away from its typical state of equilibrium. The low entropy of the early universe, in other words, might just be a statistical accident, the kind of thing that happens every now and then. On the diagram, we are imagining that we live either at point A or point B, in the midst of the entropy evolving between a small value and its maximum. It’s worth emphasizing that A and B are utterly indistinguishable. People living in A would call the direction to the left on the diagram “the past,” since that’s the region of lower entropy; people living at B, meanwhile, would call the direction to the right “the past.”

During the overwhelming majority of such a universe’s history, there is no entropy gradient at all — everything just sits there in a tranquil equilibrium. So why should we find ourselves living in those extremely rare bits where things are evolving through a fluctuation? The same reason why we find ourselves living in a relatively pleasant planetary atmosphere, rather than the forbiddingly dilute cold of intergalactic space, even though there’s much more of the latter than the former — because that’s where we can live. Here Boltzmann makes an unambiguously anthropic move. There exists, he posits, a much bigger universe than we can see; a multiverse, if you will, although it extends through time rather than in pockets scattered through space. Much of that universe is inhospitable to life, in a very basic way that doesn’t depend on the neutron-proton mass difference or other minutiae of particle physics. Nothing worthy of being called “life” can possibly exist in thermal equilibrium, where conditions are thoroughly static and boring. Life requires motion and evolution, riding the wave of increasing entropy. But, Boltzmann reasons, because of occasional fluctuations there will always be some points in time where the entropy is temporarily evolving (there is an entropy gradient), allowing for the existence of life — we can live there, and that’s what matters.

Here is where, like it or not, we have to think carefully about what anthropic reasoning can and cannot buy us. On the one hand, Boltzmann’s fluctuations of entropy around equilibrium allow for the existence of dynamical regions, where the entropy is (just by chance) in the midst of evolving to or from a low-entropy minimum. And we could certainly live in one of those regions — nothing problematic about that. The fact that we can’t directly *see* the far past (before the big bang) or the far future in such a scenario seems to me to be quite beside the point. There is almost certainly a lot of universe out there that we can’t see; light moves at a finite speed, and the surface of last scattering is opaque, so there is literally a screen around us past which we can’t see. Maybe all of the unobserved universe is just like the observed bit, but maybe not; it would seem the height of hubris to assume that everything we don’t see must be just like what we do. Boltzmann’s *goal* is perfectly reasonable: to describe a history of the universe on ultra-large scales that is on the one hand perfectly natural and not finely-tuned, and on the other features patches that look just like what we see.

But, having taken a bite of the apple, we have no choice but to swallow. If the only thing that one’s multiverse does is to *allow* for regions that resemble our observed universe, we haven’t accomplished anything; it would have been just as sensible to simply posit that our universe looks the way it does, and that’s the end of it. We haven’t truly *explained* any of the features we observed, simply provided a context in which they can exist; but it would have been just as acceptable to say “that’s the way it is” and stop there. If the anthropic move is to be meaningful, we have to go further, and explain why within this ensemble it makes sense to observe the conditions we do. In other words, we have to make some *conditional* predictions: given that our observable universe exhibits property *X* (like “substantial entropy gradient”), what other properties *Y* should we expect to measure, given the characteristics of the ensemble as a whole?

And this is where Boltzmann’s program crashes and burns. (In a way that is ominous for similar attempts to understand the cosmological constant, but that’s for another day.) Let’s posit that the universe is typically in thermal equilibrium, with occasional fluctuations down to low-entropy states, and that we live in the midst of one of those fluctuations because that’s the only place hospitable to life. What follows?

The most basic problem has been colorfully labeled “Boltzmann’s Brain” by Albrecht and Sorbo. Remember that the low-entropy fluctuations we are talking about are incredibly rare, and the lower the entropy goes, the rarer they are. If it almost never happens that the air molecules in a room all randomly zip to one half, it is just as unlikely (although still inevitable, given enough time) that, having ended up in one half, they will continue on to collect in one *quarter* of the room. On the diagram above, points like C are overwhelmingly more common than points like A or B. So if we are explaining our low-entropy universe by appealing to the anthropic criterion that it must be possible for intelligent life to exist, quite a strong prediction follows: we should find ourselves in the *minimum possible* entropy fluctuation consistent with life’s existence.

And that minimum fluctuation would be a “Boltzmann Brain.” Out of the background thermal equilibrium, a fluctuation randomly appears that collects some degrees of freedom into the form of a conscious brain, with just enough sensory apparatus to look around and say “Hey! I exist!”, before dissolving back into the equilibrated ooze.

You might object that such a fluctuation is very rare, and indeed it is. But so would be a fluctuation into our whole universe — in fact, quite a bit more rare. The momentary decrease in entropy required to produce such a brain is fantastically less than that required to make our whole universe. Within the infinite ensemble envisioned by Boltzmann, the overwhelming majority of brains will find themselves disembodied and alone, not happily ensconced in a warm and welcoming universe filled with other souls. (You know, like ours.)
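The comparison can be caricatured in a few lines of Python (the ΔS values are made-up placeholders purely to exhibit the exponential scaling, not physical estimates):

```python
# Schematic only: the probability of a spontaneous downward entropy
# fluctuation of size dS scales like exp(-dS), with entropy measured
# in units of Boltzmann's constant.
def log_prob_ratio(dS_brain, dS_universe):
    # log of P(brain fluctuation) / P(universe fluctuation)
    #   = log( exp(-dS_brain) / exp(-dS_universe) )
    #   = dS_universe - dS_brain
    return dS_universe - dS_brain

# Placeholder numbers: any brain costs vastly less entropy decrease
# than a whole low-entropy universe, so the log of the ratio is an
# enormous positive number -- lone brains utterly dominate the count.
print(log_prob_ratio(1e10, 1e100))
```

The specific numbers are irrelevant; the point is structural. So long as the brain’s entropy dip is smaller than the universe’s, the ratio of their probabilities is exponential in the difference, and the disembodied brains win by a margin no anthropic hand-waving can close.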

This is the general thrust of argument with which many anthropic claims run into trouble. Our observed universe has something like a hundred billion galaxies with something like a hundred billion stars each. That’s an extremely expansive and profligate universe, if its features are constrained solely by the demand that we exist. Very roughly speaking, anthropic arguments would be more persuasive if our universe were minimally constructed to allow for our existence; e.g. if the vacuum energy were small enough to allow for a single galaxy to arise out of a really rare density fluctuation. Instead we have a hundred billion such galaxies, not to mention all of those outside our Hubble radius — an embarrassment of riches, really.

But, returning to Boltzmann, it gets worse, in an interesting and profound way. Let’s put aside the Brain argument for a moment, and insist for some reason that our universe did fluctuate somehow into the kind of state in which we currently find ourselves. That is, here we are, with all of our knowledge of the past, and our observations indicating a certain history of the observable cosmos. But, to be fair, we don’t have detailed knowledge of the *microstate* corresponding to this universe — the position and momentum of each and every particle within our past light cone. Rather, we know some gross features of the macrostate, in which individual atoms can be safely re-arranged without our noticing anything.

Now we can ask: assuming that we got to this macrostate via some fluctuation out of thermal equilibrium, what kind of trajectory is likely to have gotten us here? Sure, we *think* that the universe was smaller and smoother in the past, galaxies evolved gradually from tiny density perturbations, etc. But what we actually have access to are the positions and momenta of the photons that are currently reaching our telescopes. And the fact is, given all of the possible past histories of the universe consistent with those photons reaching us, in the vast majority of them the impression that we are observing an even-lower-entropy past is an *accident*. If all pasts consistent with our current macrostate are equally likely, there are many more in which the past was a chaotic mess, in which a vast conspiracy gave rise to our false impression that the past was orderly. In other words, if we ask “What kind of early universe tends to naturally evolve into what we see?”, the answer is the ordinary smooth and low-entropy Big Bang. But here we are asking “What do most of the states that could possibly evolve into our current universe look like?”, and the answer there is a chaotic high-entropy mess.

Of course, nobody in their right minds believes that we really did pop out of a chaotic mess into a finely-tuned state with false memories about the Big Bang (although young-Earth creationists do believe that things were arranged by God to trick us into thinking that the universe is much older than it really is, which seems about as plausible). We assume instead that our apparent memories are basically reliable, which is a necessary assumption to make sensible statements of any form. Boltzmann’s scenario just doesn’t quite fit together, unfortunately.

Price’s conclusion from all this (pdf) is that we should take seriously the Gold universe, in which there is a low-entropy future collapsing state that mirrors our low-entropy Big Bang in the past. It’s an uncomfortable answer, as nobody knows any reason why there should be low-entropy boundary conditions in both the past and the future, which would involve an absurd amount of fine-tuning of our particular microstate at every instant of time. (Not to mention that the universe shows no sign of wanting to recollapse.) The loophole that Price and many other people (quite understandably) overlook is that the Big Bang need not be the true beginning of the universe. If the Bang was a localized baby universe in a larger background spacetime, as Jennie Chen and I have suggested (paper here), we can comply with the Double Standard Principle by having *high*-entropy conditions in both the far past and the far future. That doesn’t mean that we have completely avoided the problem that doomed Boltzmann’s idea; it is still necessary to show that baby universes would most often look like what we see around us, rather than (for example) much smaller spaces with only one galaxy each. And this whole “baby universe” idea is, shall we say, a mite speculative. But explaining the difference in entropy between the past and future is at least as fundamental, if not more so, as explaining the horizon and flatness problems with which cosmologists are so enamored. If we’re going to presume to talk sensibly and scientifically about the entire history of the universe, we have to take Boltzmann’s legacy seriously.

But how many of Boltzmann’s brains worry about the anthropic principle?

I’ve never been convinced that there’s a sensible definition of “the entropy of the universe”, but I’ve never gotten very far with that argument with anyone.

Yeah, it sort of seems like shorthand for “variables we don’t know exactly how to manipulate” go here…a cosmic trashcan that appears to empty!

why can’t we have both low and high entropic conditions at the same time? i dunno…

all we’re doing is saying that the timespacewave function is only a portion of the resonance of the multiverses, and you are positing static variables in a dynamic equilibrium function. DNC

I don’t understand anything in this post. I think whatever fist-sized hunk of brain that is used for understanding this post is either a) missing from my head, or b) holding all of my excitement for Snakes On a Plane, as well as my hatred for the Green Lantern.

While reading Sean’s post, I had a thought similar to Aaron Bergman’s. I can imagine what it means for all the gas molecules in a box to be evenly distributed, or for them to be in one half of a box. I’m not sure what it would mean for the universe to be in a state of thermal equilibrium. However, that may be because I’m not trained in these concepts (or because random fluctuations of the universe don’t happen to have crafted for me a brain that’s adequate for the task).

Sean, very neat post!

From the “NOW” moment back to the start-point, Big-Bang, there is a specific (thermal) distance, is this equal to the “now” moment of a theorized big-crunch, or Heat-Death of the Universe? .. just which side of the ‘Bang’ are we at?

For some reason reading this I am reminded of these duplicate Hubble bubbles that people discuss with parallel worlds because of the Holographic Principle. How would that affect things? I guess this would be similar to Aaron Bergman’s note – given a universe where there are only finite number of bits inside any surface area (due to Holographic Principle) and an infinite number of regions – or even a finite but big enough set of regions. Then unless most of the regions are constrained won’t there be regions with just about any entropy due to pigeon hole principle arguments? Thus there is no meaningful universal entropy.

That Holographic Principle seems very Laplacian by the way, very anti-quantum uncertainty feeling and even more anti-continuous space General Relativity feeling.

The thermodynamic limit does not exist for gravitating systems. You can get by with approximate thermodynamic equilibrium on short scales, but on large scales I have no idea what any of this means.

Another fascinating question! The number of states available for any thermodynamic system is increasing, which should be a clue to the solution.

Keep in mind – when Boltzmann formulated his contributions to the second law, he was without the aid of quantum mechanics… Unlike classical mechanics, quantum mechanics allows for low-entropy fluctuations to emerge from an overall high-entropy universe without violating the second law. Roughly speaking, quantumness safeguards the Universe from a classical heat-death.

Simply put – if Boltzmann had the luxury of a “quantum brain”, then he probably would have no need for an “anthropic brain”.

Asymmetric transitions.

http://pespmc1.vub.ac.be/ASYMTRANS.html

This occurs with matter generation in a quasi-static version of Einstein’s static vacuum model, because you have to condense or compress his negative pressure energy down over a finite enough region of space to attain the matter density before you can make a real particle pair from this energy.

This has a rarefying effect on the vacuum which necessarily increases negative pressure, (e.g., the “hole” leaves a real hole in the vacuum), which drives expansion, while holding the universe flat and stable, since the increase in the matter density and positive gravitational curvature gets proportionally offset by the increase in negative pressure.

In this case, tension between the vacuum and ordinary matter increases as the universe expands, and this will eventually inevitably compromise the integrity of the forces…

boom

Young-Earth creationists are wrong, of course. Everyone knows the Universe was created Last Thursday. Think of it. It takes a lot less effort. The whole 40+ gigalightyear radius Universe didn’t have to be created even once. Just a sphere about a light week in radius. Much smaller. Now then, attention to detail is important. The photons at a light week out have to be just the right frequencies and they have to be pointed in just the right directions, otherwise the effect is ruined. If there are any mistakes, though, they can be corrected Next Thursday.

I’m proud of God’s accomplishments. His creation of the WMAP probe, the Chandra X-Ray telescope, and HST were perhaps none-too-subtle, but on the whole, the Big Universe illusion really holds together. I’m hoping and praying that next week, He’ll create a little cooler weather.

Markk, I would agree with that. A similar argument for possible histories of the universe was given here.

I suggest we ask Slartibartfast about all this.

So if I was the universe and was to look back on myself, I would see myself as perfect and I would be evolving into my past self, as I travelled into the future. I would be man at the moment, a perfect man in the past and will reach perfection in the future?

Can someone explain why the early universe is believed to have been in a low-entropy state? I have only had a course in undergrad stat mech, but it seems to me that a universe consisting of only a highly energetic uniform gas of elementary particles would be high entropy. Please excuse my ignorance.

Thanks, Bob

Count,

I love the final line of the abstract…. “unquestionably real”.

Nothing like unabashed confidence….

Elliot

I am under the (perhaps mistaken) impression that given a sufficiently fine-grained view, entropy is actually a conserved quantity. Certainly this is true for collisionless systems, for which Liouville’s theorem ensures that the phase space volume a system occupies is conserved. Classically this holds even for a gas, if I am able to follow all individual gas particles’ positions and momenta and can describe the particles’ interactions with a Hamiltonian. Does a quantum description somehow save the day and allow entropy to increase?

If entropy is in fact ultimately conserved, perhaps an explanation of the seeming progression of time should be looked for elsewhere? Or alternatively, is the arrow of time a coarse-grained (i.e. observer-dependent) phenomenon?

Elliot, indeed

The article mentions that Elvis didn’t die prematurely in some sector of the multiverse, so that’s good news for Elvis fans. This was probably the first PRD article that mentioned Elvis.

mqk, unitary time evolution leads to the same conclusion as in the classical case. This is in fact the source of the “information paradox” problem for evaporating black holes. If the “fine grained” entropy of a system could really increase then that would amount to a fundamental source of information loss.

Thanks for putting in the energy to write a great post about one of the trickiest and most controversial issues in physics! Shockingly, I agree with almost all of it. Like you, I’m a big fan of Huw Price.

A few random comments:

Sean Carroll wrote:

Just for people who haven’t thought about this much, it’s worth restating that even having low-entropy boundary conditions in the *past* involves an absurd amount of fine-tuning of our particular microstate. Putting low-entropy boundary conditions in both the past and future means we have an “absurd squared” amount of fine-tuning. We gain symmetry at the expense of more fine-tuning.

There certainly *seems* to be no evidence for this symmetry. Indeed, if we take what astronomers see today as the last word, everything about the universe suggests a vast *asymmetry* between the beginning of the universe (Big Bang) and the end (indefinitely accelerating expansion). So, personally, I don’t find the symmetry between past and future philosophically pleasing enough to impose it at the expense of extra fine-tuning.

On the other hand, nature could have more tricks up her sleeve – she usually does – and I doubt we’re close to the last word on this subject.

Have you read Huw Price’s ideas on how to search for starlight *coming from the future*? It sounds like an insane idea until you remember that the equations governing light are symmetric under time reversal, and most starlight formed now should keep zipping through space into the very far future… so if the universe were time-symmetric and there was a time-reversed Big Bang in the far future, stars at that end of time might send light *back* to the current era.

It may still be an insane idea, but it’s harder to dismiss than you might at first think. In particular, you have to do some calculations to understand what “light coming from the future” is actually like (technically: advanced rather than retarded solutions of Maxwell’s equations).

You can read about these ideas in Price’s book.

Price has some even more speculative ideas about how an “absurd squared” amount of fine tuning could make classical mechanics look quantum… or at least strange. Normally we imagine the experimenter has “free will” when deciding to, say, measure the spin of an electron either along the z axis or the y axis. More precisely, we imagine this decision is uncorrelated with the state of the electron. But with an “absurd squared” amount of fine tuning, is this assumption still justified?

As you can see, this business gets pretty weird pretty fast….

Aaron Bergman wrote:

Yes – there are many layers of subtlety to this arrow of time business, and this is one. People like to use thermodynamic concepts like “equilibrium” when discussing the arrow of time, but in so doing they neglect that *even in Newtonian gravity, thermodynamic equilibrium does not exist*, because the energy is unbounded below.

Then, when we bring in general relativity, there’s another layer of subtlety: there’s not even a clear-cut concept of energy anymore! This is called the problem of time. It’s distinct from the problem of the arrow of time… but it gets involved when we try to use thermodynamics while taking general relativity into account.

Bob E. writes:

That’s a great question. A hot uniform gas does indeed seem to have high entropy… but that’s neglecting gravity. Let’s stick to Newtonian gravity at first. It turns out that if you have a big enough box of hot uniform gas, it’s not in thermal equilibrium: gravity will make it clump up, reducing its potential energy and increasing its kinetic energy, making it even hotter, raising its entropy! So, its entropy may look high, but it’s low compared to what it could be.

In fact, a “clumping” process roughly like this did occur in our universe – that’s basically how galaxies and stars were formed! This clumping process is not over yet, either. In Newtonian gravity it can continue indefinitely – so there is, in fact, no state of thermal equilibrium when we take gravity into account.

Life gets more complicated when we try to take general relativity into account. We need it to understand the expansion of the universe, and when enough clumping occurs we need it to understand the resulting black holes. But, as I mentioned earlier, nobody really knows how to combine thermodynamics and general relativity.

I’ll join Aaron Bergman and Bob E. in saying that I don’t quite get what this is all about, although my concerns are a little different.

Just to be clear, when I think of the “entropy of universe,” I’m thinking about the entropy of the quantum fields, excluding gravity. For the times that we are discussing (inflation to today) gravity can be treated classically, and therefore does not contribute to the entropy. (Correct me if this is wrong.)

Since entropy is an extensive quantity, it was low in the early universe because the early universe was very small. (It was also hot, but the smallness wins.) The universe expanded from this small, low entropy state very quickly. Since the expansion was not adiabatic, all sorts of great, non-equilibrium stuff got to happen, like life.

The analogy I have in mind is an insulated chamber with an adjustable volume (via a piston, for example). Start out with a small volume filled with water. The water is in equilibrium, the highest entropy it can achieve given the macroscopic constraints. If I slowly increase the volume, then the water evaporates gradually, everything cools a bit, and the final entropy is the same as the starting entropy. If I rapidly increase the volume of the chamber, the water bubbles and boils and eventually reaches a state of much higher entropy. The evolution of our universe is like the second case, with rapid expansion causing all sorts of fizzing and popping that leads to the non-equilibrium mess of galaxies, planets and physics blogs.

It seems to me the question is not “Why was the starting entropy so low?” The question is “Why was the starting entropy so high?” How did the universe get into an equilibrium state that could produce the homogeneous universe that we see today? Equilibration takes time, something that you don’t have at the beginning of the universe. I thought that this was one of the problems that inflation solved. Inflation happens after some sort of equilibration, expanding the equilibrated region to cosmic size.

Other interesting questions are “Why did the universe start out so small?” and “Why did it expand so fast?” but the increasing entropy just seems to come along for the ride.

Is there something I’m totally missing here? Do I need to treat the gravitational entropy? Is this debate about earlier times?

Thanks for the thought provoking post. I obviously need more thought before I’m done with it.

Gavin

Pingback: Ars Mathematica » Blog Archive » Cosmic Variance on Boltzmann

The entropy of quantum fields unfortunately can mean different things. If you mean that there’s a density matrix and something like

S = – Tr rho log rho ,

then that’s a conserved quantity. If you mean some sort of coarse graining, then you’re still stuck with the same problems as in the classical sense.
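The conservation claim is easy to check numerically (a quick sketch with numpy; the 2×2 state and the seed for the random unitary are arbitrary choices):

```python
import numpy as np

def von_neumann_entropy(rho):
    # S = -Tr(rho log rho), computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # treat 0*log(0) as 0
    return float(-np.sum(evals * np.log(evals)))

# A mixed two-level state (diagonal density matrix)
rho = np.diag([0.7, 0.3]).astype(complex)

# Build a random unitary from the QR decomposition of a random complex
# matrix, then evolve rho -> U rho U^dagger (unitary time evolution).
rng = np.random.default_rng(0)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
u, _ = np.linalg.qr(m)
rho_evolved = u @ rho @ u.conj().T

print(von_neumann_entropy(rho), von_neumann_entropy(rho_evolved))
# the two entropies agree to numerical precision
```

The entropy is a function of the eigenvalues of ρ alone, and a unitary conjugation leaves the spectrum untouched, so the two printed numbers match — which is exactly why any actual entropy *increase* must come from coarse graining, as the comment says.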

As I remember, this statement depends on the existence of the thermodynamic limit.

As for homogeneity, I’m not sure I understand what you’re asking, but it turns out that the initial conditions for inflation seem to require uniformity on scales larger than the horizon at the time. In fact, the calmness you need, I think, is a very low entropy situation (if those words mean anything) — Sean wrote about this somewhere, so I’m sure he’ll correct me if I have it wrong.

Aaron,

I do mean with coarse graining. What are the “same problems as in the classical sense”?

Gavin

The lack of the thermodynamic limit.

To answer Gavin’s and other questions: the early Universe expanded quickly because c was much greater. Since hc appears to be constant, the Planck value h was very small, leading to a low-entropy state. The increasing entropy in quantum mechanics is intimately tied to Relativity.

I’m not sure I really understand what Aaron is driving at. Sure, it is not known precisely how to define “gravitational entropy”. But surely it is clear that, whether we know how to define it or not, there *is* such a thing. In other words, isn’t it clear that there is a connection between the arrow of time and the extreme smoothness of the geometry of the early Universe, whether or not we can *formulate* this in terms of familiar ideas from thermodynamics?

Not to muddy the waters here but gravitational entropy is given an interesting analysis by Roger Penrose. His reasoning “explains” the unusually low entropy in the early universe. He uses what he calls the “Weyl curvature hypothesis” constraining this portion of the tensor.

All in the “layperson’s” Emperor’s New Mind.

His argument however does not appear to have achieved widespread acceptance as far as I can determine.

Some of you might find this paper interesting and relevant:

On the Origins of the Arrow of Time: Why There is Still a Puzzle about the Low Entropy Past by Huw Price (PDF, 63KB)

Sean, can you provide any pointers to what you’re talking about with this bit?

Are we talking about chemical processes in the brain? Or information theory? Or that Star Trek episode where McCoy posits that memory operates via tachyons?

Okay, probably not that last one. But whenever somebody brings up memory or perception in connection with cosmology — paging Julian Barbour — I get confused…

One basic disagreement about the ‘entropy of the Universe’ seems to be, what would be the Universe with the highest entropy.

One argument (essentially from holography) is that in any given volume the maximal entropy state consists of big black holes filling that volume.

Sean’s counterargument is that black holes evaporate into the surrounding space. Now these two arguments are actually at cross purposes, because there is another fundamental question: how big is the Universe and does it have boundaries?

If the Universe is finite and closed then the black holes win the day. If it is infinite and tends asymptotically to de Sitter space then any black hole smaller than the horizon size will evaporate, leaving just a thermal bath.

I’m not sure how Sean’s proposal for ‘baby inflating Universes’ in a sea of time-symmetric de Sitter space is really different from Boltzmann’s ‘multiverse’.

Perhaps it is again the question of the total amount of space and stuff in the Universe. If the Universe did have a conserved, finite amount of space and stuff then the Boltzmann picture seems to be unavoidable.

The special property of inflation is that given the (very unlikely) fluctuation that sets it off, it automatically produces a whole lot more space and stuff with a peculiarly nice and uniform density distribution. I would assume that Sean wants to invoke this special property as solving part or all of the problem.

Dyson, Kleban, Susskind put forward here:

http://arxiv.org/abs/hep-th/0208013

the modern version of the ‘Boltzmann’s brain’ picture, and it’s still not clear how to get round it, unless you can show that inflation gives certain types of low-entropy region (like the one we seem to live in) a decisive statistical advantage over other low-entropy regions that arise directly from random fluctuation.

…whoops, slight misstatement there: de Sitter space is actually effectively finite because it has a horizon at a fixed distance from the observer.

Anyway this does not adversely affect the Dyson-Kleban-Susskind arguments.

For asymptotically de Sitter space with a region inside it undergoing slow-roll inflation, although there is still a de Sitter horizon, the volume inside the inflating region quickly becomes exponentially larger.

Why? It’s easy to see that thermodynamics works and makes sense on scales where the gravitational interaction is relatively unimportant, but I don’t see why that implies that thermodynamics should make any sense on universal scales.

(The quantity called entropy in a lot of cosmological calculations is a local thing, and I don’t see any reason why it should integrate to a useful global entropy.)

Strominger: Let’s move on, okay? :) Or why not just inject the Pascalian triangle and pick whatever artsy numbers you like, to create the universe the way you like it? :) Use a “marble drop” to illustrate my point?

Some people do not like gravity under “any” conditions?

David,

Re: “memory”… I believe the act of storing information requires the expenditure of energy thereby increasing the entropy. That’s from an info-theoretic basis. No free lunch…Maxwell’s demon etc.

Elliot

Some might be quick to point out the issues and sources of such illusions about the outcomes of the universe?

There’s always “fool’s gold” even amidst the nuggets? Really?

So “no” simpler entropic valuation? No supersymmetry?

Maybe then having reached such a “supersymmetrical state” the tunneling (QGP assumption here) allowed for this multiversian description as… a Pascalian one? :)

All “probable outcomes” are satisfactorily okay? So where did this “point” emerge?

John Baez wrote:

“There certainly seems to be no evidence for this symmetry. Indeed, if we take what astronomers see today as the last word, everything about the universe suggests a vast asymmetry between the beginning of the universe (Big Bang) and the end (indefinitely accelerating expansion).”

I don’t think that is necessarily what is suggested by astronomers’ observations. These conclusions require projections that aren’t necessarily the most natural solution.

For example, the most natural (naive) projection backwards in time doesn’t include inflationary theory; rather, it indicates that the universe had a certain volume when the BB occurred.

That we had a big bang only indicates that big bangs happen, so if a universe with volume can have a big bang, then your conclusion ought to be that the accelerating universe is once again racing toward that end.

What we actually know is that entropy always increases, so the most natural observational conclusion is that it always has and will.

And yes, I know just how naive that appears, but Einstein said that meant “god” was talking.

The Universe is based upon a very simple principle: Expansion of Space is indistinguishable from the forward flow of time. In math terms, R = ct. Scale R of the Universe is its age t multiplied by factor c. That is why the Universe expands, for as t increases the R expands. This is the first arrow of time, the Cosmological Arrow.

The Universe can’t expand at the same rate c continuously, for gravitation slows it down. Factor c is further related to t by GM = tc^3, where GM combines the Mass and Newton constant of the Universe. When t was tiny, c was enormous and the Universe expanded like a bang. As t increases expansion slows and continues asymptotically to this day.

Since the product hc is constant, h increases proportional to t^(1/3). Since R increases proportional to t^(2/3), the number of available quantum states in this volume increases. A growing h leads to an increasing entropy. That is the second arrow of time, the Thermodynamic Arrow.

If one calculates, the total energy of the Universe is E = 0! It’s the ultimate free lunch, which is how the Universe expanded from a tiny size to the complexity we observe today. You needn’t add any energy to the system. Increasing h leads to complexity of the Universe.

Now we can state some initial conditions: At time t = 0, R = 0 and the Universe resembles an initial singularity. We also have h = 0, for in zero size there is no uncertainty in the position of anything. The value of c approaches infinity. These initial conditions are extremely unstable, an initial extremum. A nearly infinite c caused the Universe to expand at an enormous rate, slowed by gravity.
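For what it’s worth, the scalings claimed in this comment do follow arithmetically from the stated relations R = ct, GM = tc^3, and hc = const (whether those relations describe our universe is another matter entirely). A quick numerical check, with GM and hc set to arbitrary constants and units ignored:

```python
# Taking the commenter's relations at face value (R = ct, GM = t*c^3,
# h*c = const) and checking the claimed scalings numerically.
# GM and the product h*c are arbitrary constants here; units are ignored.
GM = 1.0
HC = 1.0

def c_of(t):  return (GM / t) ** (1/3)   # c ~ t^(-1/3)
def R_of(t):  return c_of(t) * t         # R ~ t^(2/3)
def h_of(t):  return HC / c_of(t)        # h ~ t^(1/3)

t1, t2 = 1.0, 8.0
print(c_of(t2) / c_of(t1))  # 8^(-1/3) = 0.5  -> c falls
print(R_of(t2) / R_of(t1))  # 8^(2/3)  = 4.0  -> R grows
print(h_of(t2) / h_of(t1))  # 8^(1/3)  = 2.0  -> h grows
```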

Thoughtful replies are surprising and welcome!

“Thoughtful replies are surprising and welcome!”

“As t increases expansion slows and continues asymptotically to this day.”

Huh?

I disagree with the premise: “The lowest entropy state is one in which Boltzmann’s brain materializes out of the vacuum”. Let’s say that the brain consists of 10^27 nucleons. How many different ways are there to configure 10^27 nucleons? Something of order (10^27)!, right, modulo some sort of combinatorics? Only a tiny, tiny subset of those states actually turn out to be (or to self-assemble into) Boltzmann’s brain: other equally likely states are Boltzmann’s recently-deceased brain, five copies of Boltzmann’s medulla tied together with twine, a pot of arsenic-laced coffee, or a remaindered hardcover copy of “The Da Vinci Code”. Of course, a brain is *more likely* to emerge from a larger ensemble of nucleons—if you had 10^62 nucleons, for example, the chance of finding Boltzmann’s Brain would increase by a factor of 10^35 or so. (Regrettably, the chance of finding “The Da Vinci Code” also scales up. The chance of finding “The Da Vinci Code” micro-printed *on* Boltzmann’s Brain must not be ignored.)

But when you’ve got 10^62 nucleons in the universe, there’s a much-more-probable fate for the ensemble. 10^62 is the Jeans mass, the mass which (in our universe, with our Hubble constant and our fluctuation spectrum, etc.) is able to gravitationally self-collapse to form stars. Once you’re gravitationally collapsing, something in the neighborhood of 100% of available states lead to massive stars, and 100% of massive stars lead to supernovae, and 100% of supernovae produce heavy elements. Now, perhaps the odds of getting *planets* out of an arbitrary one-Jeans-mass collapse are low, but they’re not crazy-exponential-factorial low. So, by thermodynamics standards, we can say that an appreciable fraction of *all possible configurations* of 10^62 atoms results in planet formation. And—well, once you have planet formation, there’s no way to grab all of those nucleons and stop them from undertaking chemical reactions. At that point, evolution takes over, and it’s (again, in the sense of “it’s not forbidden to the tune of 10^23 factorial”) *reasonably likely* that intelligent life emerges.

So: let’s say that “entropy fluctuations” in the early Universe can produce a cool-ish blob of 10^62 atoms. Over the set of all possible states of this blob, *very, very (!) few* give rise to Boltzmann’s brain in the pop-into-existence scenario, while *a largeish fraction* give rise to intelligent life, including brains, thermodynamicists, and so on. Therefore, by the anthropic principle, the “average conscious observer” finds him or herself on a planet around a population-I or population-II star, in a universe of at least one Jeans mass, since this is a fairly typical result of random configurations of atoms.

Of course, this assumes that the laws of physics are fixed, and we’re just wondering where the original matter and entropy came from. Furthermore, if we need to assign a probability distribution to the size of the blob, then it’s not obvious that P(get 10^27) x P(make brain) is less than P(get 10^62) … ugh. Even logarithms don’t help me think about these numbers.

I was under the undoubtedly very naive impression that endless inflation scenarios explained this if successful inflationary regions are both somewhat initially ordered and expanding. Which Aaron’s commentary seems to suggest: “the initial conditions for inflation seem to require uniformity on scales larger than the horizon at the time”. The system doesn’t develop to the next stage until the right condition occurs.
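The factorial estimates above are at least tractable as logarithms: `math.lgamma(n + 1)` returns log(n!) as an ordinary float even for astronomically large n. A sketch (illustrative only; real state counting is of course far subtler than counting orderings of nucleons):

```python
import math

def log_factorial(n):
    """log(n!) via the log-gamma function; works for astronomically large n."""
    return math.lgamma(n + 1)

# Number of orderings of the nucleons in a brain-sized vs Jeans-mass blob.
# (Illustrative only -- not a real phase-space count.)
log_states_brain = log_factorial(1e27)   # ~ 6.1e28, i.e. 10^27! ~ 10^(2.7e28)
log_states_jeans = log_factorial(1e62)   # ~ 1.4e64

print(log_states_brain)
print(log_states_jeans)
```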

Re: huh?

Expansion and slowing can be predicted from the math. Asymptotically means that expansion will never slow to zero or reverse.

Jack,

Let me elaborate a bit on Aaron’s response, and then I’m going to see if I can get past the thermodynamic limit concern.

It is difficult to define an energy of the gravitational field or a potential energy of objects in a gravitational field unless the spacetime has certain special properties (asymptotic flatness or a timelike Killing vector field), properties that our Universe doesn’t have. You might think that even without those properties there must be an energy of the field, even if we don’t know how to write it down. You would be wrong. (Or you might not think that, and you would be right.) There is no energy of the gravitational field and there is no gravitational potential energy in general situations.

There is, in all situations, a local stress-energy-momentum tensor that is locally conserved (divergenceless). The stress-energy-momentum tensor does not have any terms from the gravitational field, and if you try to add such terms the conservation law is lost. The time-time component of this tensor is the local energy density, so energy density likewise has no contribution from the gravitational field. The “energy of the gravitational field” is a nonsense concept.

Even worse, the energy density described above cannot be used to generate a total energy of the universe that is conserved. Any integration to find the total energy is going to depend on the choice of coordinates at every point. Even if you pick natural coordinates for our universe, the total energy that you get isn’t conserved.

My intuition is that the story is about the same for entropy. There is a local entropy density, which may actually be just one component of something more complicated, which obeys a local “non-decreasing” law (the divergence is nonnegative, for example). However, I expect there is no way to make this into a global entropy. Actually this isn’t just my intuition, you can find more in “Gravitation” by Misner, Thorne and Wheeler in the sections on thermodynamics and hydrodynamics, especially problem 22.7 which will guide you through the derivation of an entropy four-vector and the associated local second law of thermodynamics. Good luck.

Aaron,

Does the above cover your concerns about an “entropy of the universe”? When I’ve seen people claim that the entropy of the early universe was small, I assumed that they meant that the local entropy per co-moving volume was small. (For spectators: the entropy density was large because it was hot, but the co-moving volume scales with the size of the universe, so it was much, much smaller in the early universe, leading to a small entropy per co-moving volume.)

Now that I’ve seen this discussion, it appears that some people do not mean local entropy per co-moving volume; they actually want to talk about the total entropy of the universe. Such a concept requires some serious explaining that I haven’t seen.

Since local entropy per co-moving volume does not include any gravitational terms, have I satisfied your concerns about the thermodynamic limit?

Gavin

Again we run into the problem that lots of things go by the name ‘entropy’ and they don’t always agree. There is a quantity, s, often, that shows up in fluid dynamics and the like that acts like a local entropy density. The problem is relating this to something statistical mechanical like the microcanonical or canonical entropy (which don’t necessarily agree, BTW). Even in situations where the thermodynamic limit does exist, my recollection is that the integral of this local quantity does not (except for an ideal gas, maybe) give you the correct global entropy.

Fluid mechanics works, of course. It’s a long jump from that to stuff like that in the original post, however.

On the other hand, thermodynamic reasoning has been rather successful in understanding black holes. When I sit down to think about it, I end up concluding that that success is really very mysterious. I feel like there’s something very important that it is telling us, and I have no idea what it is.

Sorry for being too busy to answer any comments since the original post. Traveling will do that. Briefly:

— Taking gravitational entropy into account is absolutely necessary, the whole point really. The entropy of our current universe is dominated by gravity. The entropy of all the matter and radiation within a Hubble volume is about 10^88, while the entropy of a single million-solar-mass black hole is 10^90, and there are many such black holes (one per large galaxy).
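Sean’s figure for a million-solar-mass black hole can be checked on the back of an envelope with the Bekenstein-Hawking formula S/k_B = 4πGM²/(ħc); the sketch below (constants in SI units) gives roughly 10^89, in order-of-magnitude agreement with the quoted value:

```python
import math

# Back-of-envelope check of the entropy of a 10^6 solar-mass black hole,
# using the Bekenstein-Hawking formula S/k_B = 4*pi*G*M^2 / (hbar*c).
G     = 6.674e-11     # m^3 kg^-1 s^-2
hbar  = 1.055e-34     # J s
c     = 2.998e8       # m/s
M_sun = 1.989e30      # kg

def bh_entropy(M):
    """Dimensionless entropy S/k_B of a Schwarzschild black hole of mass M (kg)."""
    return 4 * math.pi * G * M**2 / (hbar * c)

S = bh_entropy(1e6 * M_sun)
print(math.log10(S))   # ~ 89, i.e. S ~ 10^89 in units of k_B
```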

— Having said that, it’s certainly true that we don’t have a rigorous definition of the gravitational entropy, or even an understanding of what the degrees of freedom are, and both of those would certainly be nice to have. But even without them, it’s still clear that there is a problem. The early universe doesn’t look anything like the late universe. And more specifically, there aren’t that many ways to take the degrees of freedom in our current Hubble patch and arrange them into a tiny smooth region with GUT-scale energy density, while there are many ways to arrange them in a huge nearly-empty configuration. So it’s the early universe that is very fine-tuned.

— Inflation doesn’t solve the problem by itself, for the reasons just mentioned. The proto-inflationary state is incredibly finely-tuned; invoking inflation just begs the question.

— The closest thing to “thermal equilibrium” in the presence of gravity is *empty space* (which will be de Sitter if you have a positive vacuum energy). It would be exactly equilibrium if it weren’t for the possibility of baby-universe creation. How do we know? Any other configuration will evolve into something else, implying that it wasn’t in equilibrium. Alternatively, if you give me your favorite configuration of matter, I can increase its entropy by expanding space and scattering its constituents to the winds.

— Ben M, your starting point of a Jeans-mass cloud is nowhere near thermal equilibrium, so what arises from fluctuations therein is a very different question. It might seem pretty close to equilibrium, but gravity changes everything.

And Allyson — Green Lantern is awesome.

Louise said:

“Expansion and slowing can be predicted from the math. Asymptotically means that expansion will never slow to zero or reverse.”

No, not that “huh”… I’m fairly sure that we have independently supported observational evidence that exactly the opposite is true. That, “huh”.

Sean,

Please indulge the following gedanken experiment… Your response would be greatly appreciated.

Suppose at the next instant gravity were to “disappear”. Wouldn’t everything move toward a higher state of entropy (thermal equilibrium), as gravity was no longer holding things together? So doesn’t this pose somewhat of a paradox, in that the current state of the universe (including gravity, structure formation, etc.) has a much lower entropy than a universe without gravity?

So my query is: if the description above is correct, and removing gravity would raise the entropy of the universe, then why does gravity add to the entropy of the universe instead of lowering it?

Hopefully this makes sense. (even if incorrect)

Elliot

Sean,

I’ll be sure to include the entropy of black holes. We have a formula for it, so it won’t be any problem.

I could say exactly the same thing about the gravitational energy. In that case the reason we don’t have a rigorous definition is because, aside from special cases, the whole concept is nonsense. How do we know that gravitational entropy won’t meet the same fate? It seems to me quite likely that it will, and we will be left to look only at the local entropy density of the remaining, non-gravitational stuff.

Gavin

Elliot– according to the Second Law, things are always moving toward greater entropy, in accordance with the laws of physics. If you change the laws of physics (e.g. by removing gravity), everything changes, including what counts as a “high entropy configuration.” Without gravity, the number of degrees of freedom are different, as are the conservation laws.

It’s not true that “the universe would have a higher entropy if it weren’t for gravity.” It’s just that the process by which entropy increases looks different with and without gravity.

Gavin, it doesn’t matter. We know that the universe was in a very special state close to the Big Bang, because it rapidly evolved into something else. And nobody thinks that a re-collapsing universe would naturally smooth itself out during a Big Crunch. So why was it like that?

Awesomely lame. What kind of superhero is rendered incompetent by the color yellow? A two year old with a Crayola could take him down and steal his lunch money.

That was supposed to be impotent. Closely related but not the same.

Sean “according to the Second Law, things are always moving toward greater entropy, in accordance with the laws of physics. If you change the laws of physics (e.g. by removing gravity), everything changes, including what counts as a “high entropy configuration.” Without gravity, the number of degrees of freedom are different, as are the conservation laws.”

Accordingly: the Area of the future is larger than the present; the present Area is larger than the past.

The Area (expansion_energy) of the future should contain “less” energy signature than the past, as it is dispersed over a greater area?

Thus, Maximum Entropy of the Physical Universe configures, specifically over Time, and tends to a future direction resolution.

The only way for physical systems to lose energy (annihilation) is during a Phase Transition?

Even if the Physical Laws (1st, 2nd…) hold true “now”, there is more Entropy in the future in the form of potential conversion processes, namely Negative Energy?

Positive Energy is decreasing from the big-bang to “now” , and negative energy is increasing from “now” to the next critical “future” phase.

The geometric considerations for Gravity in a future “low_thermal_entropy” Universe are that it ends as a Wavefunction Collapse process, which neatly ties in with the initial early Condition?..

Question:How much ‘Anti-matter’ was lost during the early Universe phase transition, and is the total quantity responsible for the Arrow of Time?

I was going to put together a long reply to Aaron and Gavin, but luckily Sean said it better first. What I was trying to say was that difficulties about the definition of entropy in the presence of gravity, while real and interesting, don’t really matter to the arrow of time question. Clearly the early Universe was in an incredibly “special” state, and clearly this is the origin of the arrow of time. Whether this special initial state can justly be described by saying that “the entropy of the Universe was initially low” is beside the point. Actually, though, we do know how to measure the entropy of a black hole, which is not a “local” object in the relevant sense, so I would guess that problems defining “the entropy of the Universe” are not insuperable, provided that the Universe is spatially finite — which it obviously is.

Gavin: I agree entirely with everything you said about energy. I don’t see why those specific problems with the concept of “gravitational energy” should extend to entropy. There’s no such thing as “gravitational force” in GR, so gravitational energy is bound to be problematic. There’s no analogy with entropy.

I believe the multiple uses of the word entropy in both physics and information theory only add to the confusion in this arena. (at least for me)

Thanks for the response.

Elliot

Evidence from independent sources is that c has been slowing. Redshifts are the only evidence of cosmic acceleration, and these are related to c. This is, I understand, not a popular thing to say.

So, when the ratio v/c appears to accelerate, we are not seeing v increase but c decrease. The CMB supports this too. Has this not occurred to anyone?

O.K., at the risk of beating a dead horse… Sean, re: your response in #49, would that still hold if gravity was “dialed” up or down in strength instead of an on/off switch? If gravity was still there but weaker or stronger, the conservation laws and degrees of freedom would not change. But the entropy would vary with the strength of the gravitational field.

In information theory, entropy is strictly defined in terms of the compressibility of the information describing an ensemble. Maximum entropy means you need to describe each item individually, whereas lower entropy allows the number of bits describing the ensemble to be less than a list of every item.
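The compressibility definition can be made concrete: the empirical Shannon entropy of a byte string bounds how far it can be compressed per symbol, under a simple memoryless model. A minimal sketch (the function name is mine):

```python
import math
from collections import Counter

def shannon_entropy_bits(data: bytes) -> float:
    """Empirical Shannon entropy per symbol, in bits (single-byte model)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy_bits(b"aaaaaaaa"))          # 0.0  -- maximally compressible
print(shannon_entropy_bits(b"abababab"))          # 1.0  -- one bit per symbol
print(shannon_entropy_bits(bytes(range(256))))    # 8.0  -- incompressible bytes
```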

Sorry if I am pressing too hard on this but it seems there maybe something very fundamental here and I am just trying to understand.

Thanks,

Elliot

Yes, it would still hold — you’d still be able to increase the entropy of a system in a new way, namely by expanding the universe. The strength doesn’t really matter.

Sean,

Special? This only shows that the state is special if we believe that there is a second law of thermodynamics that applies close to the Big Bang, describing some total entropy of the universe. Since I am skeptical about a definition of total entropy that obeys the second law in this situation, the “special” claim seems like a matter of opinion.

The only thing I see special about the early universe is that it is expanding very rapidly. If I take a tiny piece of rapidly expanding universe and consider all the ways of putting some GUT temperature stuff into it, nearly all of them are smooth. It looks just like what I would see in a furnace set to “GUT.” Smooth does not seem special to me; smooth seems perfectly typical.

The state rapidly evolves into something else because it is, as I mentioned, expanding very rapidly. I don’t know why it is expanding rapidly, but that seems like a question we can at least define, and possibly answer. The question of why the starting state is “special” doesn’t even seem well defined.

Jack,

You don’t see any obstacles to showing that there is a definition of the total entropy of the universe that obeys the second law of thermodynamics. Maybe I am just not that smart, but I see a lot of trouble ahead. However, I’ve learned never to say something can’t be done, so I’ll look for you on the arXiv. Until then we have a difference of opinion, and it seems premature to use second law arguments that may not apply to discuss an entropy that may not exist. (Note: black hole entropy is treated in spaces with asymptotic symmetries, something our universe doesn’t have. In these cases gravitational energy makes good sense as well.)

This is all very relevant to the arrow of time question. The universe could have started in a rapidly expanding but otherwise very typical state. The rapid expansion throws it out of equilibrium, leading to an opportunity for increase in the local entropy density everywhere, leading to an arrow of time everywhere. No low-entropy, fine-tuned, very special starting point required.

Gavin

For an individual, life starts with a cell, and then it multiplies while it adapts to the environment. A being is an adaptive semi-autonomous system. It works with feedback. Schrödinger’s Equation also has a feedback aspect to it;

i*hbar*(d/dt)*Psi = H*Psi

The universe also seems to have started small. Maybe we are making a mistake by trying to include everything in the beginning into a very small container. If the universe is a being, it would have started from an embryo and eaten its environment to accumulate entropy.

Consider the mapping of M6 classification of sciences to aspects of being and to elementary particles;

0 Mathematics … Measure … Higgs

1 Physics … Move … Light

2 Chemistry … Feed … Weak

3 Biology … Sense … Quark

4 Psychology … Feel … Lepton

5 Intelligence … Think … Space-Time

6 Aesthetics … Love … Mass-Gravity

I’d like to follow up on a criticism of Sean’s work raised, very sportingly, by Sean himself. Namely: if the Universe was born as a baby universe from some pre-existing de Sitter space, then why does it contain more than one galaxy? The Boltzmann’s Baby Paradox… Is there any known way of attacking this problem?

There are only 7 natural objects with atomic symmetry;

0 Higgs

1 Atom

2 Cell

3 Planet

4 Star

5 Galaxy

6 Universe

Assuming Higgs level would also be quantised. Maybe the reason for another level after galaxy is that this atom fractal obeys the same overall symmetry of elementary categories.

Sean:

“The proto-inflationary state is incredibly finely-tuned; invoking inflation just begs the question.”

I don’t see that it needs to be question-begging.

If inflation needs a special initial state to occur, for example for baby-universe creation, endless inflation will impose that state on each new universe or region that succeeds in starting to inflate. The other states will be sorted out by the failure. (It seems to me to be like a Maxwellian demon who opens a gate when a state is acceptable; except that the “inflationary demon” is without memory.)

Aaron’s commentary seems to suggest such a special state: “the initial conditions for inflation seem to require uniformity on scales larger than the horizon at the time”.

I’ve been thinking about this concept for some time, but I had no idea anyone else was. I took what I read in “Brief History of Time” and ran with it. I feel less crazy for reading this, though I’m not sure how much I agree with. I do agree that there need not be a corresponding crunch for the big bang; nor must there be a corresponding bang if it’s really a crunch.

Questions: Does the expansion of the universe affect the rate at which time proceeds? If there is a universal cycle (hypothetically) from bang to crunch, with a corresponding increase and then decrease in entropy, observers on either side of the maximum would perceive time proceeding towards that maximum, no? And “at” the maximum, would the perception of time stop? More accurately, as one approached from either side, would time slow down such that you could never reach the maximum (sort of like falling into a black hole)? Because if so, how can information travel from one side of the max. to the other? And wouldn’t this create the perception of a constantly expanding universe (*very* simply, rate of expansion *r* over time *t* as *t* goes to zero remains positive; my calculus is beyond rusty, though, which is why I’m asking)? Time for our observers wouldn’t actually slow, though, would it? They’d still see it as one second per second, so how would they know?

Can the unidirectionality of gravity have any relation to the arrow of time? In a time-reversed view, gravity is always repulsive, right? I’ve heard of lab experiments in “anti-gravity” with superconductors, but from what I’ve read, they have more to do with cancelling the effects of gravity, or shielding objects from them.

I’ll have to think out loud more on this in my livejournal.

To answer N@: Expansion of Space is indistinguishable from the forward flow of time. There is no known function for t; perhaps one of you smart people will someday figure it out. No special initial state is necessary, at all times we have R = ct and GM = tc^3.

You want a micro-perspective view, yet, one would “not” include gravity?

Put away the “Monte Carlo” methods and try to explain the nature of quantum realities?? :)


Hi Louise

I liked the wombat picture! Since Island managed to say *huh* twice without thinking about it, I thought I might add a few words.

In QG we don’t like to think of *the* universe as something fixed and objective. There is no universal observer – except the universe itself in some sense. However, if one asks what it would be like to *observe a universe* (or rather, take a large number of observations that resemble the classical universe), then the identification of time’s arrow with entropy makes perfect sense, as you say. To put it another way: the estimation of cosmological epoch is, like anything, a *local* measurement. A different class of observer in our present on Earth is quite capable of perceiving itself to inhabit a different epoch, and there is no contradiction here because the universe is *not* something that’s out there somewhere. The implementation of Mach’s principle cannot be achieved with such a classical view of spacetime observables.

Nice to know the data agrees, heh?

Louise, by “independently supported”, I meant that the kinematic interpretation of the SN Ia sample provided strong evidence that there was a transition between deceleration and acceleration that they called a “cosmic jerk”.

And then there’s dark energy…

Otherwise, Eddington lives!… 😉

They never learn.

Sorry, layman thinking. Scratch, scratch…. Is the below no good for a “time variable measure”?

Since you mention it, Island, GM=tc^3 also explains Eddington’s large-number hypothesis. One could question the wisdom of calling something a “cosmic jerk,” naming a “dark energy” after what Sith Lords peddled, or christening “Concordance” cosmology after a plane that crashed and doesn’t fly anymore.

However, it is time to question the “independently supported” mantra. Evidence of cosmic acceleration was published simultaneously by two groups in 1998. One group was headed by Saul Perlmutter of LBL. The other lead author was Adam Riess of Berkeley, whose office was only 500 m from Perlmutter’s. Both groups looked at the same phenomenon, redshifts of Type Ia supernovae. The “independence” was that Perlmutter was head of one group and Riess part of another.

Redshifts are the only evidence of cosmic acceleration. The CMB says nothing about acceleration. In fact, the CMB is good evidence that c has changed. Average temperature is the same over large parts of the sky, indicating that large regions were in causal contact. Even at the time of recombination 300,000 years after the Big Bang, c was much greater.

Additional evidence comes from Active Galactic Nuclei. These massive primordial Black Holes are observed to have formed shortly after the Big Bang. This could only occur if the horizon distance determined by c was much greater. There is even more precise evidence from a nearby star.

If one knows how to read a graph, it can be seen that GM = tc^3 precisely predicts redshifts of Type Ia supernovae, even in the transitional “jerk” period. All this has been shown without inferring repulsive ‘dark’ energies.

Kea, you are right, they never learn. It is good that QG is now considered a subject of research. True independence comes from knowing the data and reaching our own conclusions rather than repeating the herd. Someone has to be the child saying that the emperor has no clothes. (A physicist would say that they were dark energy clothes!)

Okay Louise, I give, but I think that you meant Dirac’s Large Numbers Hypothesis, which I am actually a bigger fan of than you might think, for its deep relevance to the anthropic principle. Maybe I’m missing something obvious, but I can’t find anything wrong with your idea, except that it takes the much-hated fine-tuning issue to a new extreme. So, yeah, popularity is everything and you’re screwed.

I don’t find causality in your model though, so I’m guessing that you’re going with a random quantum fluctuation to get all that negative pressure to appear from nowhere.

In which case, I don’t think that it’s going to be necessarily preferred when all gets said and done.

Brian Greene wrote a terrible book, with a chapter dedicated to this same topic! Shame!

The reasoning couldn’t be more flawed! Coming from a brilliant mind, it hurts!

Please don’t let smart people write stupid books on intriguing ideas!

The anthropic coincidences mostly occur over a fine “range” of potential, and nobody really understands what this range is for if the forces are “set” as illustrated in the physics lecture on the anthropic principle at this website:

http://abyss.uoregon.edu/~js/images/instability.gif

In other words the coincidences should be set-up exactly balanced as this is idealistically depicted in the illustration, where “any” perturbation causes the runaway effect, (like runaway expansion). This is not true, however, if something causes the pencil to lean back to the left after something causes it to lean to the right, where anthropic selection does indeed still occur at an exact balance point.

As I understand this, the “range of potential” increases in one direction only as the universe “ages”, so anthropic selection remains fixed between whatever relevant extreme runaway tendencies are involved in the given coincidence, but that ideal location slides progressively forward with time in order to remain between a uni-directionally increasing range of potential.

mqk writes:

We don’t need to assume the system is “collisionless” – Liouville’s theorem applies to any classical system described by a Hamiltonian on a phase space that’s a symplectic manifold.

(If you don’t know what all that jargonesque fine print means, just pretend I said “any classical system”. The fine print is designed to rule out certain funky classical systems you might rather not know about.)

Nope: the “entropy conservation” theorem you mention has an equally general quantum version. Whenever the space of states is a Hilbert space and time evolution is given by a unitary operator, the von Neumann entropy -tr(D ln D) of a density matrix D is conserved.
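That conservation is easy to verify numerically. Here is a minimal sketch (mine, not from the thread) that evolves a random density matrix with a random unitary and checks that the von Neumann entropy doesn’t budge:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(n):
    # A random positive matrix with unit trace is a valid density matrix.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    d = a @ a.conj().T
    return d / np.trace(d).real

def von_neumann_entropy(d):
    # S = -tr(D ln D), computed from the eigenvalues of the Hermitian D.
    evals = np.linalg.eigvalsh(d)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def unitary_from_hamiltonian(h, t=1.0):
    # U = exp(-iHt), built from the eigendecomposition of the Hermitian H.
    energies, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * energies * t)) @ v.conj().T

n = 4
d0 = random_density_matrix(n)
h = rng.normal(size=(n, n))
h = h + h.T  # make the Hamiltonian Hermitian (real symmetric here)
u = unitary_from_hamiltonian(h)
d1 = u @ d0 @ u.conj().T  # unitary evolution of the density matrix

# The difference is floating-point noise: entropy is conserved.
print(abs(von_neumann_entropy(d0) - von_neumann_entropy(d1)))
```

Coarse-graining (throwing away off-diagonal information, as in the H-theorem) is exactly what breaks this exact conservation.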

So, as you hint, to get entropy to increase, most people switch to a coarse-grained definition of entropy, and make certain assumptions on the initial state of the system. This is what Ludwig Boltzmann did in his marvelous H-theorem!

But, the H-theorem is not the last word on the subject. We need to see if we can justify the assumptions of this theorem – in particular, the Stosszahlansatz, or “assumption of molecular chaos”. And that’s where the real fun starts….

So, the arrow of time is a subtle thing even before we take gravity into account. But in our universe, gravity makes it a lot more subtle.

Island, Dirac’s Large Numbers Hypothesis is fascinating. Eddington used a similar basis for his Fundamental Theory. A nearly infinite c can start a Big Bang without the usual need for negative pressure.

Dirac’s Large Numbers Hypothesis is also resolved when you increase both the matter density and negative pressure in disproportionately equal “see-saw” fashion, per the physics that I previously gave here:

http://blogs.discovermagazine.com/cosmicvariance/2006/08/01/boltzmanns-anthropic-brain#comment-109844

So the volume of the vacuum is currently about 120 orders of magnitude greater than one particle in every volume equal to the Compton wavelength of the particle cubed. As Dirac suspected, this means that the size of the universe is directly proportional to the number of particles in it, because in Einstein’s static model if you condense vacuum energy, then you necessarily increase negative energy and pressure as well, by way of rarefaction, so the vacuum necessarily expands during pair production.

The off-set increase in both mass-energy and negative pressure means that an expanding universe is not unstable, nor will it “run-away”, because Dr. Einstein’s equation…

g=(4pi/3)G(rho(matter)-2rho(vacuum))R=0

… works just fine with vacuum expansion, while at the same time repairing the gravitational flaw in Dirac’s Large Numbers Hypothesis when both particles in the pair leave real holes in the vacuum.

The gravitational acceleration is zero if the density of the static vacuum is -0.5*rho(matter) because,

rho+3P/c^2=0

If you condense enough of this vacuum energy over a finite region of space to achieve positive matter density, then the local increase in mass-energy is immediately offset by the increase in negative pressure that occurs via the rarefying effect that real particle creation has on the vacuum.

That means that created particles have positive mass, regardless of sign, and this resolves a very important failure of particle theory, because it explains how and why there is no contradiction with the asymmetry that appears to exist between matter and antimatter. This is the reason that we don’t observe nearly as much antimatter as particle theory predicts exists: the energy that comprises antimatter normally exists in a more rarefied state than observed antiparticles do.

Of course, this requires that we dump a lot of assumptions that are commonly taken for granted about a quantum gravity theory that doesn’t even exist, contrary to the opinion of your infinity-worshiping buddy… 😉

In QG we don’t like to think of the universe as something fixed and objective.

Give me a freaking break!

Correction:

The off-set increase in both mass-energy and negative pressure

Should have read:

The off-set increase in both the matter density (positive gravitational curvature) and negative pressure…

Mass-energy remains constant, of course.

Give me a freaking break!

I am quite keen to hear your explanation of how one can have both (a) a theory that operates with a single objective external universe roughly independent of one’s observations, and (b) a theory with well-defined quantum spacetime numbers whose corresponding observables cannot all be tied to a universal observer. Moreover, the theory must of course be capable of recovering both GR and a rigorous formulation of the standard model.

It would indeed be foolish to deny the existence of the elephant, but that is not what I meant. You have your part of the elephant here and I have mine here, and there is no way that either of these is the whole elephant.

I’ll make a deal with you, Kea, since I personally cannot do the following; maybe you can, and that will settle this once and for all for me. YOU have everything to gain (in a really big way), and I have nothing to gain or lose, except this curse:

In Einstein’s static model, G=0 when there is no matter.

He initially added the cosmological constant to balance things out, because we do have matter, but you have to condense the matter density from the zero-pressure metric in order to get rho>0 from Einstein’s matter-less spacetime structure, and in doing so the pressure of the vacuum necessarily becomes less than zero, P&lt;0. Einstein has never been proven wrong. He simply didn’t know something that quite obviously does justify his argument that the universe is finite, even though it is expanding, and it will not run away, so there was no logical reason for him to abandon this model, given what is now obvious and factually verified information about the particle potential of the quantum vacuum.

Dirac’s Hole Theory works (without need for a reinterpretation of the negative energy states) to hold this model stable and “flat” as the universe expands, because particle creation becomes the mechanism for expansion when the normal distribution of negative energy does not contribute to particle pair creation, which can only occur in this vacuum by way of the condensation of negative pressure energy into isolated departures from the normal background energy density.

So it is my strong suspicion that the Dirac equation will work in this background to unify GR and QM in the exact same manner that it did SR and QM.

Write down the basis of wave functions in this background, including an expansion of the field in corresponding creation and annihilation operators – compute the stress-energy tensor in that background – quantitatively describe the vacua – and then work out the matrix elements of the stress-energy tensor between the vacuum and the one-particle states and see what happens.

Layman scratches head again and again…

If the conditions are found to be inherent in high-energy physics (QGP), then how would such a condition run counter-intuitively to what curvature had implied?

A “state of equilibrium” in a highly curved world?

Georgi Dvali: A “determinism” at the Planck scale?

Hi Island. I’m not really sure what the contention is here. You bring up an interesting point.

In Einstein’s static model, G=0 when there is no matter.

Rather, G=0 when there is no matter density, no mass generation, just as G->0 in the new class of massless spin foam QFTs. The use of a so-called (LQG) cosmological constant to ‘perturb’ about this point doesn’t mean taking Lambda literally in the classical theory. On the contrary, this sort of perturbation appears to destroy the validity of GR at large scales.

In the Cornell thread you recently pointed out that

In the static state, pressure is proportional to -rho…

…which, again, is a kind of topological condition in the spin foam constructions.

Dirac’s Hole Theory works to hold this model stable and “flat”…

It turns out that we need a better physical picture than this to get everything to work, so this is where we begin to disagree. ‘Flatness’ should be a direct result of a Machian principle, which of course was never implemented in GR. But any naive attempt to consider the mutual dependence of local acceleration and ‘distant’ stuff runs into a problem analogous to that of instantaneous action in Newtonian gravity, so something has to give.

A determinism at the Planck scale?

Determinism here means that the Planck scale itself goes to zero as hbar does: L^2 = G(hbar)/c^3, which means that L goes like c^(-2) in Louise’s picture. In Padmanabhan’s thermodynamic gravity the Euler equation sees horizon-area entropy, in that S = (A/4)L^2 = (A/4)c^(-4).

The point being, of course, that the RHS is a time parameter.
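For reference, the Planck-length formula quoted above can be evaluated with standard constants; this sketch (mine, not from the thread) involves nothing model-dependent:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

# L^2 = G*hbar/c^3, the expression in the comment above.
L = math.sqrt(G * hbar / c**3)
print(L)  # on the order of 1.6e-35 m, the Planck length
```

Setting hbar -> 0 in this expression does send L -> 0, which is the “determinism” limit the comment is describing.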

I realise that Sean is travelling, but it would be really good if somebody would weed out all the obvious psychoceramistry in this thread — I nearly missed John Baez’s comment because of it. Worse, it killed off the serious discussion that was going on here.

Sean – thanks for the post. I could follow quite a bit of it! And the best bit for me was your explanation of why arguing from anthropic principles isn’t as simple as it at first seems when you hear of the idea (I mean, we’re here, aren’t we?)

now, as for following the comments …

cheers


Wow – I hadn’t been paying attention to this thread. Since I’m not sure if Sean is able to look at it currently, I’m going to ask commenters myself to please stay on topic. I know some of you have your own ideas about a number of different areas of physics, but the best place for discussing those is on your own blogs, if you have them. If you don’t, then you could start one. Please don’t get into them here. Thanks.

Maybe you have the Boltzmann brain the wrong way round. It’s not “hey, I exist”, but “Hey! How come I never existed?”, when the only real probability shows that I did.

I think all the posts have been on topic. Unless we’re only allowed to talk about Boltzmann and Price, in which case I apologise.

No need at all to apologize Kea. I’m referring to the repeated discussions of people’s own, personal “theories”. They come up on many of our posts and we usually try to stop them derailing the discussion. Cheers,

John Baez:

Was the universe ever really flat? Sorry for layman generalizations; with an increase in curvature, the quantum dynamical view seems consistent with it, up to a point?

While these seem like “simple generalized deductions”, I think one must want to have current “experimental development” carrying the thought processes along some road currently being explored.

While one talks about “reductionistic processes”, we are still referring to the universe as it was developing along the microseconds, “still” within the capability of the universe in expression.

Not pet theories. Strominger’s theory(?) perhaps along with the basis of particle creation “pointing” the way towards entropic complexity and expansion, in the resulting particle showers?

Does this all fit?

okay,

If you allow “Monte Carlo” methods, then I suggest the valuation of “Boltzmann’s brain” held a time of “illumination” and supersymmetrically explained the universe in expression?

“Riemann hypothesis” has to have some (phenom) validation process?

So ya, here is one way to occupy your mind while explaining supersymmetry?

Pingback: Rapped on the Head by Creationists | Cosmic Variance

The foregoing seems to assume that the laws of physics are time-reversible. It has always seemed to me that both quantum mechanics (at least in the Copenhagen interpretation) and general relativity are not. Wavefunction collapse can’t be time-reversible. And what about black holes? Matter can fall in but can’t escape? If GR were time-reversible then under suitable initial conditions matter could be ejected from a black hole (and I’m not talking about Hawking radiation).

(my personal opinion is that wavefunction collapse doesn’t occur and black holes don’t exist, but I only studied physics up to 2nd year of university, so please correct any misunderstandings I’ve made)

Hugh, general relativity is definitely time-reversible, although specific solutions might not be. The time-reversed version of a black hole is a white hole, which is a perfectly good solution to Einstein’s equation. We don’t see white holes in the real world, but that’s precisely because of entropy considerations.

I think the same is true for quantum mechanics, but will readily admit that I don’t understand the details and might be wrong. Wavefunction collapse a la the Copenhagen interpretation is definitely not reversible, although evolution according to the Schrodinger equation definitely is. My suspicion is that a more complete understanding will be able to derive the apparent collapse of the wavefunction from ordinary Schrodinger evolution plus thermodynamic considerations, but I don’t think this is well understood right now.
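The reversibility of Schrodinger evolution mentioned above can be illustrated in a few lines (a sketch of mine, not part of the thread): evolving a state forward with U = exp(-iHt) and then applying U† recovers the initial state exactly, up to floating-point noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random Hermitian Hamiltonian and the corresponding unitary U = exp(-iHt).
n = 6
h = rng.normal(size=(n, n))
h = (h + h.T) / 2
energies, v = np.linalg.eigh(h)
u = v @ np.diag(np.exp(-1j * energies)) @ v.conj().T  # t = 1

# A random normalized state vector.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

psi_forward = u @ psi                # evolve forward in time
psi_back = u.conj().T @ psi_forward  # apply U^dagger: run the clock backward

print(np.linalg.norm(psi_back - psi))  # essentially zero: the evolution is reversible
```

Collapse, by contrast, is a many-to-one map (different pre-measurement states can yield the same outcome), so no such inverse exists for it.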

Time-reversed scenarios/equations must take into consideration some fundamental processes?

What happens for a reversed black hole process, a white hole? Could it occur without dark matter becoming visible? Dark matter would actually have to be the radiative source of visible matter.

Einstein’s field equations have an expression for dark matter to convert to light matter; entropy in a time-reversed universe would insist that particle collisions become more feeble and less energetic, producing less visible light from atomic interactions. Light would tend to be fading to grey!

The further one travels back along a “time-reversed” arrow, the more one becomes entangled into a “ONE-PARTICLE” quark soup?

The initial state may be comparable to that of a Bose–Einstein condensate singularity? A “one-particle” fluctuation would really be an “all-particle” fluctuation!

Pingback: Coast to Coast | Cosmic Variance

Pingback: Arbitrary Chronological Signifiers | Cosmic Variance

Pingback: Exploding Aardvark » WHICH IS MORE KNOWING: GOD OR A ROCK?