Certain subsectors of the scientifically-oriented blogosphere are abuzz — abuzz, I say! — about this new presentation on Dark Energy at the Hubblesite. It’s slickly done, and worth checking out, although be warned that a deep voice redolent with mystery will commence speaking as soon as you open the page.

But Ryan Michney at Topography of Ignorance puts his finger on the important thing here, the opening teaser text:

Scientists have found an unexplained force that is changing our universe,

forcing galaxies farther and farther apart,

stretching the very fabric of space faster and faster.

If unchecked, this mystery force could be the death of the universe,

tearing even its atoms apart. We call this force dark energy.

Scary! Also, wrong. Not the part about “tearing even its atoms apart,” an allusion to the Big Rip. That’s annoying, because a Big Rip is an extremely unlikely future for a universe even if it is dominated by dark energy, yet people can’t stop putting the idea front and center because it’s provocative. Annoying, but not wrong.

The wrong part is referring to dark energy as a “force,” which it’s not. At least since Isaac Newton, we’ve had a pretty clear idea about the distinction between “stuff” and the forces that act on that stuff. The usual story in physics is that our ideas become increasingly general and sophisticated, and distinctions that were once clear-cut might end up being altered or completely irrelevant. However, the stuff/force distinction has continued to be useful, even as relativity has broadened our definition of “stuff” to include all forms of matter and energy. Indeed, quantum field theory implies that the ingredients of a four-dimensional universe are divided neatly into two types: fermions, which cannot pile on top of each other due to the exclusion principle, and bosons, which can. That’s extremely close to the stuff/force distinction, and indeed we tend to associate the known bosonic fields — gravity, electromagnetism, gluons, and weak vector bosons — with the “forces of nature.” Personally I like to count the Higgs boson as a fifth force rather than a new matter particle, but that’s just because I’m especially fastidious. The well-defined fermion/boson distinction is not precisely equivalent to the more casual stuff/force distinction, because relativity teaches us that the bosonic “force fields” are also sources for the forces themselves. But we think we know the difference between a force and the stuff that is acting as its source.

Anyway, that last paragraph got a bit out of control, but the point remains: you have stuff, and you have forces. And dark energy is definitely “stuff.” It’s not a new force. (There might be a force associated with it, if the dark energy is a light scalar field, but that force is so weak that it’s not been detected, and certainly isn’t responsible for the acceleration of the universe.) In fact, the relevant force is a pretty old one — gravity! Cosmologists consider all kinds of crazy ideas in their efforts to account for dark energy, but in all the sensible theories I’ve heard of, it’s gravity that is the operative force. The dark energy is *causing* a gravitational field, and an interesting kind of field that causes distant objects to appear to accelerate away from us rather than toward us, but it’s definitely gravity that is doing the forcing here.

Is this a distinction worth making, or just something to kvetch about while we pat ourselves on the back for being smart scientists, misunderstood once again by those hacks in the PR department? I think it is worth making. One of the big obstacles to successfully explaining modern physics to a broad audience is that the English language wasn’t made with physics in mind. How could it have been, when many of the physical concepts weren’t yet invented? Sometimes we invent brand new words to describe new ideas in science, but often we re-purpose existing words to describe concepts for which they originally weren’t intended. It’s understandably confusing, and it’s the least we can do to be careful about how we use the words. One person says “there are four forces of nature…” and another says “we’ve discovered a new force, dark energy…”, and you could hardly blame someone who is paying attention for turning around and asking “Does that mean we have five forces now?” And you’d have to explain “No, we didn’t mean that…” Why not just get it right the first time?

Sometimes the re-purposed meanings are so deeply embedded that we forget they could mean anything different. Anyone who has spoken about “energy” or “dimensions” to a non-specialist audience has come across this language barrier. Just recently it was finally beaten into me how bad “dark” is for describing “dark matter” and “dark energy.” What we mean by “dark” in these cases is “completely transparent to light.” To your average non-physicist, it turns out, “dark” might mean “completely absorbs light.” Which is the opposite! Who knew? That’s why I prefer calling it “smooth tension,” which sounds more Barry White than Public Enemy.

What I would really like to get rid of is any discussion of “negative pressure.” The important thing about dark energy is that it’s *persistent* — the density (energy per cubic centimeter) remains roughly constant, even as the universe expands. Therefore, according to general relativity, it imparts a perpetual impulse to the expansion of the universe, not one that gradually dilutes away. A constant density leads to a constant expansion rate, which means that the time it takes the universe to double in size is a constant. But if the universe doubles in size every ten billion years or so, what we *see* is distant galaxies accelerating away — first they are *X* parsecs away, then they are 2*X* parsecs away, then 4*X* parsecs away, then 8*X*, etc. The distance grows faster and faster, which we observe as acceleration.
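The doubling arithmetic can be sketched in a couple of lines (a toy illustration of the example above; the ten-billion-year doubling time is just the round number used in the text):

```python
# Toy model of constant-rate expansion: the universe doubles in size
# every fixed interval, so the distance to a comoving galaxy grows
# geometrically -- X, 2X, 4X, 8X -- which we observe as acceleration.
# All numbers are illustrative, not measured values.

def distance_after(initial_distance, doubling_time, elapsed_time):
    """Distance to a comoving galaxy if the universe doubles in size
    every `doubling_time` (i.e., exponential expansion)."""
    return initial_distance * 2.0 ** (elapsed_time / doubling_time)

# Doubling every ~10 billion years, distances in units of X,
# times in billions of years:
snapshots = [distance_after(1.0, 10.0, t) for t in (0, 10, 20, 30)]
print(snapshots)  # [1.0, 2.0, 4.0, 8.0]
```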

That all makes a sort of sense, and never once did we mention “negative pressure.” But it’s nevertheless true that, in general relativity, there is a relationship between the pressure of a substance and the rate at which its density dilutes away as the universe expands: the more (positive) pressure, the faster it dilutes away. To indulge in a bit of equationry, imagine that the energy density dilutes away as a function of the scale factor as *R*^{-n}. So for matter, whose density just goes down as the volume goes up, *n* = 3. For a cosmological constant, which doesn’t dilute away at all, *n* = 0. Now let’s call the ratio of the pressure to the density *w*, so that matter (which has no pressure) has *w* = 0 and the cosmological constant (with pressure equal and opposite to its density) has *w* = -1. In fact, there is a perfectly lockstep relation between the two quantities:

*n* = 3(*w* + 1).

Measuring, or putting limits on, one quantity is precisely equivalent to measuring, or putting limits on, the other; it’s just a matter of your own preferences how you might want to cast your results.
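For concreteness, the dictionary between *n* and *w* is short enough to write down as code (a sketch of the relation above, nothing more):

```python
# The relation n = 3(w + 1) between the dilution exponent n
# (energy density ~ R^{-n}) and the equation-of-state parameter w = p/e.

def n_from_w(w):
    """Dilution exponent from the equation-of-state parameter."""
    return 3.0 * (w + 1.0)

def w_from_n(n):
    """Equation-of-state parameter from the dilution exponent."""
    return n / 3.0 - 1.0

assert n_from_w(0.0) == 3.0    # matter: no pressure, dilutes with volume
assert n_from_w(-1.0) == 0.0   # cosmological constant: no dilution at all
assert abs(n_from_w(1.0/3.0) - 4.0) < 1e-12  # radiation, for comparison
```

The two functions are inverses, which is the point: constraining one parameter is exactly constraining the other.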

To me, the parameter *n* describing how the density evolves is easy to understand and has a straightforward relationship to how the universe expands, which is what we are actually measuring. The parameter *w* describing the relationship of pressure to energy density is a bit abstract. Certainly, if you haven’t studied general relativity, it’s not at all clear why the pressure should have anything to do with how the universe expands. (Although it does, of course; we’re not debating right and wrong, just how to most clearly translate the physics into English.) But talking about negative pressure is a quick and dirty way to convey the *illusion* of understanding. The usual legerdemain goes like this: “Gravity feels both energy density and pressure. So negative pressure is kind of like anti-gravity, pushing things apart rather than pulling them together.” Which is completely true, as far as it goes. But if you think about it just a little bit, you start asking what the effect of a “negative pressure” should really be. Doesn’t ordinary positive pressure, after all, tend to push things apart? So shouldn’t negative pressure pull them together? Then you have to apologize and explain that the actual force of this negative pressure can’t be felt at all, since it’s equal in magnitude in every direction, and it’s only the indirect gravitational effect of the negative pressure that is being measured. All true, but not nearly as enlightening as leaving the concept behind altogether.

But I fear we are stuck with it. Cosmologists talk about negative pressure and *w* all the time, even though it’s confusing and ultimately not what we are measuring anyway. Once I put into motion my nefarious scheme to overthrow the scientific establishment and have myself crowned Emperor of Cosmology, rest assured that instituting a sensible system of nomenclature will be one of my very first acts as sovereign.

If I define a force as anything that causes a (macroscopic, non-quantum mechanical) test particle not to follow the geodesics of the geometry of space-time, is the “Dark, Misleading Force” a force on that definition? If not, why not? Or do you find this definition misleading in this cosmological context?

It’s certainly not a force by that definition. Galaxies are following geodesics.

I love your description of a “smooth tension”, because, to me, it seems similar to surface tension, like watching dish soap coalesce grease into a perfect circle in your dishpan.

As stated previously, I believe that there is only one force – entropy, which reveals itself nicely in a “smooth tension” / surface tension sort of way. The clusters of galaxies and the “fibers” in between with large “bubbles” of empty space look so much like the interaction of tensions with entropy determining it all.

So that’s why my mind got stuck when I was reading that sentence… Anyway, that also explains why dark energy is never called dark force (I asked this in Bad Astronomy).

Re my 1 and 2: So dark matter makes the geodesic geometry different from what it would be if there were no dark matter. What would be wrong with saying, on a local level, that the “dark force” on a test particle is proportional to the mass of the test particle? Or test galaxy? Alternatively, hedging in a slightly cute way, that the dark force is as much a force as gravity is?

Saying that gravity is a force is an empirically effective way of talking about local physics — to pretty good accuracy at most human scales — so presumably we could equally well talk about the dark force, provided we define the empirical scale we are talking about to be in the galactic cluster range, but we could not talk about a dark force on global, universal scales?

I suppose that this may run up against realist ontological objections from some that geometry is “real” but gravitational force is not. I would have to more-or-less agree with that objection because of the advance of the perihelion of Mercury and the other experimental tests of GR. At what scale is an experimentally based objection against thinking in terms of a dark force very clear?

Peter Morgan,

What we typically mean by a new force is a new interaction between two test particles. That is, if we took, say, two protons and put them near one another, then they would primarily have a gravitational attraction and an electrostatic repulsion. The gravitational attraction is mediated by an exchange of gravitons, while the electrostatic repulsion is mediated by the exchange of photons.

A “fifth force” would be something that acts in much the same way: as some function of distance, it would contribute some amount to the attraction or repulsion that any two test particles feel. It would couple to its own sort of charge as well, and thus would not, in general, behave identically to any known force (if it did behave identically, then it could never be disentangled from a known force: for example, if it behaved just like gravity then we might merely treat it as being a different species of graviton, for example).

Dark energy isn’t anything like this (neither is dark matter). It’s not, so far as we know, some new particle that is mediating some new attraction or repulsion between two test particles. Instead, it appears to be just some different stuff that is sitting around, and which interacts, through gravity, with other particles. Another way of looking at it is instead of having our two test particles have some new attraction or repulsion mediated by some new, unknown particle, we have some number of test particles that we can detect through more direct means (e.g. photons, protons, electrons) interacting with other test particles that we cannot see directly. It’s very much akin to how we know about neutrinos: neutrinos are, for all intents and purposes, completely invisible, but they are produced in weak interactions and cause weak interactions to occur. As a result, we can detect their influence by observing the behavior of test particles that we can observe.

The same is the case for dark matter and dark energy: it’s not the same stuff interacting in different ways, but new stuff that we can only observe indirectly.

Sean, if DE is a cosmological constant couldn’t you choose to absorb it into the left-hand side of Einstein’s equation rather than put it in the stress-energy tensor? Equivalently, in the Newtonian limit can’t you absorb the Lambda term into a redefinition of the potential? And if you do so, isn’t it OK to talk about it as some inherent part of the gravitational “force”, a part that is “unexplained” and a “mystery”? None of the text you quoted really claims that it’s a new fifth force.

And are you aware of some new evidence that dark energy really is “stuff” rather than a modification to gravity or something? Saying “dark energy is definitely stuff” sounds at least as misleading to me as “dark energy is an unexplained force”.

I’m totally with you about “negative pressure”, though. Even among scientists it seems to create way more misunderstanding than it should.

BG, it doesn’t matter whether you put the cosmological constant on the left-hand side or the right-hand side of the equation; it’s still the vacuum energy. The reason why the universe is accelerating might very well be a modification of general relativity, but it’s still gravity, not a new force. (Gravity didn’t stop being gravity when GR replaced Newton’s theory, for example.) In that case you should just say that dark energy doesn’t exist, which is not what they were getting at.

Sean, this is a very helpful comment and I need to be more careful about this terminology myself. A few questions for the fastidious:

1. Is it correct to say that astronomers discovered dark energy, be it stuff or force? They discovered acceleration and have attributed it to dark energy, among other possibilities that you yourself have talked about.

2. Should we be using “force” or “interaction”, the former being a special case of the latter?

3. Should we really identify the fermion/boson distinction with the stuff/force distinction? Bosons can interact with one another and fermions can agglomerate to act as bosons.

4. Leaving the domain of fundamental physics, the concept of force is more protean and frame-dependent. For example, the force of gravity that we commonly measure isn’t just gravitation. Indeed, “force” is a term of convenience. A phenomenologist might indeed attribute cosmic acceleration to an antigravity force.

George

Sean and others have some good points, but I don’t like the idea in glib GR talk of dismissing “forces” caused by gravitation in lieu of natural motion in space-time. That is true in some context, but really: if I have two masses separated by a rod, the rod certainly acts compressed by real forces. You can talk about what the masses would have done had the rod not been there, but since it is, the sophistic discussion of geodesics etc. doesn’t negate the fact that effectively the masses exert forces on each other for all practical purposes. Presumably the same point applies to dark energy, which is like a density with negative value for producing a field where g = (4/3)pi*rho*r. Since dark energy intrinsically and uniformly fills space (?), it could be distinguished from effects due to matter density: if I can evacuate a large region of space, I should measure a little bit of tension if masses have a string between them and the DE effect outweighs their mutual attraction. If the magnitude of DE keeps getting bigger, then indeed electrons would eventually be stripped from nuclei, and presumably even nuclei would be shredded some day in the very far future. Someone suggested that could generate energy, and brought up conservation issues (but it’s so hard to define total energy when gravity is present, true?). What about the claim that space-time curvature doesn’t really have the same machinery as E&M for defining “energy” in flight, as it were, for gravitational radiation? Doesn’t that cause conservation problems when we want to say, when rotating neutron stars etc. lose energy of motion by radiation, it is then “present” in the expanding gravitational waves?

BTW, those interested in such issues could consider or reconsider the thought experiment I posed in the Thanksgiving thread about hard containers impeding space contraction, such as #3, 5, 6 and a final thought in #35: there are contradiction problems if you try to imagine what happens to all the bodies in expanding/contracting universes if some things are impeded by material barriers/obstructions and other things just move like dust in free fall.

BTW, as I understand it, the net gravitational effect is all that matters (heh) for making space curvature, not whether it’s from matter or DE etc. Note that if space is really open and infinite, people *just* like us are having this same discussion e.g. about 10^800 light-years away, and so on ad infinitum…

George:

1) I don’t mind saying that astronomers discovered dark energy, or even that they’ve discovered the cosmological constant, depending on the level of precision for which one is aiming. The truth is that they’ve discovered the acceleration (and spatial flatness) of the universe. The best (but not only) explanation for that is dark energy, and the leading (but not only) candidate for that is the cosmological constant. But I don’t especially object to speaking as if the most likely scenario is the one you’ve found, depending on how much detail you are trying to go into. Being rigorously correct in every single statement you make is a recipe for never being understood. What I object to is being wrong for no good reason.

2) That just depends on the circumstances. Both “force” and “interaction” are perfectly good words in context.

3) The fermion/boson distinction is not *precisely* the stuff/force distinction, as I did try to make clear. Arguably I shouldn’t even have brought it up at all, but it’s an example of a rigorous modern concept that matches pretty well with a venerable, more primitive concept.

4) This one I don’t understand. What do you mean by a phenomenologist? If you mean someone who sticks just to what we observe, they should be talking in terms of luminosity distances and redshifts, not in terms of forces. Anyone who speaks of forces does so in the context of a theory, and in all the theories I know of, the force responsible for the acceleration of the universe is gravity.

Sean, regarding #4: All I’m saying is that there are different levels of explanation and it’s not always convenient to talk strictly in terms of fundamental interactions. When I let go of my pencil and it hits the desk, I suppose I should be saying that the desk hit it, but FAPP I talk about a force of gravity pulling things downward. Similarly, why not talk about a force of antigravity — as astronomers did before “dark energy” entered circulation — as long as we’re clear that we don’t mean a new fundamental interaction?

George

I have no objection to attributing the fall of your pencil to the force of gravity. It’s useful, if imprecise, language. (I’ve never been a stickler about “centrifugal force,” either.)

But I would object if you dropped your pencil on the Moon, and on Mars, and into the black hole at the center of the galaxy, and then claimed that you had found a handful of new forces of nature. It’s the same force in every case — gravity. Dark energy is exactly the same. It’s a new source for gravity, just one that happens to make test bodies move apart rather than together. Calling it a new force isn’t just sloppy, it’s actively incorrect and misleading.

True, and highly nontrivial. Finding unity in variety is the heart of physics.

George

Sean — if you are considering the Higgs to be a fifth force as per the above, then I would think you would consider quintessence, if quintessence were the cause of dark energy, to also be a fifth (or rather sixth) force, no…?

Ellipsis,

Dark energy might indeed mediate an as yet unknown force. Its effect on the acceleration, however, would have nothing to do with this. So it would still be misleading to call it a force.


Jason — I only brought it up because Sean mentioned that he sometimes liked to consider the Higgs a fifth force. I think if one were to take that view (which I wouldn’t myself, but agree that it’s not off the wall), then in that case quintessence (if it exists) would also fall into the additional force category, being that it’s another scalar field, in a fairly similar manner to the Higgs. Would you disagree?

If space is what is actually expanding, wouldn’t the speed of light be increasing as well? Say two galaxies are 100 lightyears apart, if the universe expanded to twice its size, it would still take light only 100 years to cross that space, because if it now takes 200 years to cross that space, that isn’t expanding space, that’s increasing distance in stable space. This raises two issues; If the speed of light did increase, how would we even know it expanded, as the increased speed of the light would require increased energy and wouldn’t this speeding up blueshift the spectrum and eliminate any redshift?

Otherwise, if it is increasing distance in stable space and all galaxies appear redshifted directly away from us, it would mean we are at the center of the universe.

Gravity curves space because light traveling across it is bent inward from what would otherwise be a straight line. As gravity collapses mass, radiation expands the constituent energy. Are we really sure the effect of all the radiation crossing space doesn’t have the opposite effect on a particular beam of light? Since it isn’t being pulled toward a gravitational vortex, it isn’t curved, but would it be redshifted, since the cause is evenly distributed? A prism bends light, but we don’t say space is curved. An optical effect would explain redshift without the Milky Way being at the center of the universe. It would also eliminate the need for dark energy. The further light traveled, the more the effect would be compounded and eventually the source would appear to be traveling away at the speed of light, which would put anything beyond that over the horizon line.

Ellipsis– Quintessence could certainly mediate another force, as I mentioned in the post. But (as Jason says) that’s not what anyone has claimed to have discovered, and it may not even exist. And in particular, the thing that is making the universe accelerate is not the 5th force mediated by the hypothetical quintessence field — it’s the energy density of the dark energy.

To call the Higgs field a fifth force might have to be qualified. Gauge fields have vector or chiral connections A on a principal bundle that under the action of the differential operator give F = dA + A∧A, where F is a two-form whose components are the fields, such as the electric and magnetic fields in the case of EM. The Higgs field is an n-tuplet of scalar fields which have a quartic potential. A component of the Higgs field is absorbed (Goldstone boson), which restricts the local gauge freedom of the field. That breaks the symmetry of the gauge field, as with the standard electroweak model.

What follows is a bit long and somewhat technical, but it indicates my thinking about some of the physics lore common today, which frankly I think is beginning to sound like the luminiferous aether of the 19th century. The term negative pressure in a sense is a matter of “culture.” It stems from the definition of the momentum-energy tensor

T^{ab} = (e + p)U^aU^b + pg^{ab},

for e the energy density and p the pressure. The energy density is identified by some as the zero point energy of quantum mechanics, which I will get to below, and p is a pressure associated with this ZPE. If we identify the trace of this as the cosmological constant we get Lambda = k(e + 3p). The gradient of the energy leads to a term that vanishes for p = -e, or where we get the infamous w = -1; the value computed from WMAP data is w = -1.02 (+0.12/-0.19). Not bad, and it appears as if we are on to something.
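As a quick numerical check of the tensor above (my sketch, choosing Minkowski signature (-,+,+,+) and a comoving U^a; with that convention the trace comes out as 3p - e, and overall signs vary between conventions):

```python
import numpy as np

# Perfect-fluid stress-energy T^{ab} = (e + p) U^a U^b + p g^{ab},
# evaluated in flat spacetime, signature (-,+,+,+), with U = (1,0,0,0).
# The signature and normalization here are my illustrative choices.

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric (equal to its inverse)
U = np.array([1.0, 0.0, 0.0, 0.0])   # comoving four-velocity

def stress_energy(e, p):
    # g plays the role of g^{ab} here, since the Minkowski metric is its own inverse
    return (e + p) * np.outer(U, U) + p * g

def trace(T):
    return np.einsum('ab,ab->', g, T)  # g_ab T^ab = 3p - e in this convention

assert abs(trace(stress_energy(1.0, 1.0/3.0))) < 1e-12     # radiation: traceless
assert abs(trace(stress_energy(1.0, -1.0)) + 4.0) < 1e-12  # w = -1 fluid
```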

But not so fast!!! The condition that w = -1 is a local condition; whether this pertains globally is unclear. There is a little business of Killing vectors, which project onto momentum vectors p^a = mU^a, where U^a is a vector along the geodesic flow lines, so that K_a p^a = constant. A Killing vector exists along a direction if there is a constancy defined along that variable. In a stationary metric we do have a timelike Killing vector K_t, which when projected on a momentum four-vector gives K_t E = constant. This pertains for a black hole, but not a cosmology. The metrics for cosmologies are time dependent, and so there is no defined timelike Killing vector. So what does p = -e or w = -1 buy you? It is a thermodynamic-like equation of state which constrains energy to momentum locally. Cosmologies will permit spacelike or momentum Killing vectors, and this equation applies locally. But we have no theorem which tells us whether this applies globally, or throughout the entire cosmology.

There is also the issue of the ZPE. It would appear that in our modern world we have something analogous to the aether, and it is a basic problem in physics today. This is the quantum vacuum, or zero point energy. This emerges as a consequence of quantization. The Hamiltonian for the classical harmonic oscillator is

H = 1/(2m)(p^2 + m^2 omega^2 q^2)

for omega the frequency of motion for the harmonic oscillator of mass m. The standard quantization procedure is to construct the operators

a = sqrt{1/(2 hbar m omega)}(m omega q + ip),

a^* = sqrt{1/(2 hbar m omega)}(m omega q - ip),

where * = dagger, which gives the Hamiltonian for the quantum system

H = hbar omega/2(a^*a + aa^*).

By the addition of hbar omega/2(a^*a - aa^* + 1) = 0, which vanishes since [a, a^*] = 1, this may be converted to the form

H = hbar omega(a^*a + 1/2).

The energy eigenstates of this operator form a ladder |n>, n = 0, 1, 2, … infinity, where for the zero state the expectation of the Hamiltonian in this vacuum is the energy hbar omega/2. The harmonic oscillator is used as the solution to many quantum wave problems, where a field in space or spacetime is modelled as a harmonic oscillator at every point. The index n corresponds to the number of particles in the system. For n = 0 the quantum system is a vacuum. Since relativistic fields occur for a range of momenta p, where each has its operators a(p), a^*(p), the vacuum state then has a zero point energy (ZPE) given by

E = (hbar/2) int_0^infinity omega(p) dp.

The vacuum then in principle has an infinite amount of energy.
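The ladder construction above is easy to verify numerically. Here is a small sketch in a truncated number basis, with hbar = omega = 1 (my toy check, not part of the comment):

```python
import numpy as np

# Truncated harmonic-oscillator ladder operators in the number basis,
# units hbar = omega = 1. The 1/2 in the spectrum is the zero-point energy.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.T                                    # creation operator (real matrices)

H = adag @ a + 0.5 * np.eye(N)                # H = hbar omega (a^*a + 1/2)
energies = np.linalg.eigvalsh(H)              # eigenvalues, ascending
print(energies[:3])                           # [0.5, 1.5, 2.5]

# [a, a^*] = 1, up to the truncation artifact in the last basis state:
comm = a @ adag - adag @ a
print(np.diag(comm)[:-1])                     # all ones
```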

Often this ZPE is dropped since it contributes nothing to most physics. One way this is done is to say that we can commute the operator a past a^* to drop the term. This sounds a bit like cheating, but since the ZPE contributes nothing to physics this move causes no “harm.” We might think back to the classical harmonic oscillator and write it as

H = 1/(2m)(p^2 + m^2 omega^2 q^2) + (i omega/2)(qp - pq).

The imaginary term might be objected to, since classical mechanics involves real variables, but this term is classically zero. So this Hamiltonian is identical to the standard Hamiltonian, and we may then quantize this Hamiltonian to obtain H = hbar omega a^*a, and the hbar omega/2 term is removed. The ZPE has been eliminated by treating the classical Hamiltonian in a particular way. This tends to show that absolute energy values are not important; relative energy differences are. Each E_n = n hbar omega energy eigenvalue is relative to the vacuum energy, which we may set to zero. By the same token, a voltage is measured across a component, never at a point on a circuit. So the ZPE can be seen as mostly an artifact of quantization, and its physical contribution negligible or nonexistent.

One might then ponder what happens with gravity. After all, if this ZPE turns out to exist then it should have a gravity component. We might think of there being gravitons which couple to the vacuum, and at least in a linearized sense there are these quantized spin-2 pp-waves which couple to loops and other stuff common to the perturbative theory in QED. Maybe, but the problem is that if one naively takes the ZPE and computes its contribution to the cosmological constant, it is huge. In particular, if we assume that quantum gravity fluctuations near the Planck scale are delta-R ~ 1/L_p^2, this curvature is ~ 10^{68} cm^{-2}, and the expected cosmological constant is 123 orders of magnitude from what it is currently estimated to be. Getting rid of the ZPE as above might be convenient if the cosmological constant were indeed zero, but we are finding that it is not zero. It is small, but not zero. This muddies up the story considerably. I believe it was Lord Kelvin who commented that everything was figured out, except that annoying problem of blackbody radiation and the vanishing of the aether by Michelson & Morley. Might it be that we are bumping into certain limits of what current physics can deliver?
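The "orders of magnitude" figure quoted above is easy to reproduce at back-of-the-envelope level (my rough numbers, in natural units: a naive Planck-scale cutoff versus an assumed observed dark-energy scale of about 2 meV):

```python
import math

# Naive vacuum-energy density with a Planck-scale cutoff, rho ~ M_Planck^4,
# versus the observed dark-energy density, rho ~ (2 meV)^4. Both in GeV^4,
# natural units; the 2 meV scale is a rough, assumed figure.
M_planck = 1.22e19             # Planck mass in GeV
rho_theory = M_planck ** 4
rho_observed = (2.0e-12) ** 4  # 2 meV = 2e-12 GeV

discrepancy = math.log10(rho_theory / rho_observed)
print(round(discrepancy))      # roughly 123 orders of magnitude
```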

I will make this as brief as possible. The Hamiltonian constraint in ADM relativity H = 0 in the canonical quantization leads to Hpsi[g] = 0, which is similar to a Schrodinger equation. If you add a harmonic oscillator field to this system it has eigenvalues which give a stationary phase condition. Then we have that there is a Schrodinger equation

iK_tPsi[g] = HPsi[g], i = sqrt{-1}

for K_t = ∂/∂t. So we have “manufactured” a time here by placing a scalar field on the spacetime manifolds. If the classical spacetimes (M, g) that are the configuration variables for the wave function(al) are stationary, then K_t is a Killing vector. It turns out that not all configuration variables are classical spacetimes, but that I will forego for now. Now if you have an entanglement of wave functionals over different metrics, then do we have a global K_t? No! And if we did, it would amount to a violation of the general covariance of general relativity, for it is a coordinate dependency on spaces. So we have a bit of a conundrum here in defining time and energy in quantum gravity, as well as a problem of defining a global notion of energy in cosmology. These “breakdowns” are linked by a general principle.

Lawrence B. Crowell

The speed of light is a local thing. In your lab, which is in a flat spacetime, you will always measure the same speed of light. If you watched a light beam pass through a cloud of gas near a black hole, you would see its course through the cloud just as laser beams or headlamps illuminate a fog. You would find the light speed apparently slowed down. There are of course a number of ways of looking at this; in particular, since time on clocks near the black hole is slowed down, you are witnessing an apparent slowing due to the gravitational time dilation. A similar thing happens with inflationary cosmology. And indeed many distant objects in the universe we observe we could never send a signal to. They are beyond the cosmological horizon distance r = sqrt{3/Lambda} in de Sitter cosmology, but their photons can reach us. It is somewhat similar to an observer inside a black hole, beyond the Schwarzschild radius, who can see out but can’t communicate out. For this reason the universe out to the deionization epoch we detect in the CMB is 70 billion light years away, while the timeline for the universe is 13.7 billion years. Mass-energy has been comoved away on the expanding space of the universe more than it has been moving away in the ordinary sense.

Now here is a bit of a puzzle. If the universe is 150 billion light years across out to the CMB last-scattering limit, but only 13.7 billion years in age, then we are causally disconnected from anything out there: we can’t induce an effect out there, and the light our galaxy emits now will never reach those regions. Now focus in on the early rapid inflationary epoch, when there was a rapid accelerated expansion of the universe. In the same way, most of the regions of the universe that tunnelled out of the vacuum or vacua were quickly disconnected causally, and yet the Higgsian inflaton froze out gauge restrictions, or broken symmetry, “everywhere.” How could that have happened?

Lawrence B. Crowell

Lawrence,

During inflation, regions which were once causally connected rapidly grow to sizes much larger than the horizon size. This explains why things today which are causally disconnected (most of the observable universe) are still correlated: once they were causally connected, during the epoch of inflation.

The picture of why this happens is pretty simple: during inflation, when the universe was dominated by a nearly constant energy density, the horizon size was nearly constant. But if the universe is expanding with a nearly constant horizon size, then any perturbation, whatever its linear size, quickly grows to be larger than the horizon, and the perturbation becomes causally disconnected from itself: it becomes “frozen.”

Then, as the universe’s expansion slows down during the radiation and matter-dominated regimes, the horizon scale rapidly increases. So scales that were first generated during inflation, then expanded beyond the horizon size, re-enter the horizon.
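The exit-and-re-entry picture above can be sketched with a toy expansion history: a comoving scale is “inside the horizon” when it is smaller than the comoving Hubble radius 1/(aH), which shrinks during inflation (H nearly constant) and grows again during radiation domination (H falling as a^-2). The phases and numbers below are illustrative, not a realistic cosmology.

```python
# Toy illustration of horizon exit and re-entry. During inflation
# H is ~constant, so the comoving Hubble radius 1/(aH) shrinks as
# a grows; during radiation domination H ~ a^-2, so 1/(aH) ~ a
# grows again. A fixed comoving scale therefore exits the horizon
# during inflation and re-enters afterwards. Arbitrary units.

def comoving_hubble_radius(a, H_inf=1.0, a_end=1.0):
    """Comoving Hubble radius 1/(aH) in a two-phase toy history."""
    if a < a_end:               # inflationary phase: H constant
        H = H_inf
    else:                       # radiation era: H falls as a^-2
        H = H_inf * (a_end / a) ** 2
    return 1.0 / (a * H)

scale = 5.0  # a fixed comoving length, initially inside the horizon

inside_early = scale < comoving_hubble_radius(0.1)   # 1/(aH) = 10
inside_exit  = scale < comoving_hubble_radius(0.5)   # 1/(aH) = 2
inside_late  = scale < comoving_hubble_radius(10.0)  # 1/(aH) = 10
print(inside_early, inside_exit, inside_late)  # True False True
```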

Agreed that during inflation the horizon, if we use a de Sitter radius r = sqrt{3/Lambda}, is smaller since Lambda is much larger. There are plenty of models with Lambda = Lambda(phi, phi-dot) for phi a Higgsian inflaton field. During this period the Higgsian field is settling into its degenerate vacua, or some minima on the “landscape.” (The so-called phantom-energy situation of the big rip forces r to decrease until it collapses in around everything, ripping even nucleons apart.) The problem still remains, though: even if some measure of causal connection is restored after inflation, it is during the period when r is much smaller that the Higgs field sets the vacuum expectation values for the fields of broken symmetry. There is also a problem with any idea that the universe had a vacuum of constant energy.

From the energy-momentum tensor

T^{ab} = (e + p)U^aU^b + pg^{ab} (e = energy density)

the covariant derivative of the energy density is simply the ordinary “gradient” or directional derivative

nabla e = e_{,a}U^a = -(e + p)U^a_{;a},

which has a curious dependency on the chart or coordinate system. So globally the covariance of this equation is either local or holds only in special circumstances. The dark-energy condition p = -e (i.e. w = -1) is apparent here, and it gives nabla e = 0. Now we can set this in a first-law-of-thermodynamics setting with
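In the FRW case the equation for nabla e reduces to de/da = -3(1 + w) e/a when p = w e, with solution e proportional to a^{-3(1+w)}; for w = -1 the density stays constant as space expands. The sketch below (illustrative units, crude Euler integration) just verifies this.

```python
# FRW continuity equation with p = w * e:
#   de/da = -3 (1 + w) * e / a,  solved by  e(a) = e0 * a**(-3*(1+w)).
# For w = -1 (dark energy) the density does not dilute at all.

def density(a, w, e0=1.0):
    """Analytic solution e(a) of the FRW continuity equation."""
    return e0 * a ** (-3.0 * (1.0 + w))

def density_numeric(a, w, e0=1.0, steps=100000):
    """Euler integration of de/da = -3(1+w)e/a from a = 1 to a."""
    e, x = e0, 1.0
    da = (a - 1.0) / steps
    for _ in range(steps):
        e += -3.0 * (1.0 + w) * e / x * da
        x += da
    return e

# Matter (w = 0) dilutes as a^-3; dark energy (w = -1) does not.
print(density(2.0, 0.0))    # -> 0.125
print(density(2.0, -1.0))   # -> 1.0
print(abs(density_numeric(2.0, 0.0) - 0.125) < 1e-3)  # True
```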

dU = -dW + dQ

And set the “work” as p dV and the energy at equilibrium (the ZPE etc.) as dQ = e dV to get

dU = -pdV + edV

and the volume may be thought of as evolving along a set of flow lines, so it evolves by the geodesic equation. This is essentially another way of describing our equation for nabla e, which is not covariant, or is chart dependent. From a thermodynamic perspective the differentials are exact differentials that apply to closed systems. A cosmology in eternal inflation is not a closed system in the strict thermodynamic sense. So these considerations are useful in a local setting, but we really have no theory which can tell us whether they apply globally. Thus energy is really an ineffective concept in general relativity.
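For completeness, here is one standard bookkeeping (adiabatic expansion, dQ = 0, comoving volume V proportional to a^3, U = eV), which differs slightly from the dQ = e dV choice above but recovers the flow-line equation for nabla e:

```latex
% Adiabatic first law for a comoving volume V \propto a^3, U = eV:
\begin{align}
  d(eV) &= -p\,dV \\
  V\,de + e\,dV &= -p\,dV \\
  V\,de &= -(e+p)\,dV .
\end{align}
% With dV/V = 3\,da/a this is \dot{e} = -3\frac{\dot{a}}{a}(e+p),
% i.e. e_{,a}U^a = -(e+p)U^a{}_{;a} for the FRW congruence,
% matching the equation for nabla e quoted earlier.
```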

The issue is similar to the difference between the standard and Killing times for an accelerated observer. This is the situation in which an accelerated observer detects a thermal bath of radiation. The spacetime of the accelerated frame is called Rindler space, as the causal region the observer can interact with is partitioned by horizons, a split horizon along two different null directions. The Minkowski metric in Rindler coordinates is

ds^2 = -x^2 dt^2 + g_{ij}dx^idx^j

so that on ds = 0 we can find the spatial metric

dc^2 = (g_{ij}/g_{tt})dx^i dx^j, with g_{ij}/g_{tt} = delta_{ij}/x^2

or Fermat metric, which is a Poincaré half-plane when restricted to two dimensions. Take this space and conformally fold it into a Poincaré disk (again restricted to two dimensions) and you have the spatial metric for a de Sitter cosmology for r = 0 to sqrt{3/Lambda}. The two-dimensional representation of this is seen in the Escher disk of tessellated figures that pile up near the edge of the disk. In Robert Wald’s book “Black hole thermodynamics … ” he discusses the Unruh radiation in terms of the deviation between Killing and standard times. Under this “map” that deviation manifests itself between timelike directions for different observers in the de Sitter cosmology. This then means that the vacuum state for the universe is not unitarily equivalent everywhere, and this is tied to coordinate-dependent time directions on different charts of the spacetime. This is also one reason why there is Gibbons-Hawking radiation from the event horizon at r = sqrt{3/Lambda}.
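The thermal bath seen by the Rindler observer has the Unruh temperature T = hbar a / (2 pi c k_B), and the de Sitter horizon carries the analogous Gibbons-Hawking temperature T = hbar H / (2 pi k_B). A small sketch with SI constants; the sample acceleration and Hubble rate are illustrative.

```python
# Unruh temperature for proper acceleration a:
#   T = hbar * a / (2 * pi * c * k_B)
# Gibbons-Hawking temperature for Hubble rate H:
#   T = hbar * H / (2 * pi * k_B)
import math

hbar = 1.0546e-34   # J s
c    = 2.998e8      # m/s
k_B  = 1.3807e-23   # J/K

def unruh_temperature(accel):
    """Unruh temperature (K) for proper acceleration accel (m/s^2)."""
    return hbar * accel / (2.0 * math.pi * c * k_B)

def gibbons_hawking_temperature(H):
    """Horizon temperature (K) for Hubble rate H (1/s)."""
    return hbar * H / (2.0 * math.pi * k_B)

# Even at an enormous 10^20 m/s^2 the Unruh bath is only ~0.4 K;
# for today's H_0 ~ 2.2e-18 s^-1 the horizon temperature is tiny.
print(unruh_temperature(1e20))
print(gibbons_hawking_temperature(2.2e-18))
```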

So we have a problem here not only with defining energy; it appears connected with a unitary inequivalence of the vacuum in different regions of the universe, both now and during the early moments of inflation.

Nope. You only need a small region of the universe to have nearly constant energy density in the right sort of field to drive inflation (about one horizon scale at that time, which would have been ~10^-30 m or so). Any inhomogeneities that exist are blown so far apart so rapidly that they might as well not have existed at all: all that remains are the zero-point fluctuations in the inflaton field itself. So, in the context of inflation, you don’t need to worry about widely separated points not being causally connected: all the scales we observe, even scales much larger than we observe, were within the horizon during the epoch of inflation.

Naturally this only works if inflation lasts long enough.
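“Long enough” is usually quantified in e-folds of expansion, N = ln(a_end / a_start), with the commonly quoted requirement for solving the horizon problem being roughly N of order 60. A trivial sketch; the sizes are illustrative.

```python
import math

# Number of e-folds needed to stretch an initial length to a final
# length: N = ln(final / initial). Roughly 60 e-folds multiply
# lengths by e^60 ~ 1e26, so a ~1e-30 m patch grows to ~1e-4 m.
def efolds(initial_size, final_size):
    """e-folds of expansion taking initial_size to final_size."""
    return math.log(final_size / initial_size)

stretch = math.e ** 60
print(round(efolds(1e-30, 1e-30 * stretch)))  # -> 60
```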