Time

Everyone’s a Critic

I got this letter in the mail the other day:

I Don’t know if you Exist But I Do! I bo not Agree with your Articl and I Do not Beleave that “MOMBO-JOMBO” if you do … Well! it’s Disturbing thought But I know How to Deal with it! I will Not let the Wolb Disiper under My Nose But if you Do I cant say I’m sorry!

Sincerely

a ten year old who knows a little more than some Pepeol!

George Wing

ps. some peopl Have a little to Much time.

In response, of course, to the NYT story about Boltzmann’s Brain. George’s father Michael, a high-school science teacher, was moved to send it along (and gave me permission to post it), suggesting that “maybe it is really a Boltzmann brain speaking.”

To which I can only respond: awesome. A fourth-grader reads an article in the Science Times, and is so moved by outrage that he pens a stern missive to the scientists quoted? It’s not very often that you have a chance to inspire a young mind like that, even if you do inspire him to berate you.

Of course, George did fall into a slight trap with respect to the logic underlying the article. But that’s okay — he’s only ten years old, and there are plenty of grownups with Ph.D.’s in physics who fell into the same trap! The trap is to imagine, despite explicit disclaimers to the contrary, that the Boltzmann’s Brain argument goes something like this:

Certain cosmological scenarios predict that it’s more likely for a brain like yours or mine to arise as a random fluctuation, rather than through orderly evolution.

Isn’t that cool????

That’s really not the argument that anyone is trying to make. Rather, it goes like this:

Certain cosmological scenarios predict that it’s more likely for a brain like yours or mine to arise as a random fluctuation, rather than through orderly evolution.

Our brains aren’t like that.

Therefore, those scenarios are not correct.

It’s kind of an old-fashioned argument. Take a theory, use it to make a prediction, the prediction isn’t correct, and therefore the theory has been falsified! Rubs a lot of people the wrong way, but it works for me.

Other critics are uncharitable for different reasons. For example, Don Walton, founder and president of Time For Truth Ministries:

I believe the accusation leveled against the Apostle Paul by Festus in Acts 26:24 — “much learning is making you mad” — is most apropos for today’s cosmologists.

Hey, question my existence and suggest that I have too much time on my hands, fine — I can deal with that. But comparing me to Saint Paul? That is a low blow, sir. And somewhat unprecedented.

When you’re ten years old, you don’t have to be right — you should be curious and passionate, and George definitely is on the right track. I look forward to recruiting him to grad school some day. For the grownups I have less hope.


Boltzmann’s Universe

CV readers, ahead of the curve as usual, are well aware of the notion of Boltzmann’s Brains — see e.g. here, here, and even the original paper here. Now Dennis Overbye has brought the idea to the hoi polloi by way of the New York Times. It’s a good article, but I wanted to emphasize something Dennis says quite explicitly, which (from experience) I know people tend to jump right past in their enthusiasm:

Nobody in the field believes that this is the way things really work, however.

The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy. According to this argument, in such a universe you would see every kind of statistical fluctuation, and small fluctuations in entropy would be enormously more frequent than large fluctuations. Our universe is a very large fluctuation (see previous post!) but a single brain would only require a relatively small fluctuation. In the set of all such fluctuations, some brains would be embedded in universes like ours, but an enormously larger number would be all by themselves. This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true. (For arguments along these lines, see papers by Dyson, Kleban, and Susskind, or Albrecht and Sorbo.)
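Just to put toy numbers on "enormously more frequent": by Boltzmann's formula, the probability of a downward fluctuation scales like exp(-ΔS/k). The entropy dips below are made-up round numbers purely for illustration (the real values are far larger still), but the punchline is robust, since the ratio of probabilities is itself exponential in the difference of the dips:

```python
import math

# Illustrative (made-up) entropy dips, in units of Boltzmann's constant k:
# a single brain-sized fluctuation vs. a fluctuation producing a whole
# low-entropy universe like ours. P ~ exp(-dS/k), so work in logs.
dS_brain = 1e50      # hypothetical dip to fluctuate a lone brain
dS_universe = 1e103  # hypothetical dip to fluctuate an entire universe

# log10 of the probability ratio P(brain) / P(universe)
log10_ratio = (dS_universe - dS_brain) / math.log(10)
print(f"P(brain)/P(universe) ~ 10^{log10_ratio:.3g}")
```

With these stand-in numbers, lone brains outnumber embedded ones by a factor of ten to the ~10^102 — which is why "we are not Boltzmann brains" counts as a falsification.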

I tend to find this kind of argument fairly persuasive. But the bit about “a typical observer” does raise red flags. In fact, folks like Hartle and Srednicki have explicitly argued that the assumption of our own “typicality” is completely unwarranted. Imagine, they say, two theories of life in the universe, which are basically indistinguishable, except that in one theory there is no life on Jupiter and in the other theory the Jovian atmosphere is inhabited by six trillion intelligent floating Saganite organisms.


Arrow of Time FAQ

The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.

So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)

Without further ado:

What is the arrow of time?

The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.

But entropy decreases all the time; we can freeze water to make ice cubes, after all.

Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.

So what’s the big deal?

In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.

And how do we reconcile them?

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.

Wasn’t this all figured out over a century ago?

Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
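To make Boltzmann's counting concrete, here is a minimal sketch (the choice of N = 100 molecules is purely illustrative). A macrostate records only how many molecules sit in the left half of a box; its multiplicity is a binomial coefficient, and the entropy is S = k log W, here quoted in units of k:

```python
from math import comb, log

# Count microstates for N gas molecules in a box split into two halves.
# The macrostate records only n_left; its multiplicity is C(N, n_left),
# and Boltzmann's entropy is S = k log W (computed here in units of k).
N = 100

S_one_side = log(comb(N, N))      # all molecules on the left: W = 1
S_uniform = log(comb(N, N // 2))  # 50-50 split: the most numerous macrostate

print(f"S(all on one side) = {S_one_side:.2f} k")
print(f"S(evenly spread)   = {S_uniform:.2f} k")
```

Even at N = 100 the even split has about 10^29 times more microstates than the all-on-one-side state; at Avogadro-scale N the disproportion is beyond astronomical, which is the whole statistical content of the Second Law.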

Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the “reversibility objections” of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the Second Law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.

Does inflation explain the low entropy of the early universe?

Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.

Does that mean that inflation is wrong?

Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.


Against Bounces

Against the languor of the Independence Day weekend, a tiny bit of media attention has managed to focus itself on a new paper by Martin Bojowald. (The paper doesn’t seem to be on the arXiv yet, but is apparently closely related to this one.) It’s about the sexy topic of “What happened before the Big Bang?” Bojowald uses some ideas from loop quantum gravity to try to resolve the initial singularity and follow the quantum state of the universe past the Bang back into a pre-existing universe.

You already know what I think about such ideas, but let me just focus in on one big problem with all such approaches (which I’ve already alluded to in a comment at Bad Astronomy, although I kind of garbled it). If you try to invent a cosmology in which you straightforwardly replace the singular Big Bang by a smooth Big Bounce continuation into a previous spacetime, you have one of two choices: either the entropy continues to decrease as we travel backwards in time through the Bang, or it changes direction and begins to increase. Sadly, neither makes any sense.

If you are imagining that the arrow of time is continuous as you travel back through the Bounce, then you are positing a very strange universe indeed on the other side. It’s one in which the infinite past has an extremely tiny entropy, which increases only very slightly as the universe collapses, so that it can come out the other side in our observed low-entropy state. That requires the state of the universe at t = -infinity to be infinitely finely tuned, for no apparent reason. (The same holds true for the Steinhardt-Turok cyclic universe.)

On the other hand, if you imagine that the arrow of time reverses direction at the Bounce, you’ve moved your extremely-finely-tuned-for-no-good-reason condition to the Bounce itself. In models where the Big Bang is really the beginning of the universe, one could in principle imagine that some unknown law of physics makes the boundary conditions there very special, and explains the low entropy (a possibility that Roger Penrose, for example, has taken seriously). But if it’s not a boundary, why are the conditions there so special?

Someday we’ll understand how the Big Bang singularity is resolved in quantum gravity. But the real world is going to be more complicated (and more interesting) than these simple models.


Latest Declamations about the Arrow of Time

Here are the slides from the physics colloquium I gave at UC Santa Cruz last week, entitled “Why is the Past Different from the Future? The Origin of the Universe and the Arrow of Time.” (Also in pdf.)

Time Colloquium

The real reason I’m sharing this with you is because this talk provoked one of the best responses I’ve ever received, which the provokee felt moved to share with me:

Finally, the magnitude of the entropy of the universe as a function of time is a very interesting problem for cosmology, but to suggest that a law of physics depends on it is sheer nonsense. Carroll’s statement that the second law owes its existence to cosmology is one of the dummest [sic] remarks I heard in any of our physics colloquia, apart from [redacted]’s earlier remarks about consciousness in quantum mechanics. I am astounded that physicists in the audience always listen politely to such nonsense. Afterwards, I had dinner with some graduate students who readily understood my objections, but Carroll remained adamant.

My powers of persuasion are apparently not always fully efficacious.

Also, that marvelous illustration of entropy in the bottom right of the above slide? Alan Guth’s office.

Update: Originally added as a comment, but I’m moving it up here–

The point of the “objection” is extremely simple, as is the reason why it is irrelevant. Suppose we had a thermodynamic system, described by certain macroscopic variables, not quite in equilibrium. Suppose further that we chose a random microstate compatible with the macroscopic variables (as you do, for example, in a numerical simulation). Then, following the evolution of that microstate into the future, it is overwhelmingly likely that the entropy will increase. Voila, we have “derived” the Second Law.

However, it is also overwhelmingly likely that evolving that microstate into the past will lead to an increase in entropy. Which is not true of the universe in which we live. So the above exercise, while it gets the right answer for the future, is not actually “right,” if what we care about is describing the real world. Which I do. If we want to understand the distribution function on microstates that is actually true, we need to impose a low-entropy condition in the past; there is no way to get it from purely time-symmetric assumptions.
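The exercise above is easy to act out numerically. Here is a toy reversible system (all parameters illustrative): free particles on a ring, started in a clustered, low-entropy configuration. The dynamics x → x + vt is exactly time-symmetric, and the coarse-grained entropy increases whether you run it toward the future or toward the past:

```python
import math
import random

random.seed(0)

# Toy reversible system: N free particles on a unit ring, positions
# clustered at t=0 (low entropy), velocities random. Coarse-grain into
# bins and compute the Shannon entropy of the occupation histogram.
N, BINS = 10_000, 20

x0 = [random.uniform(0.0, 0.05) for _ in range(N)]  # clustered start
v = [random.gauss(0.0, 1.0) for _ in range(N)]

def coarse_entropy(t):
    counts = [0] * BINS
    for x, u in zip(x0, v):
        counts[int(((x + u * t) % 1.0) * BINS)] += 1
    return -sum(c / N * math.log(c / N) for c in counts if c)

# Entropy rises in BOTH time directions away from the special moment t=0.
S_past, S_now, S_future = coarse_entropy(-5.0), coarse_entropy(0.0), coarse_entropy(5.0)
print(S_past, S_now, S_future)
```

The only reason the simulation shows entropy rising "to the past" is that the low-entropy condition was imposed at t = 0 by hand; in the real universe, that role is played by the Big Bang, which is the point of the post.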

Boltzmann’s H-theorem, while interesting and important, is even worse. It makes an assumption that is not true (molecular chaos) to reach a conclusion that is not true (the entropy is certain, not just likely, to increase toward the future — and also to the past).

The nice thing about stat mech is that almost any distribution function will work to derive the Second Law, as long as you don’t put constraints on the future state. That’s why textbook stat mech does a perfectly good job without talking about the Big Bang. But if you want to describe why the Second Law actually works in the real world in which we actually live, cosmology inevitably comes into play.


A Glimpse Into Boltzmann’s Actual Brain

You’ve heard the “Boltzmann’s Brain” argument (here and here, for example). It’s a simple idea, which is put forward as an argument against the notion that our universe is just a thermal fluctuation. If the universe is an ordinary thermodynamic system in equilibrium, there will be occasional fluctuations into low-entropy states. One of these might look like the Big Bang, and you might be tempted to conclude that such a process explains the arrow of time in our universe. But it doesn’t work, because you don’t need anything like such a huge fluctuation. There will be many smaller fluctuations that do just as well; the minimal one you might imagine would be a single brain-sized collection of particles that just has time to look around and go Aaaaaagggghhhhhhh before dissolving back into equilibrium. (These days a related argument is being thrown around in the context of eternal inflation — not exactly the same, because we’re not assuming the ensemble is in equilibrium, but similar in spirit.)

Boltzmann wasn’t the one to come up with the “brain” argument; I’m not sure who did, but I first heard it articulated clearly in a paper by Albrecht and Sorbo. It’s the maybe-our-universe-is-a-fluctuation idea that goes back to Boltzmann. Except it’s not actually his, as we can see by looking at Boltzmann’s original paper! (pdf) The reference is Nature 51, 413 (1895), as tracked down by Alex Vilenkin. Don Page copied it from a crumbling leather-bound volume in his local library, and the copy was scanned in by Andy Albrecht. The discussion is just a few paragraphs at the very end of a short paper.

I will conclude this paper with an idea of my old assistant, Dr. Schuetz.

We assume that the whole universe is, and rests for ever, in thermal equilibrium. The probability that one (only one) part of the universe is in a certain state, is the smaller the further this state is from thermal equilibrium; but this probability is greater, the greater is the universe itself. If we assume the universe great enough, we can make the probability of one relatively small part being in any given state (however far from the state of thermal equilibrium), as great as we please. We can also make the probability great that, though the whole universe is in thermal equilibrium, our world is in its present state. It may be said that the world is so far from thermal equilibrium that we cannot imagine the improbability of such a state. But can we imagine, on the other side, how small a part of the whole universe this world is? Assuming the universe great enough, the probability that such a small part of it as our world should be in its present state, is no longer small.

If this assumption were correct, our world would return more and more to thermal equilibrium; but because the whole universe is so great, it might be probable that at some future time some other world might deviate as far from thermal equilibrium as our world does at present. Then the afore-mentioned H-curve would form a representation of what takes place in the universe. The summits of the curve would represent the worlds where visible motion and life exist.

So even Boltzmann doesn’t want credit for the idea, which he attributes to his old assistant. Andy Albrecht points out that, in order to preserve the all-important alliteration, perhaps we should be calling them “Schuetz’s Schmartz.”


How Did the Universe Start?

I’m on record as predicting that we’ll understand what happened at the Big Bang within fifty years. Not just the “Big Bang model” — the paradigm of a nearly-homogeneous universe expanding from an early hot, dense, state, which has been established beyond reasonable doubt — but the Bang itself, that moment at the very beginning. So now is as good a time as any to contemplate what we already think we do and do not understand. (Also, I’ll be talking about it Saturday night on Coast to Coast AM, so it’s good practice.)

There is something of a paradox in the way that cosmologists traditionally talk about the Big Bang. They will go to great effort to explain how the Bang was the beginning of space and time, that there is no “before” or “outside,” and that the universe was (conceivably) infinitely big the very moment it came into existence, so that the pasts of distant points in our current universe are strictly non-overlapping. All of which, of course, is pure moonshine. When they choose to be more careful, these cosmologists might say “Of course we don’t know for sure, but…” Which is true, but it’s stronger than that: the truth is, we have no good reasons to believe that those statements are actually true, and some pretty good reasons to doubt them.

I’m not saying anything avant-garde here. Just pointing out that all of these traditional statements about the Big Bang are made within the framework of classical general relativity, and we know that this framework isn’t right. Classical GR convincingly predicts the existence of singularities, and our universe seems to satisfy the appropriate conditions to imply that there is a singularity in our past. But singularities are just signs that the theory is breaking down, and has to be replaced by something better. The obvious choice for “something better” is a sensible theory of quantum gravity; but even if novel classical effects kick in to get rid of the purported singularity, we know that something must be going on other than the straightforward GR story.

There are two tacks you can take here. You can be specific, by offering a particular model of what might replace the purported singularity. Or you can be general, trying to reason via broad principles to argue about what kinds of scenarios might ultimately make sense.

Many scenarios have been put forward in the “specific” category. We have of course the “quantum cosmology” program, which tries to write down a wavefunction of the universe; the classic example is the paper by Hartle and Hawking. There have been many others, including recent investigations within loop quantum gravity. Although this program has led to some intriguing results, the silent majority of physicists seems to believe that there are too many unanswered questions about quantum gravity to take seriously any sort of head-on assault on this problem. There are conceptual puzzles: at what point does spacetime make the transition from quantum to classical? And there are technical issues: do we really think we can accurately model the universe with only a handful of degrees of freedom, crossing our fingers and hoping that unknown ultraviolet effects don’t completely change the picture? It’s certainly worth pursuing, but very few people (who are not zero-gravity tourists) think that we already understand the basic features of the wavefunction of the universe.

At a slightly less ambitious level (although still pretty darn ambitious, as things go), we have attempts to “smooth out” the singularity in some semi-classical way. Aguirre and Gratton have presented a proof by construction that such a universe is conceivable; essentially, they demonstrate how to take an inflating spacetime, cut it near the beginning, and glue it to an identical spacetime that is expanding in the opposite direction of time. This can either be thought of as a universe in which the arrow of time reverses at some special midpoint, or (by identifying events on opposite sides of the cut) as a one-way spacetime with no beginning boundary. In a similar spirit, Gott and Li suggest that the universe could “create itself,” springing to life out of an endless loop of closed timelike curves. More colorfully, “an inflationary universe gives rise to baby universes, one of which turns out to be itself.”

And of course, you know that there are going to be ideas based on string theory. For a long time Veneziano and collaborators have been studying what they dub the pre-Big-Bang scenario. This takes advantage of the scale-factor duality of the stringy cosmological field equations: for every cosmological solution with a certain scale factor, there is another one with the inverse scale factor, where certain fields are evolving in the opposite direction. Taken literally, this means that very early times, when the scale factor is nominally small, are equivalent to very late times, when the scale factor is large! I’m skeptical that this duality survives to low-energy physics, but the early universe is at high energy, so maybe that’s irrelevant. A related set of ideas has been advanced by Steinhardt, Turok, and collaborators, first as the ekpyrotic scenario and later as the cyclic universe scenario. Both take advantage of branes and extra dimensions to try to follow cosmological evolution right through the purported Big Bang singularity; in the ekpyrotic case, there is a unique turnaround point, whereas in the cyclic case there are an infinite number of bounces stretching endlessly into the past and the future.

Personally, I think that the looming flaw in all of these ideas is that they take the homogeneity and isotropy of our universe too seriously. Our observable patch of space is pretty uniform on large scales, it’s true. But to simply extrapolate that smoothness infinitely far beyond what we can observe is completely unwarranted by the data. It might be true, but it might equally well be hopelessly parochial. We should certainly entertain the possibility that our observable patch is dramatically unrepresentative of the entire universe, and see where that leads us.


Boltzmann’s Anthropic Brain

A recent post of Jen-Luc’s reminded me of Huw Price and his work on temporal asymmetry. The problem of the arrow of time — why is the past different from the future, or equivalently, why was the entropy in the early universe so much smaller than it could have been? — has attracted physicists’ attention (although not as much as it might have) ever since Boltzmann explained the statistical origin of entropy over a hundred years ago. It’s a deceptively easy problem to state, and correspondingly difficult to address, largely because the difference between the past and the future is so deeply ingrained in our understanding of the world that it’s too easy to beg the question by somehow assuming temporal asymmetry in one’s purported explanation thereof. Price, an Australian philosopher of science, has made a specialty of uncovering the hidden assumptions in the work of numerous cosmologists on the problem. Boltzmann himself managed to avoid such pitfalls, proposing an origin for the arrow of time that did not secretly assume any sort of temporal asymmetry. He did, however, invoke the anthropic principle — probably one of the earliest examples of the use of anthropic reasoning to help explain a purportedly-finely-tuned feature of our observable universe. But Boltzmann’s anthropic explanation for the arrow of time does not, as it turns out, actually work, and it provides an interesting cautionary tale for modern physicists who are tempted to travel down that same road.

The Second Law of Thermodynamics — the entropy of a closed system will not spontaneously decrease — was understood well before Boltzmann. But it was a phenomenological statement about the behavior of gasses, lacking a deeper interpretation in terms of the microscopic behavior of matter. That’s what Boltzmann provided. Pre-Boltzmann, entropy was thought of as a measure of the uselessness of arrangements of energy. If all of the gas in a certain box happens to be located in one half of the box, we can extract useful work from it by letting it leak into the other half — that’s low entropy. If the gas is already spread uniformly throughout the box, anything we could do to it would cost us energy — that’s high entropy. The Second Law tells us that the universe is winding down to a state of maximum uselessness.

Boltzmann suggested that the entropy was really counting the number of ways we could arrange the components of a system (atoms or whatever) so that it really didn’t matter. That is, the number of different microscopic states that were macroscopically indistinguishable. (If you’re worried that “indistinguishable” is in the eye of the beholder, you have every right to be, but that’s a separate puzzle.) There are far fewer ways for the molecules of air in a box to arrange themselves exclusively on one side than there are for the molecules to spread out throughout the entire volume; the entropy is therefore much higher in the latter case than the former. With this understanding, Boltzmann was able to “derive” the Second Law in a statistical sense — roughly, there are simply far more ways to be high-entropy than to be low-entropy, so it’s no surprise that low-entropy states will spontaneously evolve into high-entropy ones, but not vice-versa. (Promoting this sensible statement into a rigorous result is a lot harder than it looks, and debates about Boltzmann’s H-theorem continue merrily to this day.)

Boltzmann’s understanding led to both a deep puzzle and an unexpected consequence. The microscopic definition explained why entropy would tend to increase, but didn’t offer any insight into why it was so low in the first place. Suddenly, a thermodynamics problem became a puzzle for cosmology: why did the early universe have such a low entropy? Over and over, physicists have proposed one or another argument for why a low-entropy initial condition is somehow “natural” at early times. Of course, the definition of “early” is “low-entropy”! That is, given a change in entropy from one end of time to the other, we would always define the direction of lower entropy to be the past, and higher entropy to be the future. (Another fascinating but separate issue — the process of “remembering” involves establishing correlations that inevitably increase the entropy, so the direction of time that we remember [and therefore label “the past”] is always the lower-entropy direction.) The real puzzle is why there is such a change — why are conditions at one end of time so dramatically different from those at the other? If we do not assume temporal asymmetry a priori, it is impossible in principle to answer this question by suggesting why a certain initial condition is “natural” — without temporal asymmetry, the same condition would be equally natural at late times. Nevertheless, very smart people make this mistake over and over, leading Price to emphasize what he calls the Double Standard Principle: any purportedly natural initial condition for the universe would be equally natural as a final condition.

The unexpected consequence of Boltzmann’s microscopic definition of entropy is that the Second Law is not iron-clad — it only holds statistically. In a box filled with uniformly-distributed air molecules, random motions will occasionally (although very rarely) bring them all to one side of the box. It is a traditional undergraduate physics problem to calculate how often this is likely to happen in a typical classroom-sized box; reassuringly, the air is likely to be nice and uniform for a period much much much longer than the age of the observable universe.
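That undergraduate problem goes like this: each of N molecules is independently on the left or right half of the box, so the chance of catching all of them on one side at a given instant is 2 × (1/2)^N. With N ~ 10^27 for a classroom-sized box (an assumed round number), the numbers are too small for floating point, so stay in logarithms:

```python
import math

# Probability that all N molecules fluctuate onto one side of the box:
# p = 2 * (1/2)**N. For N ~ 10**27 (illustrative), work in log10.
N = 10**27

log10_p = math.log10(2) - N * math.log10(2)
print(f"P(all on one side) ~ 10^{log10_p:.3g}")

# Even checking once per nanosecond (~3.15e16 checks per year),
# the expected wait before seeing it once, in years:
log10_wait_years = -log10_p - math.log10(1e9 * 3.15e7)
print(f"expected wait ~ 10^{log10_wait_years:.3g} years")
```

The wait is around 10^(3×10^26) years, against a universe roughly 10^10 years old, so "reassuringly" is an understatement.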

Faced with the deep puzzle of why the early universe had a low entropy, Boltzmann hit on the bright idea of taking advantage of the statistical nature of the Second Law. Instead of a box of gas, think of the whole universe. Imagine that it is in thermal equilibrium, the state in which the entropy is as large as possible. By construction the entropy can’t possibly increase, but it will tend to fluctuate, every so often diminishing just a bit and then returning to its maximum. We can even calculate how likely the fluctuations are; larger downward fluctuations of the entropy are much (exponentially) less likely than smaller ones. But eventually every kind of fluctuation will happen.

Entropy Fluctuations

You can see where this is going: maybe our universe is in the midst of a fluctuation away from its typical state of equilibrium. The low entropy of the early universe, in other words, might just be a statistical accident, the kind of thing that happens every now and then. On the diagram, we are imagining that we live either at point A or point B, in the midst of the entropy evolving between a small value and its maximum. It’s worth emphasizing that A and B are utterly indistinguishable. People living in A would call the direction to the left on the diagram “the past,” since that’s the region of lower entropy; people living at B, meanwhile, would call the direction to the right “the past.”

Boltzmann’s Anthropic Brain


The future of the universe

This month’s provocative results on the acceleration of the universe raise an interesting issue: what can we say about our universe’s ultimate fate? In the old days (like, when I was in grad school) we were told a story that was simple, compelling, and wrong. It went like this: matter acts to slow down the expansion of the universe, and also to give it spatial curvature. If there is enough matter, space will be positively curved (like a sphere) and will eventually collapse into a Big Crunch. If there is little matter, space will be negatively curved (like a saddle) and expand forever. And if the matter content is just right, space will be flat and will just barely expand forever, slowing down all the while.
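The old story amounts to a simple lookup from the matter density (in units of the critical density) to curvature and fate. A toy function makes the logic explicit — bearing in mind that, as the next paragraph explains, this story breaks down once dark energy enters the picture:

```python
def fate_matter_only(omega_m):
    """The old, matter-only story: Omega_m is the matter density in units
    of the critical density. This mapping from curvature to fate fails
    once dark energy is included."""
    if omega_m > 1:
        return "positively curved: recollapses in a Big Crunch"
    if omega_m < 1:
        return "negatively curved: expands forever"
    return "flat: just barely expands forever"

print(fate_matter_only(0.3))  # negatively curved: expands forever
```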

[Figure: Fate of the universe]

This story is wrong in a couple of important ways. First and foremost, the assumption that the only important component of the universe is “matter” (or radiation, for that matter) is unduly restrictive. Now that we think that there is dark energy, the simple relation between spatial curvature and the ultimate fate of the universe is completely out the window. We can have positively curved universes that expand forever, negatively curved ones that recollapse, or what have you. (See my little article on the cosmological constant.) To determine the ultimate fate of the universe, you need to know both how much dark energy there is, and how it changes with time. (Mark has also written about this with Dragan Huterer and Glenn Starkman.)

If we take current observations at face value, and make the economical assumption that the dark energy is strictly constant in density, all indications are that the universe is going to expand forever, never to recollapse. If any of your friends go on a trip that extends beyond the Hubble radius (about ten billion light-years), kiss them goodbye, because they won’t ever be able to return — the space in between you and them will expand so quickly that they couldn’t get back to you, even if they were moving at the speed of light. Meanwhile, stars will die out and eventually collapse to black holes. The black holes will ultimately evaporate, leaving nothing in the universe but an increasingly dilute and cold gas of particles. A desolate, quiet, and lonely universe.
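The "about ten billion light-years" figure is the Hubble radius, c/H0. A quick order-of-magnitude check, using an illustrative H0 = 70 km/s/Mpc (the post doesn't quote a specific value):

```python
# Hubble radius c/H0, with an assumed H0 = 70 km/s/Mpc.
c_km_s = 299_792.458     # speed of light in km/s
H0 = 70.0                # Hubble constant in km/s/Mpc (illustrative)
Mpc_in_ly = 3.2616e6     # one megaparsec in light-years

hubble_radius_Mpc = c_km_s / H0
hubble_radius_Gly = hubble_radius_Mpc * Mpc_in_ly / 1e9
print(round(hubble_radius_Gly, 1))  # ~14 Gly, the same order as "about ten billion"
```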

However, if the dark energy density actually increases with time, as it does with phantom energy, a completely new possibility presents itself: not a Big Crunch, but a Big Rip. Explored by McInnes and by Robert Caldwell, Marc Kamionkowski, and Nevin Weinberg, the Big Rip happens when the universe isn’t just accelerating, but super-accelerating — i.e., the rate of acceleration is perpetually increasing. If that happens, all hell breaks loose. The super-accelerated expansion of spacetime exerts a stretching force on all the galaxies, stars, and atoms in the universe. As it increases in strength, every bound structure in the universe is ultimately ripped apart. Eventually we hit a singularity, but a very different one than in the Big Crunch picture: rather than being squashed together, matter is torn to bits and scattered to infinity in a finite amount of time. Some observations, including the new gamma-ray-burst results, show a tiny preference for an increasing dark energy density; but given the implications of such a result, they are far from meeting the standard for convincing anyone that we’ve confidently measured any evolution of the dark energy at all.
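Caldwell, Kamionkowski, and Weinberg give a rough estimate of the time remaining before a Big Rip for a constant equation of state w < −1: t_rip − t_0 ≈ (2/3)|1 + w|⁻¹ H0⁻¹ (1 − Ω_m)^(−1/2). A sketch of that estimate, with illustrative parameter values (w = −1.5, Ω_m = 0.3, H0 = 70 km/s/Mpc are assumptions, not numbers from the post):

```python
import math

def time_to_big_rip_gyr(w, omega_m, H0_km_s_Mpc=70.0):
    """Rough time from now until the Big Rip, in Gyr, for constant
    equation-of-state w < -1, following the Caldwell-Kamionkowski-Weinberg
    estimate: t_rip - t_0 ~ (2/3) |1+w|^-1 H0^-1 (1 - Omega_m)^(-1/2)."""
    km_per_Mpc = 3.0857e19
    seconds_per_gyr = 3.156e16
    hubble_time_gyr = km_per_Mpc / H0_km_s_Mpc / seconds_per_gyr  # 1/H0 in Gyr
    return (2.0 / 3.0) / abs(1.0 + w) * hubble_time_gyr / math.sqrt(1.0 - omega_m)

print(round(time_to_big_rip_gyr(-1.5, 0.3)))  # a couple of tens of Gyr
```

Note how the answer blows up as w approaches −1 from below: the closer the dark energy is to a pure cosmological constant, the further away any Rip would be — which is part of why distinguishing w = −1 from w slightly less than −1 observationally matters so much.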

So, it sounds like we’d like to know whether this Big Rip thing is going to happen, right? Yes, but there’s bad news: we don’t know if we’re headed for a Big Rip, and no set of cosmological observations will ever tell us. The point is, observations of the past and present are never by themselves sufficient to predict the future. That can only be done within the framework of a theory in which we have confidence. We can say that the universe will hit a Big Rip in so-and-so many years if the dark energy is increasing in density at a certain rate and we are sure that it will continue to increase at that rate. But how can we ever be sure of what the dark energy will do twenty trillion years from now? Only by actually understanding the nature of the dark energy can we extrapolate from present behavior to the distant future. In fact, it’s perfectly straightforward (and arguably more natural) for a phase of super-accelerated expansion to last for a while, before settling down to a more gently accelerated phase, avoiding the Big Rip entirely. Truth is, we just don’t know. This is one of those problems that ineluctably depends on progress in both observation and theory.

