Category: Science

  • From Eternity to Here: The Quest for the Ultimate Theory of Time

    You know what the world really needs? A good book about time. Google tells me there are only about one and a half million such books right now, but I think you’ll agree that one more really good one is called for.

    So I’m writing one. From Eternity to Here: The Quest for the Ultimate Theory of Time is a popular-level book on time, entropy, and their connections to cosmology, to be published by Dutton. Hopefully before the end of this year! I’ve been plugging away at it, and have shifted almost into full-time book-writing mode now. (Note to collaborators: I promise not to abandon you entirely.)

    I have my own idiosyncratic ideas about how to account for the arrow of time in cosmology, but those are going to be confined to passing mentions in the last chapter. Mostly I’ll be discussing basic ideas that most experts agree are true, or true ideas that everyone should agree on even if perhaps they don’t quite yet, or the implications of those ideas for knotty questions in cosmology. Hopefully we can at least shift the conventional wisdom a little bit.

    Naturally there is a web page with some details. Here is the tentative table of contents, although I’ve been cutting and pasting pretty vigorously, so who knows how it will end up looking once all is said and done. One thing is for sure, some of these chapter titles need sprucing up.

    1. Prologue

    Part One: Time, Experience, and the Universe

    1. The Heavy Hand of Entropy
    2. The Beginning and End of Time
    3. The Past is Present Memory

    Part Two: Einstein’s Universe

    1. Time is Personal
    2. Time is Flexible
    3. Looping Through Time

    Part Three: Distinguishing the Past from the Future

    1. Running Backwards
    2. Entropy and Disorder
    3. Information and Life
    4. Recurrent Nightmares
    5. Quantum Time

    Part Four: Natural and Unnatural Spacetimes

    1. Black Holes
    2. The Life of the Universe
    3. The Past Through Tomorrow
    4. Epilogue: From the Universe to the Kitchen
      Appendix:  Math

    If anyone out there is friends with Oprah, maybe drop her a line suggesting that this would make a good book-club choice. I hear that’s helpful when it comes to sales.

    Update: And now you can buy it.

  • Where Does the Entropy Go?

    Gravity is a weak force, which makes it extremely difficult to do actual experiments (or perform astronomical observations) that would give us any detailed, up-close-and-personal data about the behavior of quantum gravity. We should be thankful, therefore, that we’ve been able to learn as much as we have about quantum gravity (and we do know some things) just by sitting in our chairs and doing thought experiments, constrained only by the basic principles of general relativity and quantum mechanics. Undoubtedly the most prolific thought-experiment laboratories have been black holes. In particular, Hawking’s discovery that black holes radiate and have entropy has driven an enormous amount of research, and some of it has actually been productive! One of the highlights was certainly the calculation in 1996 by Strominger and Vafa, who used some tricks from string theory to actually count the number of quantum states hidden in a black hole, in a way that would have made Boltzmann proud, and come up with an answer that matched Hawking’s formula precisely.

    There are still puzzles, however, as you might guess. Foremost among them is “How does the information get out?” An increasing number of physicists believe that the evaporation of black holes conserves information, but they don’t know precisely how the details of the state which created the black hole get preserved and then encoded in the outgoing Hawking radiation.

    A lesser-known puzzle, which many people don’t even consider a puzzle, hearkens back to a 1994 paper by Stephen Hawking, Gary Horowitz, and Simon Ross. They were trying to use a particular technique called Euclidean Quantum Gravity (in which you temporarily forget that time is any different from space) to calculate the rates at which different things could happen, when they stumbled across a puzzle. They calculated the entropy of black holes with electric charge, and in particular of extremal black holes — configurations where all of the energy really comes from the electric field itself, none from any purported mass that might have fallen into the black hole. And for an extremal black hole, they found an unusual answer: zero! That was a surprise, because it is not what Hawking’s original formula (entropy proportional to the area of the event horizon) gives you for such a situation.

    Most people (including, I think, the authors) believe that this result is not trustworthy, and reflects a breakdown of the particular method used, rather than a deep truth about extremal black holes. But in a field where actual data is thin on the ground, it’s worth keeping puzzles in mind, hoping that some day they will teach you something.

    Matt Johnson, Lisa Randall and I just submitted a paper in which we revisit this puzzle. We suggest that maybe it’s not just a simple breakdown of the methods of Euclidean quantum gravity, but perhaps something interesting is going on.

    Extremal limits and black hole entropy
    Authors: Sean M. Carroll, Matthew C. Johnson, Lisa Randall

    Abstract: Taking the extremal limit of a non-extremal Reissner-Nordström black hole (by externally varying the mass or charge), the region between the inner and outer event horizons experiences an interesting fate — while this region is absent in the extremal case, it does not disappear in the extremal limit but rather approaches a patch of $AdS_2 \times S^2$. In other words, the approach to extremality is not continuous, as the non-extremal Reissner-Nordström solution splits into two spacetimes at extremality: an extremal black hole and a disconnected $AdS$ space. We suggest that the unusual nature of this limit may help in understanding the entropy of extremal black holes.

    Let’s unpack this a little bit. (more…)

  • Richard Feynman on Boltzmann Brains

    The Boltzmann Brain paradox is an argument against the idea that the universe around us, with its incredibly low-entropy early conditions and the resulting arrow of time, is simply a statistical fluctuation within some eternal system that spends most of its time in thermal equilibrium. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)
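    The quantitative heart of the argument is Boltzmann’s observation that the probability of a spontaneous downward fluctuation in entropy is exponentially suppressed by the size of the dip. A minimal sketch of that bookkeeping, with entropy values that are my own loose illustrative orders of magnitude rather than figures from the argument itself:

```python
# Boltzmann's formula says the probability of a spontaneous fluctuation
# that lowers the entropy by Delta-S scales as exp(-Delta S / k_B). The
# relative likelihood of two candidate fluctuations therefore depends only
# on the difference of their entropy costs -- and a lone brain is a vastly
# smaller dip than a whole galaxy. The entropy values below are loose
# illustrative guesses (in units of k_B), not numbers from the post.

S_DIP_BRAIN = 1e45     # rough entropy cost of fluctuating a single brain
S_DIP_GALAXY = 1e70    # rough entropy cost of fluctuating a whole galaxy

# ln[ P(brain) / P(galaxy) ] = S_DIP_GALAXY - S_DIP_BRAIN: positive and
# enormous, so random fluctuations overwhelmingly favor the bare brain.
log_prob_ratio = S_DIP_GALAXY - S_DIP_BRAIN
```

    However uncertain the individual entropy estimates, the conclusion only depends on the galaxy’s dip dwarfing the brain’s, which is why the argument is so robust.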

    The first invocation of an argument along these lines, as far as I know, came from Sir Arthur Eddington in 1931. But it’s a fairly straightforward argument, once you grant the assumptions (although there remain critics). So I’m sure that any number of people have thought along similar lines, without making a big deal about it.

    One of those people, I just noticed, was Richard Feynman. At the end of his chapter on entropy in the Feynman Lectures on Physics, he ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws.

    So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?

    Feynman, following the same logic as Boltzmann, contemplates the possibility that we’re all just a statistical fluctuation.

    One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again.

    Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past. In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life.

    But, of course, it doesn’t really suffice as an explanation for the real universe in which we live, for the same reasons that Eddington gave — the Boltzmann Brain argument.

    We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it.

    After pointing out that we do, in fact, see order (low entropy) in new places all the time, he goes on to emphasize the cosmological origin of the Second Law and the arrow of time:

    We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future.

    And he closes by noting that our understanding of the early universe will have to improve before we can answer these questions.

    This one-wayness is interrelated with the fact that the ratchet [a model irreversible system discussed earlier in the chapter] is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding.

    We’re still working on that.

  • Have a Thermodynamically Consistent Christmas

    The important event this Dec. 25 isn’t celebrating the birthday of Isaac Newton or other historical figures, it’s the release of The Curious Case of Benjamin Button, a David Fincher film starring Brad Pitt and based on the story by F. Scott Fitzgerald. As you all know, it’s a story based on the device of incompatible arrows of time: Benjamin is born old and ages backwards into youth (physically, not mentally), while the rest of the world behaves normally. Some have pretended that scientific interest in the movie centers on issues of aging and longevity, but of course it’s thermodynamics and entropy that take center stage. While entropy increases and the Second Law is respected in the rest of the world, Benjamin Button’s body seems to be magically decreasing in entropy. (Which does not, strictly speaking, violate the Second Law, since his body isn’t a closed system, but it sure is weird.)

    Benjamin Button

    It’s a great opportunity to address an old chestnut: why do arrows of time have to be compatible? Why can’t we imagine ever discovering another galaxy in which entropy increased toward (what we call) the past instead of the future, as in Greg Egan’s story, “The Hundred Light-Year Diary”? Or why can’t a body age backwards in time?

    First we need to decide what the hell we mean. Let’s put aside for the moment sticky questions about collapsing wave functions, and presume that the fundamental laws of physics are perfectly reversible. In that case, given the precise state of the entire universe (or any closed system) at any one moment in time, we can use those laws to determine what the state will be at any future time, or what it was at any past time. That’s just how awesome the laws of physics are. (Of course we don’t know the laws, nor the state of the entire universe, nor could we actually carry out the relevant calculation even if we did, but we’re doing thought experiments here.) We usually take that time to be the “initial” time, but in principle we could choose any time — and in the present context, when we’re worried about arrows of time pointing in different directions, there is no time that is initial for everything. So what we mean is: Why is it difficult/impossible to choose a state of the universe with the property that, as we evolve it forward in time, some parts of it have increasing entropy and some parts have decreasing entropy?

    Notice that we can choose conditions that reverse the arrow of time for some individual isolated system. Entropy counts the “typicalness” of the system’s microscopic state, from the point of view of macroscopic observers. And it tends to go up, because there are many more ways to be high-entropy than low-entropy. Consider a box of gas, in which the gas molecules are (by some means) all bunched together in the middle of the box, in a low-entropy configuration. If we just let it evolve, the molecules will move around, colliding with each other and with the walls of the box, and ending up (with overwhelming probability) in a much higher-entropy configuration.

    box-gas-1.jpg

    It’s easy to convince ourselves that there exist configurations from which the entropy would spontaneously go down. For example, take the state of the above box of gas at any moment after it has become high-entropy, and consider the state in which all of the molecules have exactly the same positions but precisely reversed velocities. From there, the motion of the molecules will precisely re-trace the path that they took from the previous low-entropy state. To an external observer, it will look as if the entropy is spontaneously decreasing. (Of course we know that it took a lot of work to so precisely reverse all of those velocities, and the process of doing so increased the entropy of the wider world, so the Second Law is safe.)

    box-gas-2.jpg
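    The velocity-reversal trick is easy to demonstrate in a toy model. Below is a sketch (my own illustrative construction, not anything from the post) of particles hopping around a ring with exact integer arithmetic: a bunched, low-entropy start spreads out, and flipping every velocity makes the evolution retrace itself perfectly back to the ordered state.

```python
import math
import collections

# Toy reversible "gas": particles on a ring of discrete sites, each with a
# fixed integer velocity, updated by x -> (x + v) mod L each step. Because
# the arithmetic is exact, flipping every velocity and evolving again
# retraces the history perfectly. All numbers are illustrative choices.

L_SITES = 1000
N_STEPS = 523

def step(state):
    return [((x + v) % L_SITES, v) for x, v in state]

def coarse_entropy(state, nbins=10):
    """Shannon entropy of coarse-grained bin occupancies -- a crude
    stand-in for the thermodynamic entropy of the distribution."""
    counts = collections.Counter((x * nbins) // L_SITES for x, _ in state)
    n = len(state)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Low-entropy start: 200 particles bunched into one stretch of the ring.
state = [(500 + i, 3 * i % 17 - 8) for i in range(200)]
s_start = coarse_entropy(state)

for _ in range(N_STEPS):          # evolve forward: the bunch spreads out
    state = step(state)
s_spread = coarse_entropy(state)  # higher than s_start

state = [(x, -v) for x, v in state]   # reverse every velocity exactly...
for _ in range(N_STEPS):              # ...and the gas "un-mixes" itself
    state = step(state)
# state is now back in the original bunched (low-entropy) configuration
```

    The coarse-grained entropy rises as the bunch disperses, then falls back to its initial value after the reversal, just as in the thought experiment.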

    But a merely reversed arrow of time is not the point; we want incompatible arrows of time. That means entropy increasing in some part of the universe while it is decreasing in others.

    At first it would seem simple enough. Take two boxes, and prepare one of them in the low-entropy state with gas in the middle, and the other in the delicately constructed state with reversed velocities. (That is, the two boxes on the left side of the two figures above.) The entropy will go up in one box, and down in the other, right? That’s true, but it’s kind of trivial. We need systems that interact, so that one system can somehow communicate with the other.

    And that ruins everything, of course. Imagine we started with these two boxes, one of which had an entropy that was ready to go up and the other ready to go down. But now we introduced a tiny coupling — say, a few photons moving between the boxes, bouncing off a molecule in one before returning to the other. Certainly the interaction of Benjamin Button’s body with the rest of the world is much stronger than that. (Likewise Egan’s time-reversed galaxy, or Martin Amis’s narrator in Time’s Arrow.)

    That extra little interaction will slightly alter the velocities of the molecules with which it interacts. (Momentum is conserved, so it has no choice.) That’s no problem for the box that starts with low entropy, as there is no delicate tuning required to make the entropy go up. But it completely ruins our attempt to set up conditions in the other box so that entropy goes down. Just a tiny change in velocity will quickly propagate through the gas, as one affected molecule hits another molecule, and then they hit two more, and so on. It was necessary for all of the velocities to be very precisely aligned to make the gas miraculously conspire to decrease its entropy, and any interaction we might want to introduce will destroy the required conspiracy. The entropy in the first box will very sensibly go up, while the entropy in the other will just stay high. You can’t have incompatible arrows of time among interacting subsystems of the universe.
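    The fragility of the reversed state can be illustrated with a reversible chaotic map, a stand-in for molecular dynamics. The sketch below uses the Arnold cat map on a grid (my choice of toy system and parameters, not anything from the post): undoing the evolution exactly recovers the ordered starting cluster, but nudging a single coordinate by one grid unit before reversing destroys that point’s return. The cat map has no collisions, so the disturbance doesn’t propagate between points as it would in a real gas, but it shows the exponential sensitivity that makes the required conspiracy so delicate.

```python
# Arnold cat map on an N x N integer grid: reversible and chaotic.
# forward is the matrix [[2,1],[1,1]] mod N; backward is its exact inverse.
# Illustrative parameters throughout.

N = 101  # grid size (arbitrary choice)

def forward(p):
    x, y = p
    return ((2 * x + y) % N, (x + y) % N)

def backward(p):
    x, y = p
    return ((x - y) % N, (-x + 2 * y) % N)

# "Low-entropy" start: a small clustered block of points.
cluster = [(x, y) for x in range(10, 15) for y in range(10, 15)]

T = 8
evolved = cluster
for _ in range(T):
    evolved = [forward(p) for p in evolved]   # cluster smears out chaotically

# Exact reversal: every point returns to exactly where it started.
recovered = evolved
for _ in range(T):
    recovered = [backward(p) for p in recovered]

# Now nudge just one point by a single grid unit before reversing.
perturbed = [((x + 1) % N, y) if i == 0 else (x, y)
             for i, (x, y) in enumerate(evolved)]
for _ in range(T):
    perturbed = [backward(p) for p in perturbed]
# The nudged point ends up far from its starting position; the rest return.
```

    The one-unit nudge gets amplified by the map’s chaotic stretching, so the “anti-thermodynamic” return fails for that point, which is the essence of why any coupling between the boxes spoils the entropy-decreasing conspiracy.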

  • Dark Energy: No Longer a Surprise

    A bit of science news: Alexey Vikhlinin and collaborators have used observations from the Chandra X-ray satellite to uncover new evidence for dark energy. (More info here, and the paper is here.) In particular, they simply count the number of galaxy clusters with various masses at various redshifts, and compare with the predictions of models with and without dark energy. If there were no dark energy, matter would keep clustering on larger and larger scales as the universe expanded, making new clusters all the way. But if dark energy eventually takes over, the creation of new clusters begins to turn off, as the dark energy provides an extra push of expansion beneath the feet of the particles that would like to cluster together, preventing them from doing so.
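    The shut-off of structure growth is quantitative: in the standard linear theory, the growth factor obeys D(a) ∝ H(a) ∫ da′/(a′H(a′))³, which grows like a in a matter-only universe but saturates once dark energy dominates. Here is a sketch of that integral with round parameter values (Ω_m ≈ 0.25, matching the post’s “about 75% dark energy”; this is textbook linear growth, not the Vikhlinin et al. analysis):

```python
import math

# Linear growth factor D(a) in a flat matter + Lambda universe, via the
# standard integral solution  D(a) ∝ H(a) * ∫_0^a da' / (a' H(a'))^3.
# Parameters are round illustrative values, not a fit to the Chandra data.

def hubble(a, omega_m, omega_de):
    """H(a)/H0 for a flat universe with matter and a cosmological constant."""
    return math.sqrt(omega_m / a**3 + omega_de)

def growth(a, omega_m, omega_de, nsteps=10000):
    """Unnormalized linear growth factor D(a), by midpoint-rule quadrature."""
    integral, da = 0.0, a / nsteps
    for i in range(1, nsteps + 1):
        ai = (i - 0.5) * da
        integral += da / (ai * hubble(ai, omega_m, omega_de))**3
    return hubble(a, omega_m, omega_de) * integral

# Matter-only (Einstein-de Sitter): perturbations keep growing, D ∝ a,
# so growth between a = 0.5 and a = 1 is a factor of 2.
ratio_eds = growth(1.0, 1.0, 0.0) / growth(0.5, 1.0, 0.0)

# With dark energy, growth is suppressed at late times, so the same
# interval produces distinctly less than a factor of 2.
ratio_lcdm = growth(1.0, 0.25, 0.75) / growth(0.5, 0.25, 0.75)
```

    The suppressed late-time ratio in the dark-energy case is the same physics as the deficit of massive clusters at low redshift in the data.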

    Just to guide the eye, here are plots of the number of clusters (vertical axis) as a function of their mass (horizontal axis) at two different redshift ranges — near is on top, far is at the bottom. The left plot, which fits the data, has an appreciable cosmological constant; the right one, which doesn’t, doesn’t. The graphs are a bit confusing, because dark energy affects not only the growth of structure, but also the relationship between redshift and distance. But the point is that dark energy kills off cluster formation at late times.

    cluster-mass-functions.jpg

    You may ask the question: so? Didn’t we find dark energy ten years ago, and haven’t we confirmed its existence several times since? Yes, and yes. In a sense, this result doesn’t teach us anything we didn’t already know.

    But we should resist the temptation to become too blase about the whole thing. (Notwithstanding that I’ve been guilty myself.) On the one hand, this is a new manifestation of dark energy: a dynamical effect on the evolution of matter, rather than simply a background effect on the expansion of the universe. This is of great interest to astronomers, and should help to constrain alternatives to the now-standard picture. But on the other, more important hand, it remains astonishing that we have this preposterous model that keeps fitting the data. We shouldn’t lose our sense of wonder that we’re able to understand as much of the universe as we do, or that the reality of cosmology is so much more interesting than simple theoretical models of the past would have predicted.

    chandra-w-omegax.jpg Here is the graph from the paper showing limits on the equation-of-state parameter, w. Horizontal axis is the fraction of dark energy (about 75%, eventually I’ll have to stop using 70%), vertical axis is w (about -1, plus or minus 0.1). Looks pretty much like a cosmological constant (w=-1) from here, although there is obviously wiggle room.

  • If Aliens Decided to Destroy Humanity, Could We Blame Them?

    Friday was the opening of The Day the Earth Stood Still starring Keanu Reeves and Jennifer Connelly; it’s director Scott Derrickson’s remake of the 1951 Robert Wise classic. The previous Friday witnessed our panel discussion at Caltech about how science intersected with the film. Reviews thus far (of both the movie and the panel) have been mixed; personally, I thoroughly enjoyed the panel and thought the movie rose to the level of “pretty good.” (Lost amidst the excitement of aliens and CGI was the excellent acting in the film, including a great performance by Jaden Smith in the role of the petulant stepson.) But it could have been great.

    panel.jpg

    Derrickson refers to his own film as a “popcorn movie with interesting ideas,” and there is certainly nothing wrong with that. The original movie was extremely compelling because it managed to be gripping and suspenseful as a narrative, while also dealing with some very big ideas. In 1951 we had just entered the atomic age, the Cold War was starting, and the Space Race was about to begin (Sputnik was 1957). Moreover, radio astronomy was just taking off, and people were beginning to talk semi-seriously about the search for extraterrestrial intelligence; Fermi introduced his celebrated paradox (“Where are they?”) in 1950. The time was right to put everything together in a compelling movie.

    The threat of nuclear war hasn’t actually gone away — the chance of a nuclear weapon being used within the next decade is probably higher than it was in the 1970’s or 80’s (although perhaps not the 50’s or 60’s). But now we also have the danger of environmental catastrophe, which was alluded to in the movie. But the remake of The Day the Earth Stood Still basically sidestepped questions of international cooperation, which were crucial to the original version. The heady mix of ideas and drama that was waiting to be tapped in 1951 isn’t quite as obvious today.

    Gort
    A huge problem with a remake like this is that the 2008 movie-going audience comes with a very different set of expectations than the 1951 audience would have. We are very used to giant special-effects extravaganzas in which aliens want to destroy the earth, so the conceit itself is not sufficient to keep us interested. And there isn’t that much tension in the question of how the plot will be resolved; I hope I’m not giving away any spoilers by saying that humanity is not destroyed. We know that humanity is going to be saved (although it would be something if it weren’t), so we’re not on the edge of our seat wondering about that. There might be some tension in the particular method by which the saving is accomplished; the original did a great job on that score with the iconic robot Gort, and without giving away anything about the remake I’ll just say that I don’t think they managed to be quite as suspenseful this time.

    But there remains one form of suspense that I thought the film could have taken much greater advantage of than it actually did: the questions of why aliens might want to wipe us out, and whether humanity is worth saving in the first place. Judgmental aliens are a staple of science fiction, but how realistic are they?

    To put things in perspective, the universe is 14 billion years old and the Solar System is about five billion years old. Let’s be conservative and imagine that life couldn’t arise around first-generation (Pop II or Pop III) stars, since the abundance of “metals” (to an astronomer, any element heavier than hydrogen or helium) was practically nil. You need at least a second-generation star, formed in a region seeded with the important heavier elements by prior supernova explosions. But nevertheless, it’s still easy to imagine that the aliens we might eventually come into contact with come from a planet on which life began a billion or two years earlier than it did on Earth. Now, a billion years ago, we were still struggling with the whole multi-cellularity thing. So we should imagine aliens that have evolved past our current situation by an amount analogous to that by which we have evolved past, say, red algae.

    It’s simply impossible for us to accurately conceive what such aliens might be like. (When Jennifer Connelly’s exobiologist asks Klaatu, the alien who has assumed the shape of Keanu Reeves, what his true form is like, he quite believably replies “It would only frighten you.”) It’s completely plausible to imagine that advanced civilizations routinely leave behind their biological forms to dwell within a computer simulation or some other form of artificial substrate for consciousness. As plausible as anything else, really.

    But if they did pay us a visit, is it plausible to imagine that they would want to wipe us out? Since we have no actual experience on which to base an answer, one option is to look at our own history, as Kathy Bates’s Secretary of Defense does in The Day the Earth Stood Still. The lesson is not cheerful: more powerful civilizations tend to either subjugate less powerful ones, or wipe them out entirely. Okay, you say, but any civilization that is capable of traveling interstellar distances must have figured out how to live peacefully, right?

    Maybe. The problem is, it wouldn’t be a clash of civilizations; more likely, from the aliens’ perspective it would be like the clash of an annoyed homeowner dealing with mildew, or perhaps an infestation of cockroaches if we’re feeling generous. Turning again to experience, human beings are right now causing one of the great mass extinctions in the history of the planet. We could stop killing off other species, but we find that it would slightly cramp our lifestyle to do so, and we decide not to make that sacrifice. True, when we send spaceships to Mars and elsewhere, we are very careful to take steps to ensure that we don’t contaminate any traces of life that might be clinging to the other planet. But clearly, that’s not because we place great value on the continued existence of any one species. Rather, it’s because (to us) any kind of life on another planet would be uniquely interesting. But there’s no reason to believe that we would be all that unique from the perspective of a galaxy-weary alien civilization. They may well have bumped into millions of worlds featuring all sorts of life. If we’re lucky, they might give us the respect that a human being would show an ant colony or a swarm of bees. If we’re lucky.

    This is an area in which science fiction, for all its vaunted imagination, is traditionally quite conservative. With some notable exceptions, we tend to assume that the forms life can take are neatly divided into “intelligent species” and “everyone else,” and we are snugly in the former category, and all intelligent species are roughly equally intelligent and it’s just a matter of time before we get our own seat in the Galactic Parliament. Although SF offers a unique opportunity to examine the way we live as humans in comparison to different ways we might live, the usual answer it gives is that the way we’re living now is pretty much the best we can imagine — alien lifestyles are much more often portrayed as profoundly lacking in some crucial feature of individuality or passion than they are as a real improvement over our current messy situation. We are special because we love our children, or because we are plucky and have so much room for improvement. We voted for Obama, after all. I bet there aren’t many alien civilizations that would have done that!

    So basically, I’m suggesting that this is a film that would have been improved by the addition of a few imaginative philosophical debates. You don’t want to be didactic or tiresome, but those are not necessary qualities of a discussion of deep ideas. If the ideas are interesting enough, they might even improve your box office.

  • Ripples in the Aether

    Prior to Einstein, physicists believed that light waves, like water waves, were ripples in a medium: instead of the ocean, they posited the existence of the luminiferous aether, some form of substance which supported the propagation of electromagnetic waves. If that idea had been true, one would imagine there would be a unique frame of reference in which the aether was at rest, while it was moving in other frames; consequently, the speed of light would depend on one’s motion through the aether. This idea was basically scotched by the Michelson-Morley experiment, which showed that the speed of light was unaffected by the motion of the Earth around the Sun. The idea was eventually superseded by special relativity, although (as with most interesting ideas) some adherents gave up only reluctantly. Indeed, if you had asked Hendrik Antoon Lorentz himself about the meaning of the famous Lorentz transformations he invented, he would not have said “they relate physical quantities measured in different inertial frames”; he would have said “they relate quantities as measured in some moving reference frame to their true values in the rest frame of the aether.”
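    The effect Michelson and Morley were hunting for is easy to estimate. If light moved at speed c through a static aether while the lab drifts through it at speed v, the round-trip time along an interferometer arm parallel to the drift would exceed the perpendicular round trip by roughly (L/c)(v/c)². A sketch of the standard kinematics, using Earth’s orbital speed and an illustrative arm length (not the dimensions of the 1887 apparatus):

```python
import math

# Round-trip light travel times for interferometer arms of length L,
# parallel and perpendicular to a drift of speed V through a hypothetical
# static aether. Arm length is an illustrative value.

C = 299_792_458.0      # speed of light, m/s
V = 30_000.0           # Earth's orbital speed, ~30 km/s
L_ARM = 11.0           # arm length in meters (illustrative)

beta2 = (V / C) ** 2   # (v/c)^2 ~ 1e-8

t_parallel = L_ARM / (C - V) + L_ARM / (C + V)   # out against, back with drift
t_perp = 2 * L_ARM / math.sqrt(C**2 - V**2)      # cross-stream both ways

delta_t = t_parallel - t_perp
# Leading-order prediction: delta_t ~ (L/c) * (v/c)^2, about 4e-16 s here --
# tiny, but within reach of an interferometer comparing the two arms.
approx = (L_ARM / C) * beta2
```

    Michelson and Morley saw no such difference at any orientation or season, which is precisely the null result that doomed the aether as a medium for light.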

    We know a lot more about field theory as well as about relativity these days, so we don’t need to invoke a concept like the aether to explain the propagation of light, and the idea that there is no special preferred frame of rest has been experimentally tested to exquisite precision. But precision, even when exquisite, is never absolute, and important discoveries are often lurking in the margins. So it’s interesting to contemplate the possibility that there really is some kind of field in the universe that defines an absolute standard of rest, within the modern context of low-energy effective field theories. Instead of a light-carrying medium, we are interested in the possibility of a Lorentz-violating vector field — some four-dimensional vector that has a fixed non-zero length and points in some direction at every event in spacetime. But the name “aether” is too good to abandon, so we’ve re-purposed it for modern use.

    A lot of work has gone into exploring the possible consequences and experimental constraints on the idea of an aether field pervading the universe (see reviews by Ted Jacobson or David Mattingly, or Alan Kostelecky’s web page). But the ideas are still relatively new, and there are still questions about whether such models are fundamentally well-defined. Tim Dulaney, Moira Gresham, Heywood Tam and I have been thinking about these issues for a while, and we’ve just come out with two papers presenting what we’ve worked out. Here is the first one:

    Instabilities in the Aether
    Authors: Sean M. Carroll, Timothy R. Dulaney, Moira I. Gresham, Heywood Tam

    Abstract: We investigate the stability of theories in which Lorentz invariance is spontaneously broken by fixed-norm vector “aether” fields. Models with generic kinetic terms are plagued either by ghosts or by tachyons, and are therefore physically unacceptable. There are precisely three kinetic terms that are not manifestly unstable: a sigma model $(\partial_\mu A_\nu)^2$, the Maxwell Lagrangian $F_{\mu\nu}F^{\mu\nu}$, and a scalar Lagrangian $(\partial_\mu A^\mu)^2$. The timelike sigma-model case is well-defined and stable when the vector norm is fixed by a constraint; however, when it is determined by minimizing a potential there is necessarily a tachyonic ghost, and therefore an instability. In the Maxwell and scalar cases, the Hamiltonian is unbounded below, but at the level of perturbation theory there are fewer degrees of freedom and the models are stable. However, in these two theories there are obstacles to smooth evolution for certain choices of initial data.

As the title says, here we’re investigating whether aether theories are stable. That is, when you have the vector field in what you think should be the “vacuum” state, with all of the vectors aligned and nothing jiggling around, can a small perturbation lead to some sort of runaway growth, or would it just oscillate peacefully? If you do get runaway behavior, the theory is unstable, which is bad news for thinking of the theory as a sensible starting point for experimental tests. This is one of the first questions you should ask about any theory, and it’s been investigated quite a bit in the case of aether. But there is a subtlety: because you have violated Lorentz invariance, it’s not enough to check stability in the aether rest frame; you need to do it in every frame. (A perturbation caused by a source moving rapidly in a rocket ship is still a legitimate perturbation.) What we found was that almost all aether theories are unstable in some frame or another. There are just three exceptions, which we called the “sigma model” theory, the “Maxwell” theory, and the “scalar” theory.
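Schematically, the stability check is the standard linearized one: perturb around the aligned configuration with a plane wave and ask whether any frequency picks up a positive imaginary part. Because Lorentz invariance is broken, the dispersion relation looks different in different frames, which is why every frame has to be checked:

```latex
\delta A_\mu(x) \sim \epsilon_\mu \, e^{i(\vec{k}\cdot\vec{x} - \omega t)} ,
\qquad
\operatorname{Im}\,\omega(\vec{k}) > 0
\;\Longrightarrow\;
\text{an exponentially growing (unstable) mode.}
```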

    You might ask, what is this talk about “theories”? Why is there more than one theory? For a vector field, it turns out that there are a number of different quantities you can define (three, to be precise) that might play the role of a “kinetic energy.” So we study a three-dimensional parameter space of theories, corresponding to any possible mixture of those three quantities. The three theories we pick out as stable are simply three specific mixtures of the different kinds of kinetic energy. The Maxwell theory is very similar to ordinary electromagnetism, while the scalar theory more closely resembles a scalar field than a vector field.
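Schematically (the coefficient names here are chosen for illustration, not necessarily the notation of the papers), the most general two-derivative kinetic term is a mixture of three contractions, supplemented by the fixed-norm constraint:

```latex
\mathcal{L}_K = -\frac{1}{2}\Big[
 \beta_1\, \partial_\mu A_\nu\, \partial^\mu A^\nu
 + \beta_2\, \big(\partial_\mu A^\mu\big)^2
 + \beta_3\, \partial_\mu A_\nu\, \partial^\nu A^\mu \Big],
\qquad
A_\mu A^\mu = \mp m^2 .
```

Up to normalization, the sigma-model, scalar, and Maxwell cases correspond to the mixtures $(\beta_1, \beta_2, \beta_3) = (1,0,0)$, $(0,1,0)$, and $(1,0,-1)$; the last follows from the identity $F_{\mu\nu}F^{\mu\nu} = 2\,(\partial_\mu A_\nu\,\partial^\mu A^\nu - \partial_\mu A_\nu\,\partial^\nu A^\mu)$.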

The sigma-model theory is actually our favorite, as both the Maxwell and scalar cases seem to have lurking pathologies that we can’t completely get rid of (although the situation is a bit murky). So we wrote a shorter paper examining the empirical behavior of, and constraints on, that model:

    Sigma-Model Aether
    Authors: Sean M. Carroll, Timothy R. Dulaney, Moira I. Gresham, Heywood Tam

    Abstract: Theories of low-energy Lorentz violation by a fixed-norm “aether” vector field with two-derivative kinetic terms have a globally bounded Hamiltonian and are perturbatively stable only if the vector is timelike and the kinetic term in the action takes the form of a sigma model. Here we investigate the phenomenological properties of this theory. We first consider the propagation of modes in the presence of gravity, and show that there is a unique choice of curvature coupling that leads to a theory without superluminal modes. Experimental constraints on this theory come from a number of sources, and we examine bounds in a two-dimensional parameter space. We then consider the cosmological evolution of the aether, arguing that the vector will naturally evolve to be orthogonal to constant-density hypersurfaces in a Friedmann-Robertson-Walker cosmology. Finally, we examine cosmological evolution in the presence of an extra compact dimension of space, concluding that a vector can maintain a constant projection along the extra dimension in an expanding universe only when the expansion is exponential.

Even this theory, as interesting as it is, is plagued by a problem. In the spirit of low-energy phenomenology, we basically fix the length of the vector field by hand. In a more complete description, though, there is probably some potential energy that is minimized when the vector takes on that value. But if you allow for any variation whatsoever in the length of the vector, you are immediately confronted with a dramatic instability once more.
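The tension can be made concrete with a toy potential (this particular form is a representative choice, not taken from the paper). Instead of imposing the constraint exactly, one minimizes something like

```latex
V(A) = \xi \left( A_\mu A^\mu \pm m^2 \right)^2 ,
```

which fixes the norm at the minimum of the potential but, for any finite $\xi$, reintroduces a dynamical length mode for the vector, and that extra mode is what drives the instability.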

    So, to be honest, there are no aether theories that we can guarantee are perfectly well-behaved, even as low-energy effective theories. (All the problems we identify exist at arbitrarily low energies, and don’t rely on the short-distance behavior of the models.) The three theories to which we gave names are problematic but not manifestly unstable, so it will be worth further investigation to see if they can be patched up and made respectable.

  • No Dyson Spheres Found Yet

In 1960, Freeman Dyson proposed an audacious form that future technology might take: the Dyson Sphere. It’s a simple idea, once you stop thinking in terms of “I wonder how that could be done?” and start thinking along the lines of “I wonder what is physically possible?” Dyson reasoned that an efficient civilization wouldn’t want all of the valuable energy from its home star to fly uselessly into outer space, so they would try to capture it. The solution is then obvious: a sphere of matter that encircles the entire star. It’s worth quoting a bit from Dyson’s original paper:

The material factors which ultimately limit the expansion of a technically advanced species are the supply of matter and the supply of energy. At present the material resources being exploited by the human species are roughly limited to the biosphere of the earth, a mass of the order of 5 x 10^19 grams. Our present energy supply may be generously estimated at 10^20 ergs per second. The quantities of matter and energy which might conceivably become accessible to us within the solar system are 2 x 10^30 grams (the mass of Jupiter) and 4 x 10^33 ergs per second (the total energy output of the sun).

The reader may well ask in what sense can anyone speak of the mass of Jupiter or the total radiation from the sun as being accessible to exploitation. The following argument is intended to show that an exploitation of this magnitude is not absurd. First of all, the time required for an expansion of population and industry by a factor of 10^12 is quite short, say 3000 years if an average growth rate of 1 percent per year is maintained. Second, the energy required to disassemble and rearrange a planet the size of Jupiter is about 10^44 ergs, equal to the energy radiated by the sun in 800 years. Third, the mass of Jupiter, if distributed in a spherical shell revolving around the sun at twice the Earth’s distance from it, would have a thickness such that the mass is 200 grams per square centimeter of surface area (2 to 3 meters, depending on the density). A shell of this thickness could be made comfortably habitable, and could contain all the machinery required for exploiting the solar radiation falling onto it from the inside.
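Two of Dyson’s estimates, the 3000-year growth time and the 200 grams per square centimeter shell, are easy to reproduce. A quick sketch, using the figures he quotes (in cgs units):

```python
import math

# 1. Years needed for population/industry to grow by a factor of 10^12
#    at an average rate of 1 percent per year.
growth_years = math.log(1e12) / math.log(1.01)   # roughly 2800, "say 3000 years"

# 2. Surface density of Jupiter's mass spread over a spherical shell
#    at twice the Earth's distance from the sun (2 AU).
M_jupiter = 2e30           # g, Dyson's round figure for the mass of Jupiter
R_shell = 2 * 1.496e13     # cm, two astronomical units
sigma = M_jupiter / (4 * math.pi * R_shell**2)   # g/cm^2, close to Dyson's 200

print(round(growth_years), "years;", round(sigma), "g/cm^2")
```

Both come out within rounding of the numbers in the quoted passage.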

Old news, right? What I hadn’t realized is that there is something called the Fermilab Dyson Sphere search program, led by Richard Carrigan, which recently updated its results (summarized in the title of this post). A star like the Sun radiates something pretty close to a blackbody spectrum; but if you capture all of the energy in the Sun’s radiation, and then re-radiate it from a much larger sphere (e.g. one astronomical unit in radius), it comes out at a much lower temperature — a few hundred Kelvin. Dyson therefore proposed a search strategy, looking for blackbody objects radiating in the far infrared, around 10 microns in wavelength.
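The quoted temperature follows from simple energy balance: a shell at 1 AU intercepting the full solar luminosity and re-radiating it as a blackbody settles at roughly 400 K. A sketch, using standard values for the solar luminosity and the Stefan-Boltzmann constant (these numbers are not given in the post):

```python
import math

L_sun = 3.828e26       # W, solar luminosity
R = 1.496e11           # m, shell radius of 1 AU
sigma_sb = 5.670e-8    # W m^-2 K^-4, Stefan-Boltzmann constant

# Energy balance: L_sun = 4 pi R^2 * sigma_sb * T^4
T = (L_sun / (4 * math.pi * R**2 * sigma_sb)) ** 0.25   # a few hundred K

# Wien's displacement law gives the peak wavelength of that emission.
peak_microns = 2.898e-3 / T * 1e6    # in the far infrared

print(round(T), "K; peak near", round(peak_microns, 1), "microns")
```

The peak lands in the far infrared, consistent with Dyson’s suggestion to search near 10 microns.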

    And the search is now going on! Indeed, Carrigan’s most recent results were just released on astro-ph a few weeks ago:

    IRAS-based whole-sky upper limit on Dyson Spheres
    Authors: Richard A. Carrigan Jr

Abstract: A Dyson Sphere is a hypothetical construct of a star purposely cloaked by a thick swarm of broken-up planetary material to better utilize all of the stellar energy. A clean Dyson Sphere identification would give a significant signature for intelligence at work. A search for Dyson Spheres has been carried out using the 250,000 source database of the IRAS infrared satellite which covered 96% of the sky. The search has used the Calgary data collection of the IRAS Low Resolution Spectrometer (LRS) to look for fits to blackbody spectra. Searches have been conducted for both pure (fully cloaked) and partial Dyson Spheres in the blackbody temperature region 100 < T < 600 deg K. Other stellar signatures that resemble a Dyson Sphere are reviewed. When these signatures are used to eliminate sources that mimic Dyson Spheres very few candidates remain and even these are ambiguous. Upper limits are presented for both pure and partial Dyson Spheres. The sensitivity of the LRS was enough to find solar-sized Dyson Spheres out to 300 pc, a reach that encompasses a million solar-type stars.

It’s too bad the search has thus far turned up no unambiguous candidates. The Fermi Paradox continues to be paradoxical.

    One famous account of the first contact between an extraterrestrial civilization and the human race was told in the classic 1951 Robert Wise film, The Day the Earth Stood Still. It’s now been remade by director Scott Derrickson, starring Keanu Reeves as the alien Klaatu, and will open next Friday. In the emerging spirit of science and entertainment exchanges, there will be a panel discussion at Caltech’s Beckman Auditorium this Friday (the 5th) with Derrickson and Reeves holding up the Hollywood side of things, and roboticist Joel Burdick and I holding up the science end. Don’t quote me on this, but I think it’s at 6:00, and the movie will be screened before the panel. Should be fun.

  • Thanksgiving

    This year we give thanks for the spin-statistics theorem. (Previously we gave thanks for the Lagrangian of the Standard Model of particle physics, and for Hubble’s Law.)

    You will sometimes hear physicists explain that elementary particles come in two types: bosons, which have a spin of 0, 1, 2, or some other integer, and fermions, which have a spin of 1/2, 3/2, 5/2, or some other half-integer. That’s true, but it’s hiding what’s important and emphasizing what’s auxiliary.

    When it comes to classifying elementary particles, it’s not really the spin that’s important, it’s the statistics. And really, the word “statistics” in this context makes something deep and wonderful sound dry and technical. A boson is a particle that obeys Bose statistics: when you take two identical bosons and switch them with each other, the state you end up with is indistinguishable from the state you started with. Which only makes sense, really; if you exchange two identical particles, what else could you get? The answer is, Fermi statistics: when you take two identical fermions and switch them with each other, you get minus the state you started with. Remember that the real world is based on quantum mechanics, in which the state of a system is described by a wave function that tells you what the probability of obtaining various results for certain observations would be; when we say “minus the state you started with,” we mean that the wave function is multiplied by -1.
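In symbols, for a two-particle wave function the entire distinction is a single sign under exchange:

```latex
\Psi(x_2, x_1) = \pm\, \Psi(x_1, x_2) ,
\qquad
\begin{cases}
+ & \text{bosons (Bose statistics)} , \\
- & \text{fermions (Fermi statistics)} .
\end{cases}
```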

    This difference in “statistics” seems a bit esoteric and removed from one’s everyday life, but in fact it is arguably the most important thing in the universe. This simple difference in what happens to the state of two particles when you interchange them underlies the most blatant features of how particles behave in the macroscopic world. Think of two identical particles that are in the same quantum state: sitting in the same place, doing the same thing, right on top of each other. If those two particles are bosons, that’s cool; we can switch them and get the same state, which just makes sense. But if they’re fermions, we have a problem; the two particles are purportedly in the same state, but if we switch them (which doesn’t really do anything, as they are in the same place) the state becomes minus what it used to be — seemingly a contradiction.

    This seeming puzzle has a simple solution: in the real world, two identical fermions can never occupy the same quantum state! That’s the Pauli exclusion principle, and it has a simple translation into everyday English: fermions take up space. Electrons, which are fermions, can’t just be piled on top of each other as densely as we like; some of them would have to be in the same state, and that can’t happen. That’s why atoms take up a certain amount of space, which in turn is why ordinary material objects don’t simply collapse into themselves. Fermions — electrons, quarks, neutrinos, etc. — are matter particles, constituting the “stuff” of which the objects of our world are comprised.
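The exclusion principle follows immediately: the antisymmetric two-fermion state built from single-particle states $\psi_a$ and $\psi_b$ vanishes identically the moment the two states coincide:

```latex
\Psi(x_1, x_2) = \frac{1}{\sqrt{2}}
\left[ \psi_a(x_1)\,\psi_b(x_2) - \psi_b(x_1)\,\psi_a(x_2) \right] ,
\qquad
a = b \;\Longrightarrow\; \Psi \equiv 0 .
```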

    Bosons, on the other hand, have no problem being in the same quantum state. So they will happily pile on top of each other. This is also important to our everyday lives. Bosons — photons, gravitons, gluons, etc. — are force particles, which pile on top of each other to form the classical force fields that hold fermions together. When you see light — a classical electromagnetic wave made of photons — or are held to the ground by gravity — a classical field made of gravitons — it’s only possible because of Bose statistics.

    So the important distinction between bosons and fermions is not the “integer spin”/”half-integer spin” distinction, it’s the “pile on top of each other”/”take up space” distinction. The fact that these sets of features come hand-in-hand is the content of the spin-statistics theorem: particles that pile on have integer spins, particles that take up space have half-integer spins. Which is a deep and beautiful result that relies on the fact that nature is fundamentally quantum rather than classical, and on the topology of the group of rotations in three (or more) spatial dimensions, and on the features of relativistic field theory. None of which I’m going to explain right here, but John Baez has a fun “proof” of the theorem using ribbons which is worth checking out.

    Rather, I will just reiterate that if the fermions comprising a turkey didn’t take up space, it would hardly constitute a filling meal; and if the gravitons from the Earth didn’t pile up to form a classical field, the traditional football game really wouldn’t work at all. So for the spin-statistics theorem, we should all be thankful.

  • What if Time Really Exists?

    The Foundational Questions Institute is sponsoring an essay competition on “The Nature of Time.” Needless to say, I’m in. It’s as if they said: “Here, you keep talking about this stuff you are always talking about anyway, except that we will hold out the possibility of substantial cash prizes for doing so.” Hard to resist.

The deadline for submitting an entry is December 1, so there’s still plenty of time (if you will) for anyone out there who is interested and looking for something to do over Thanksgiving. They are asking for essays under 5000 words, on any of various aspects of the nature of time, pitched “between the level of Scientific American and a review article in Science or Nature.” That last part turns out to be the difficult one — you’re allowed to invoke some technical concepts, and in fact the essay might seem a little thin if you kept it strictly popular, but hopefully it should be accessible to a large range of non-experts. Most entries seem to include a few judicious equations while doing their best to tell a story in words.

    All of the entries are put online here, and each comes with its own discussion forum where readers can leave comments. A departure from the usual protocols of scientific communication, but that’s a good thing. (Inevitably there is a great deal of chaff along with the wheat among the submitted essays, but that’s the price you pay.) What is more, in addition to a judging by a jury of experts, there is also a community vote, which comes with its own prizes. So feel free to drop by and vote for mine if you like — or vote for someone else’s if you think it’s better. There’s some good stuff there.

My essay is called “What if Time Really Exists?” A lot of people who think about time tend to emerge from their contemplations and declare that time is just an illusion, or (in modern guise) some sort of semi-classical approximation. And that might very well be true. But it also might not be true; from our experiences with duality in string theory, we have explicit examples of models of quantum gravity which are equivalent to conventional quantum-mechanical systems obeying the time-dependent Schrödinger equation with the time parameter right there where Schrödinger put it.

    And from that humble beginning — maybe ordinary quantum mechanics is right, and there exists a formulation of the theory of everything that takes the form of a time-independent Hamiltonian acting on a time-dependent quantum state defined in some Hilbert space — you can actually reach some sweeping conclusions. The fulcrum, of course, is the observed arrow of time in our local universe. When thinking about the low-entropy conditions near the Big Bang, we tend to get caught up in the fact that the Bang is a singularity, forming a boundary to spacetime in classical general relativity. But classical general relativity is not right, and it’s perfectly plausible (although far from inevitable) that there was something before the Bang. If the universe really did come into existence out of nothing 14 billion years ago, we can at least imagine that there was something special about that event, and there is some deep reason for the entropy to have been so low. But if the ordinary rules of quantum mechanics are obeyed, there is no such thing as the “beginning of time”; the Big Bang would just be a transitional stage, for which our current theories don’t provide an adequate spacetime interpretation. In that case, the observed arrow of time in our local universe has to arise dynamically according to the laws of physics governing the evolution of a wave function for all eternity.
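The assumption in play is just the time-dependent Schrödinger equation with a time-independent Hamiltonian, whose solution is defined for every value of $t$, past and future alike:

```latex
i\hbar \frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle
\quad\Longrightarrow\quad
|\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,|\psi(0)\rangle ,
\qquad -\infty < t < \infty .
```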

    Interestingly, that has important implications. If the quantum state evolves in a finite-dimensional Hilbert space, it evolves ergodically through a torus of phases, and will exhibit all of the usual problems of Boltzmann brains and the like (as Dyson, Kleban, and Susskind have emphasized). So, at the very least, the Hilbert space (under these assumptions) must be infinite-dimensional. In fact you can go a bit farther than that, and argue that the spectrum of energy eigenvalues must be arbitrarily closely spaced — there must be at least one accumulation point.
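The torus picture comes from expanding the state in energy eigenstates: only the phases evolve, so in a finite-dimensional Hilbert space the state winds quasi-periodically around a torus of phases and returns arbitrarily close to any earlier configuration:

```latex
|\psi(t)\rangle = \sum_{n=1}^{N} c_n\, e^{-i E_n t/\hbar}\, |n\rangle ,
\qquad
\theta_n(t) = E_n t/\hbar \ \ (\mathrm{mod}\ 2\pi) .
```

An infinite-dimensional Hilbert space with an accumulation point in the energy spectrum is what allows the evolution to escape this eternal-recurrence behavior.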

    Sexy, I know. The remarkable thing is that you can say anything at all about the Hilbert space of the universe just by making a few simple assumptions and observing that eggs always turn into omelets, never the other way around. Turning it into a respectable cosmological model with an explicit spacetime interpretation is, admittedly, more work, and all we have at the moment are some very speculative ideas. But in the course of the essay I got to name-check Parmenides, Heraclitus, Lucretius, Augustine, and Nietzsche, so overall it was well worth the effort.