Category: Science

  • The Black Hole War

    Lenny Susskind has a new book out: The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. At first I was horrified by the title, but upon further reflection it’s grown on me quite a bit.

    Some of you may know Susskind as a famous particle theorist, one of the early pioneers of string theory. Others may know his previous book: The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. (Others may never have heard of him, although I’m sure Lenny doesn’t want to hear that.) I had mixed feelings about the first book; for one thing, I thought it was a mistake to put “Intelligent Design” there in the title, even if it were to be dubbed an “Illusion.” So when the Wall Street Journal asked me to review it, I was a little hesitant; I have enormous respect for Susskind as a physicist, but if I ended up not liking the book I would have to be honest about it. Still, I hadn’t ever written anything for the WSJ, and how often does one get the chance to stomp about in the corridors of capitalism like that?

    The good news is that I liked the book a great deal, as the review shows. I won’t reprint the thing here, as you are all well-trained when it comes to clicking on links. But let me mention just a few words about information conservation and loss, which is the theme of the book. (See Backreaction for another account.)

    It’s all really Isaac Newton’s fault, although people like Galileo and Laplace deserve some of the credit. The idea is straightforward: evolution through time, as described by the laws of physics, is simply a matter of re-arranging a fixed amount of information in different ways. The information itself is neither created nor destroyed. Put another way: to specify the state of the world requires a certain amount of data, for example the positions and velocities of each and every particle. According to classical mechanics, from that data (the “information”) and the laws of physics, we can reliably predict the precise state of the universe at every moment in the future — and retrodict the prior states of the universe at every moment in the past. Put yet another way, here is Thomasina Coverley in Tom Stoppard’s Arcadia:

    If you could stop every atom in its position and direction, and if your mind could comprehend all the actions thus suspended, then if you were really, really good at algebra you could write the formula for all the future; and although nobody can be so clever as to do it, the formula must exist just as if one could.
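    Stoppard's conceit is easy to act out on a computer. Here is a minimal sketch (my own toy example, nothing from the book): evolve a single particle on a spring with a time-reversible integrator, then run the movie backwards and recover the initial position and velocity. The information was there all along.

```python
# Toy demonstration of Newtonian determinism: evolve a harmonic
# oscillator forward in time, then run the clock backward and
# recover the initial state. No information is created or destroyed.

def step(x, v, dt, k=1.0, m=1.0):
    """One velocity-Verlet step for a spring force F = -k*x."""
    a = -k * x / m
    x_new = x + v * dt + 0.5 * a * dt**2
    a_new = -k * x_new / m
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

x, v = 1.0, 0.0          # the complete "information": position and velocity
dt, n = 0.01, 1000

for _ in range(n):       # predict the future...
    x, v = step(x, v, dt)

v = -v                   # ...then reverse all the velocities
for _ in range(n):       # and retrodict the past
    x, v = step(x, v, dt)
v = -v

print(x, v)              # back to (1.0, 0.0) up to rounding error
```

    The velocity-Verlet scheme is exactly time-reversible in exact arithmetic, which is why the past is recoverable; the only "lost" information is floating-point roundoff.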

    This is the Clockwork Universe, and it is far from an obvious idea. Pre-Newton, in fact, it would have seemed crazy. In Aristotelian mechanics, if a moving object is not subject to a continuous impulse, it will eventually come to rest. So if we find an object at rest, we have no way of knowing whether until recently it was moving, or whether it’s been sitting there for a long time; that information is lost. Many different pasts could lead to precisely the same present; whereas, if information is conserved, each possible past leads to exactly one specific state of affairs at the present. The conservation of information — which also goes by the name of “determinism” — is a profound underpinning of the modern way we think about the universe.

    Determinism came under a bit of stress in the early 20th century when quantum mechanics burst upon the scene. In QM, sadly, we can’t predict the future with precision, even if we know the current state to arbitrary accuracy. The process of making a measurement seems to be irreducibly unpredictable; we can predict the probability of getting a particular answer, but there will always be uncertainty if we try to make certain measurements. Nevertheless, when we are not making a measurement, information is perfectly conserved in quantum mechanics: Schrödinger’s Equation allows us to predict the future quantum state from the past with absolute fidelity. This makes many of us suspicious that this whole “collapse of the wave function” that leads to an apparent loss of determinism is really just an illusion, or an approximation to some more complete dynamics — that kind of thinking leads you directly to the Many Worlds Interpretation of quantum mechanics. (For more, tune into my Bloggingheads dialogue with David Albert this upcoming Saturday.)
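    To see what “absolute fidelity” means in practice, here is a minimal sketch (standard textbook quantum mechanics, nothing specific to the book): Schrödinger evolution is a unitary matrix, and unitary matrices preserve all inner products, so two distinct pasts can never evolve into the same future.

```python
import numpy as np

# Schrodinger's Equation evolves the quantum state by a unitary matrix
# U = exp(-i H t) (hbar = 1). Unitary evolution preserves all inner
# products, so distinct initial states remain distinct: between
# measurements, no information is ever lost.

H = np.array([[1.0, 0.5], [0.5, -1.0]])    # an arbitrary Hermitian Hamiltonian
w, V = np.linalg.eigh(H)                   # matrix exponential via eigenbasis
U = V @ np.diag(np.exp(-1j * w * 2.7)) @ V.conj().T   # evolve for t = 2.7

psi1 = np.array([1.0, 0.0])                # two different possible pasts
psi2 = np.array([1.0, 1.0]) / np.sqrt(2)

before = np.vdot(psi1, psi2)               # overlap before evolving
after = np.vdot(U @ psi1, U @ psi2)        # overlap after evolving
print(abs(before - after))                 # ~0: the overlap is unchanged
```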

    In any event, aside from the measurement problem, quantum mechanics makes a firm prediction that information is conserved. Which is why it came as a shock when Stephen Hawking said that black holes could destroy information. Hawking, of course, had famously shown that black holes give off radiation, and if you wait long enough they will eventually evaporate away entirely. Few people (who are not trying to make money off of scaremongering about the LHC) doubt this story. But Hawking’s calculation, at first glance (and second), implies that the outgoing radiation into which the black hole evaporates is truly random, within the constraints of being a blackbody spectrum. Information is seemingly lost, in other words — there is no apparent way to determine what went into the black hole from what comes out.
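    For scale, the standard formulas for the Hawking temperature and evaporation lifetime make the point that this is an extraordinarily slow leak; the numbers below are my back-of-the-envelope check, not anything from the book.

```python
import math

# Hawking temperature T = hbar c^3 / (8 pi G M k_B), and the lifetime
# from integrating the blackbody power loss (dM/dt ~ -1/M^2), in SI units.
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23    # J/K
M_sun = 1.989e30   # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

T = hawking_temperature(M_sun)            # ~6e-8 K, far colder than the CMB
t_years = evaporation_time(M_sun) / 3.15e7  # ~1e67 years
print(T, t_years)
```

    A solar-mass black hole is colder than the microwave background, so today it would actually absorb more than it emits; “wait long enough” is doing a lot of work in that sentence.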

    This led to one of those intellectual scuffles between “the general relativists” (who tended to be sympathetic to the idea that information is indeed lost) and “the particle physicists” (who were reluctant to give up on the standard rules of quantum mechanics, and figured that Hawking’s calculation must somehow be incomplete). At the heart of the matter was locality — information can’t be in two places at once, and it has to travel from place to place no faster than the speed of light. A set of reasonable-looking arguments had established that, in order for information to escape in Hawking radiation, it would have to be encoded in the radiation while it was still inside the black hole, which seemed to be cheating. But if you press hard on this idea, you have to admit that the very idea of “locality” presumes that there is something called “location,” or more specifically that there is a classical spacetime on which fields are propagating. Which is a pretty good approximation, but deep down we’re eventually going to have to appeal to some sort of quantum gravity, and it’s likely that locality is just an approximation. The thing is, most everyone figured that this approximation would be extremely good when we were talking about huge astrophysical black holes, enormously larger than the Planck length where quantum gravity was supposed to kick in.

    But apparently, no. Quantum gravity is more subtle than you might think, at least where black holes are concerned, and locality breaks down in tricky ways. Susskind himself played a central role in formulating two ideas that were crucial to the story — Black Hole Complementarity and the Holographic Principle. Which maybe I’ll write about some day, but at the moment it’s getting late. For a full account, buy the book.

    Right now, the balance has tilted quite strongly in favor of the preservation of information; score one for the particle physicists. The best evidence on their side (keeping in mind that all of the “evidence” is in the form of theoretical arguments, not experimental data) comes from Maldacena’s discovery of duality between (certain kinds of) gravitational and non-gravitational theories, the AdS/CFT correspondence. According to Maldacena, we can have a perfect equivalence between two very different-looking theories, one with gravity and one without. In the theory without gravity, there is no question that information is conserved, and therefore (the argument goes) it must also be conserved when there is gravity. Just take whatever kind of system you care about, whether it’s an evaporating black hole or something else, translate it into the non-gravitational theory, find out what it evolves into, and then translate back, with no loss of information at any step. Long story short, we still don’t really know how the information gets out, but there is a good argument that it definitely does for certain kinds of black holes, so it seems a little perverse to doubt that we’ll eventually figure out how it works for all kinds of black holes. Not an airtight argument, but at least Hawking buys it; his concession speech was reported on an old blog of mine, lo these several years ago.

  • arxiv Find: Stars in Other Universes

    Fred Adams wonders whether we could still have stars if the constants of nature were very different. Answer: very possibly! It’s in arxiv:0807.3697:

    Motivated by the possible existence of other universes, with possible variations in the laws of physics, this paper explores the parameter space of fundamental constants that allows for the existence of stars. To make this problem tractable, we develop a semi-analytical stellar structure model that allows for physical understanding of these stars with unconventional parameters, as well as a means to survey the relevant parameter space. In this work, the most important quantities that determine stellar properties — and are allowed to vary — are the gravitational constant $G$, the fine structure constant $\alpha$, and a composite parameter $C$ that determines nuclear reaction rates. Working within this model, we delineate the portion of parameter space that allows for the existence of stars. Our main finding is that a sizable fraction of the parameter space (roughly one fourth) provides the values necessary for stellar objects to operate through sustained nuclear fusion. As a result, the set of parameters necessary to support stars are not particularly rare. In addition, we briefly consider the possibility that unconventional stars (e.g., black holes, dark matter stars) play the role filled by stars in our universe and constrain the allowed parameter space.

    I’ve never thought that our knowledge of what constituted “intelligent life” was anywhere near good enough to start making statements about the conditions under which it could form, apart from fairly weak stuff like “life probably can’t exist if the universe only lasts for a Planck time.” So when anthropic arguments start to hinge on thinking that fractional changes in the mass of this or that nucleus would result in a universe with no observers, it seems more prudent to admit that we just don’t know. But putting any anthropic considerations aside, it’s still interesting to ask what the universe would look like if the constants of nature were completely different. How robust are the starry skies?
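    The survey strategy itself is easy to caricature in code: sample the constants log-uniformly over some box and count the fraction that passes a viability test. To be clear, the criterion below is entirely made up for illustration (stars “work” if both constants are within two orders of magnitude of ours); Adams’ actual paper solves a semi-analytic stellar structure model.

```python
import random

# Toy parameter-space scan in the spirit of Adams' survey: draw the
# logs of (G, alpha) uniformly over a 10-decade box and count what
# fraction of the box satisfies a viability criterion. The criterion
# here is purely illustrative, NOT the paper's.

random.seed(0)

def viable(log_G, log_alpha):
    # hypothetical stand-in: "within two decades of our universe"
    return abs(log_G) < 2 and abs(log_alpha) < 2

trials = 100_000
hits = sum(
    viable(random.uniform(-5, 5), random.uniform(-5, 5))
    for _ in range(trials)
)
frac = hits / trials
print(frac)   # fraction of the scanned box that is "viable"
```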

  • A New CMB Anomaly?

    One of the important features of the universe around us is that, on sufficiently large scales, it looks pretty much the same in every direction — “isotropy,” in cosmology lingo. There is no preferred direction to space, in which the universe would look different than in the perpendicular directions. The most compelling evidence for large-scale isotropy comes from the Cosmic Microwave Background (CMB), the leftover radiation from the Big Bang. It’s not perfectly isotropic, of course — there are tiny fluctuations in temperature, which are pretty important; they arise from fluctuations in the density, which grow under the influence of gravity into the galaxies and clusters we see today. Here they are, as measured by the WMAP satellite.

    Nevertheless, there is a subtle way for the universe to break isotropy and have a preferred direction: if the tiny observed perturbations somehow have a different character in one direction than in others. The problem is, there are a lot of ways this could happen, and there is a huge amount of data involved with a map of the entire CMB sky. A tiny effect could be lurking there, and be hard to see; or we could see a hint of it, and it would be hard to be sure it wasn’t just a statistical fluke.

    In fact, at least three such instances of apparent large-scale anisotropies have been claimed. One is the “axis of evil” — if you look at only the temperature fluctuations on the very largest scales, they seem to be concentrated in a certain plane on the sky. Another is the giant cold spot (or “non-Gaussianity,” if you want to sound like an expert) — the Southern hemisphere seems to have a suspiciously coherent blob of slightly lower than average CMB temperature. And then there is the lopsided universe — the total size of the fluctuations on one half of the sky seems to be slightly larger than on the other half.

    All of these purported anomalies in the data, while interesting, are very far from being definitive. Although most people seem to agree that they are features of the data from WMAP, it’s hard to tell whether they are all just statistical flukes, or subtle imperfections in the satellite itself, or contamination by foregrounds (like our own galaxy), or real features of the universe.

    Now we seem to have another such anomaly, in which the temperature fluctuations in the CMB aren’t distributed perfectly isotropically across the sky. It comes by way of a new paper by Nicolaas Groeneboom and Hans Kristian Eriksen:

    Bayesian analysis of sparse anisotropic universe models and application to the 5-yr WMAP data

    Sexy title, eh? Here is the upshot: Groeneboom and Eriksen looked for what experts would call a “quadrupole pattern of statistical anisotropy.” Similar to the lopsided universe effect, where the fluctuations seem to be larger on one side of the sky than the other, this is an “elongated universe” effect — fluctuations are larger along one axis (in both directions) as compared to the perpendicular plane. Here is a representation of the kind of effect we are talking about — not easy to make out, but the fluctuations are supposed to be a bit stronger near the red dots than in the strip in between them.

    It’s not a very large signal — “3.8 sigma,” in the jargon of the trade, where 3 sigma basically means “begin to take seriously,” but you might want to get as high as 5 sigma before you say “there definitely seems to be something there.” However, the WMAP data come in different frequencies (V-band and W-band), and the effect seems to be there in both bands. Furthermore, you can look for the effect separately at large angular scales and at small angular scales, and you find it in both cases (with somewhat lower statistical significance, as you might expect). So it’s far from being a gold-plated discovery, but it doesn’t seem to be a complete fluke, either.

    Remember, looking for any specific effect is quite a project — there is a lot of data, and the analysis involves manipulating huge matrices, and you have to worry about foregrounds and instrumental effects. So why were these nice folks looking for a power asymmetry along a preferred axis in the sky? Well, you might recall my paper with Lotty Ackerman and Mark Wise, described in the “Anatomy of a Paper” series of blog posts (I, II, III). We were interested in whether the (hypothetical) period of inflation in the early universe might have been anisotropic — expanding just a bit faster in one direction than in the others — and if so, how it would show up in the CMB. What we found was that the natural expectation was a power asymmetry along the preferred axis, and gave a bunch of formulas by which observers could actually look for the effect. That is what Nicolaas and Hans Kristian did, with every expectation that they would establish an upper limit on the size of our predicted effect, which we had labelled g*. But instead, they found it! The data are saying that

    $g_* = 0.15 \pm 0.039$.
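    As a sanity check, the “3.8 sigma” quoted above is just the central value divided by the error bar, and (assuming Gaussian errors) a standard error-function identity converts it into a chance probability:

```python
import math

# Significance of the measured g_* and the corresponding probability
# that pure noise would fluctuate this far (one-sided Gaussian tail).

g_star, sigma_g = 0.15, 0.039
z = g_star / sigma_g                       # ~3.8 standard deviations
p = 0.5 * math.erfc(z / math.sqrt(2))      # one-sided tail probability
print(z, p)                                # p of order 1e-4: intriguing, not proof
```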

    So naturally, Lotty and Mark and I are brushing up on our Swedish in preparation for our upcoming invitations to Stockholm. Okay, not quite. In fact, it’s useful to be very clear about this, given the lessons that were (one hopes) learned in John’s series of posts about Higgs hunting. Namely: small, provocative “signals” such as this happen all the time. It would be completely irresponsible just to take every one of them at face value as telling you something profound about the universe. And the more surprising the result — and this one would be pretty darned surprising — the more skeptical and cautious we have every right to be.

    So what are we supposed to think? Certainly not that these guys are just jokers who don’t know how to analyze CMB data; the truth couldn’t be more different. But analyzing data like this is really hard, and other groups will doubtless jump in and do their own analyses, as it should be. It’s certainly possible that there is a small systematic effect in WMAP — “correlated noise” — rather than in the universe. The authors have considered this, of course, and it doesn’t seem to fit the finding very comfortably, but it’s a possibility. The very good news is that the kind of correlated noise one would expect from WMAP (given the pattern it used to scan across the sky) is completely different from what we would worry about from the upcoming Planck mission, scheduled to launch next year.

    Or, of course, we could be learning something deep about the universe. Maybe even that inflation was anisotropic, as Lotty and Mark and I contemplated. Or, perhaps more plausibly, there is some single real effect in the universe that is conspiring to give us all of the tantalizing hints contained in the various anomalies listed above. We don’t know yet. That’s what makes it fun.

  • Beyond the Room

    I’m sure Ruben Bolling is making fun of people I disagree with, and not of me.

    The underlying point is a good one, though, and one that is surprisingly hard for people thinking about cosmology to take to heart: without actually looking at it, there is no sensible a priori reasoning that can lead us to reliable knowledge about parts of the universe we haven’t observed. Einstein and Wheeler believed that the universe was closed and would someday recollapse, because a universe that was finite in time felt right to them. The universe doesn’t care what feels right, or what “we just can’t imagine”; so all possibilities should remain on the table.

    On the other hand, that doesn’t mean we can’t draw reasonable a posteriori conclusions about the unobservable universe, if the stars align just right. That is, if we had a comprehensive theory of physics and cosmology that successfully passed a barrage of empirical tests here in the universe we do observe, and made unambiguous predictions for the universe that we don’t, it would not be crazy to take those predictions seriously.

    We don’t have that theory yet, but we’re working on it. (Where “we” means an extremely tiny fraction of working scientists, who receive an extremely disproportionate amount of attention.)

  • Everything You Ever Wanted to Know About Quantum Mechanics, But Were Afraid to Ask

    Sorry, not in this post, but upcoming. I’m scheduled to do another episode of Bloggingheads.tv with David Albert, and we’ve decided to spend the whole hour talking about quantum mechanics. Start with the basics, try to explain this crazy theory and some of its outlandish consequences in ways that anyone can understand, and then dig into some of the mysteries of measurement, superposition, and reality.

    So — what do you want to know? What are the really interesting questions about QM that we should be talking about?

    One thing I don’t think we science-explainers get as clear as we could is the idea of the Wave Function of the Universe. It sounds scary and/or pretentious — an older colleague of mine at MIT once said “I’m too young to talk about the wave function of the universe.” But it’s a crucial fact of quantum mechanics (arguably the crucial fact) that, unlike in classical mechanics, when you consider two electrons you don’t just have a separate state for each electron. You have a single wave function that describes the two-electron system. And that’s true for any number of particles — when you consider a bigger system, you don’t “add more wavefunctions,” you beef up your single wave function so that it describes more particles. There is only ever one wave function, and you can call it “of the universe” if you like. Deep, man.
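    Here is a minimal numerical sketch of that point (my own illustration, standard linear algebra): the two-particle state lives in the tensor-product space, and some perfectly good states of the single shared wave function cannot be split into “a state for particle 1” times “a state for particle 2.”

```python
import numpy as np

# Two quantum spins share ONE wave function in the tensor-product
# space. A two-spin state is separable iff its 2x2 reshaping has
# rank 1; entangled states (like the Bell state below) do not factor.

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

product = np.kron(up, down)                                  # |up>|down>
bell = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)  # entangled

rank_product = np.linalg.matrix_rank(product.reshape(2, 2))
rank_bell = np.linalg.matrix_rank(bell.reshape(2, 2))
print(rank_product, rank_bell)   # 1 (separable) vs 2 (entangled)
```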

    Here is another thing: in quantum mechanics, you can “add two states together,” or “take their average.” (Hilbert space is a vector space with an inner product.) In classical mechanics, you can’t. (Phase space is not a vector space at all.) How big a deal is that? Is there some nice way we can explain what that means in terms your grandmother could understand, even if your grandmother is not a physicist or a mathematician?
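    The vector-space structure is concrete enough to compute with. A toy sketch of my own: add “spin up” and “spin down,” renormalize, and you get a genuine new state whose measurement probabilities follow from the inner product (the Born rule).

```python
import numpy as np

# Quantum states can be added and renormalized; the sum is another
# perfectly good state. Probabilities are squared overlaps (Born rule).

psi_a = np.array([1.0, 0.0])      # "definitely spin up"
psi_b = np.array([0.0, 1.0])      # "definitely spin down"

superposition = psi_a + psi_b
superposition = superposition / np.linalg.norm(superposition)

p_up = abs(np.vdot(psi_a, superposition))**2
p_down = abs(np.vdot(psi_b, superposition))**2
print(p_up, p_down)               # 0.5 and 0.5: a genuine "both at once" state
```

    Classical phase space offers nothing comparable: there is no physical state corresponding to “the average of a particle here and a particle there.”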

    (See also Dave Bacon’s discussion of teaching quantum mechanics as a particular version of probability theory. There are many different ways of answering the question “What is quantum mechanics?”)

  • What Good is a Theory?

    Over at Edge, they’ve posted a provocative article by Chris Anderson, editor of Wired magazine: “The End of Theory — Will the Data Deluge Make the Scientific Method Obsolete?” We are certainly entering an age where experiments create giant datasets, often so unwieldy that we literally can’t keep them all — as David Harris notes, the LHC will be storing about 15 petabytes of data per year, which sounds like a lot, until you realize that it will be creating data at a rate of 10 petabytes per second. Clearly, new strategies are called for; in particle physics, the focus is on the “trigger” that makes quick decisions about which events to keep and which to toss away, while in astronomy or biology the focus is more on sifting through the data to find unanticipated connections. Unfortunately, Anderson takes things a bit too far, arguing that the old-fashioned scientific practice of inventing simple hypotheses and then testing them has become obsolete, and will be superseded by ever-more-sophisticated versions of data mining. I think he misses a very big point. (Gordon Watts says the same thing … as do many other people, now that I bother to look.)

    Early in the 17th century, Johannes Kepler proposed his Three Laws of Planetary Motion: planets move in ellipses, they sweep out equal areas in equal times, and their periods are proportional to the three-halves power of the semi-major axis of the ellipse. This was a major advance in the astronomical state of the art, uncovering a set of simple relations in the voluminous data on planetary motions that had been collected by his mentor Tycho Brahe.

    Later in that same century, Sir Isaac Newton proposed his theory of mechanics, including both his Laws of Motion and the Law of Universal Gravitation (the force due to gravity falls as the inverse square of the distance). Within Newton’s system, one could derive Kepler’s laws – rather than simply positing them – and much more besides. This was generally considered to be a significant step forward. Not only did we have rules of much wider-ranging applicability than Kepler’s original relations, but we could sensibly claim to understand what was going on. Understanding is a good thing, and in some sense is the primary goal of science.
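    Newton’s derivation fits in a few lines of arithmetic. For circular orbits the inverse-square law gives $P = 2\pi\sqrt{a^3/GM}$, which is exactly Kepler’s three-halves power; a quick check against the real planets (standard textbook formula, SI values):

```python
import math

# Kepler's third law derived from Newtonian gravity: P = 2*pi*sqrt(a^3/(G*M)).

G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
AU = 1.496e11       # m
year = 3.156e7      # s

def period(a):
    return 2 * math.pi * math.sqrt(a**3 / (G * M_sun))

earth = period(1.0 * AU) / year     # ~1.00 year
mars = period(1.524 * AU) / year    # ~1.88 years, just as Kepler found
print(earth, mars)
```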

    Chris Anderson seems to want to undo that. He starts with a truly important and exciting development – giant new petascale datasets that resist ordinary modes of analysis, but which we can use to uncover heretofore unexpected patterns lurking within torrents of information – and draws a dramatically unsupported conclusion – that the age of theory is over. He imagines a world in which scientists sift through giant piles of numbers, looking for cool things, and don’t bother trying to understand what it all means in terms of simple underlying principles.

    There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show.

    Well, we can do that. But, as Richard Nixon liked to say, it would be wrong. Sometimes it will be hard, or impossible, to discover simple models explaining huge collections of messy data taken from noisy, nonlinear phenomena. But it doesn’t mean we shouldn’t try. Hypotheses aren’t simply useful tools in some potentially outmoded vision of science; they are the whole point. Theory is understanding, and understanding our world is what science is all about.
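    It’s easy to see why “correlation is enough” fails, with a toy data-mining experiment of my own devising: generate a couple hundred columns of pure noise, mine them for the one that best “predicts” an equally random target, and an impressive-looking correlation falls out of nothing.

```python
import random

# Mine 200 columns of pure noise for the best correlation with a
# noise target. With a small sample, something always "works".

random.seed(1)
n = 30  # small sample size, as in many real datasets

def corr(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx)**2 for x in xs)
    vy = sum((y - my)**2 for y in ys)
    return cov / (vx * vy) ** 0.5

target = [random.gauss(0, 1) for _ in range(n)]
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(n)], target))
    for _ in range(200)
)
print(best)   # a "significant-looking" correlation, mined from pure noise
```

    A hypothesis about an underlying mechanism is precisely what protects you from this trap.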

  • Is the LHC Too Busy To Blog?

    It’s fascinating to read the GLAST blog, written by Steve Ritz and featuring the exploits of everyone’s favorite new gamma-ray observatory. Not that it’s perfectly transparent — it’s full of breathless exclamations along the lines of “Very early this morning the LAT and GBM flight computers were powered on and booted successfully. Later this morning, the process of turning on the LAT detectors will begin!” But you kind of get the idea, even if the acronym-heavy NASA-ese is not a model of accessibility. And so far, things are looking just great — in fact, the LAT (my guess is “Large Aperture Telescope,” and I’m too proud to look it up) just took its first science data! Which is indeed an event worthy of exclamation points.

    Steve is a friend of mine, and a good choice for a blogger, but I have to admit that I prefer the blogs that are by the experiments themselves, rather than the people working on them. This is a path blazed by NASA’s Opportunity Mars Rover, which had a (now sadly defunct) LiveJournal that made the Red Planet come to life: “The article also talked about my little, ahem, driving accident and implied that I am getting old and creaky — OMG so embarrassing!!! What if he read them!!”

    What about the new Phoenix Lander? There was one of those boring human-based blogs for the landing, but the craft itself doesn’t seem to have its own blog. That’s because Phoenix is totally ahead of the curve, and eschews the outdated blogging format in favor of a Twitter account! And, of course, a Facebook profile. Good call, Phoenix — very cutting-edge.

    So I want the Large Hadron Collider to have a blog. Humans are fine in their own way, of course, but I’d rather hear from the machine itself, or at least one of the experiments — an ATLAS or CMS blog would be fine. There is a Hardware Commissioning webpage, which makes the GLAST blog read like Dr. Seuss. (They’re cooling the thing down, and it seems to be going well.) There is also LHC Countdown, which seems less connected to facts on the ground.

    Anyway, we are entering the home stretch, and the LHC should actually be injecting protons in July or maybe August. The beam won’t be at full strength yet, and there is going to be a lot of work to shake down the detectors and get everything in working order. After that, it’s up to Nature, who will decide whether to give us some interesting physics discoveries early, or really make us work for them.

    In the meantime, a blog would help keep us up to speed. Now that we know that the LHC won’t destroy the world, it could use a media-friendly makeover. That’s all I’m saying.

  • If It’s Not Disturbing, You’re Not Doing It Right

    Science, that is. No, this is not what I have in mind. Rather, this provocative statement — the discoveries of science should be disturbing, they shouldn’t simply provide gentle reassurance about our place in the universe — is the conclusion reached by my latest Bloggingheads dialogue, with David Albert.


    David is a philosopher of science at Columbia, author of Time and Chance as well as Quantum Mechanics and Experience. We talked about what philosophers of science do, the awful What the Bleep Do We Know? movie, string theory and falsifiability, and touched on time before running out thereof. Future episodes are clearly called for.

  • GLAST Just Launched!

    The Gamma-ray Large Area Space Telescope, a satellite observatory designed to — guess what? — measure gamma rays, just launched on a Delta II rocket from Cape Canaveral. There were a few last-minute radar issues, but things seem to have ultimately gone off without a hitch. There is a launch blog here (naturally), and Phil Plait has been covering the mission in detail; there was a nice article in symmetry, and they also have a live blog.

    “Vehicle performance continues to look nominal…” You have to love scientists.

    GLAST will be doing a variety of cool things, but there is one goal that stands out as uniquely exciting for physicists: it will be searching for dark-matter annihilations. If the dark matter consists of weakly interacting massive particles, they can come together and annihilate into a cascade of lighter particles. (Image from Sky & Telescope.) Among the particles produced are very high-energy photons: gamma-rays. Those are what GLAST will be looking for, a process known as “indirect dark-matter detection,” in contrast to direct detection where a dark-matter particle bumps into an experiment here on Earth.

    Of course, dark matter doesn’t annihilate very often, or it all would have gone away by now. The interactions are very infrequent, so you’re most likely to see the gamma-ray signature in areas of high dark-matter density, such as the center of our galaxy or in clusters of galaxies. (The number of annihilations goes as the density squared, so you get a lot more in crowded regions.) We can imagine a future in which dark matter is no longer considered “dark,” so long as you look in the right part of the spectrum, and we use combinations of techniques to map out the dark matter distribution throughout the universe. Cosmologically speaking, the 21st century is going to be the Dark Ages, but in a good way.
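    The density-squared scaling is worth a one-liner; the numbers below are purely illustrative toy values, not a real halo model.

```python
# The annihilation rate per volume scales as the density SQUARED
# (two particles must meet), so modest density contrasts produce
# huge brightness contrasts. Densities in arbitrary units:

rho_halo_center = 100.0   # hypothetical density toward the galactic center
rho_local = 1.0           # hypothetical density out here

# rate ~ rho^2 x (cross section) x (velocity); the latter cancel in a ratio
boost = (rho_halo_center / rho_local) ** 2
print(boost)   # a 100x denser region is 10,000x brighter in gamma rays
```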

    It’s not all that easy, of course — sadly, there are other sources of gamma-rays in the universe other than dark-matter annihilations. It’s going to be a task to know for sure whether some individual source of gamma-rays is produced by DM annihilation or some more prosaic mechanism, such as an active galactic nucleus. Apparently, there are people (“astronomers”) who like to study those sources for their own sake, so it’s not a total loss. One way or the other, GLAST is going to be looking at the universe in an exciting new way.

  • The Lopsided Universe

    Here’s a new paper of mine, with Adrienne Erickcek and Mark Kamionkowski:

    A Hemispherical Power Asymmetry from Inflation

    Abstract: Measurements of temperature fluctuations by the Wilkinson Microwave Anisotropy Probe (WMAP) indicate that the fluctuation amplitude in one half of the sky differs from the amplitude in the other half. We show that such an asymmetry cannot be generated during single-field slow-roll inflation without violating constraints to the homogeneity of the Universe. In contrast, a multi-field inflationary theory, the curvaton model, can produce this power asymmetry without violating the homogeneity constraint. The mechanism requires the introduction of a large-amplitude superhorizon perturbation to the curvaton field, possibly a pre-inflationary remnant or a superhorizon curvaton-web structure. The model makes several predictions, including non-Gaussianity and modifications to the inflationary consistency relation, that will be tested with forthcoming CMB experiments.

    The goal here is to try to explain a curious feature in the cosmic microwave background that has been noted by Hans Kristian Eriksen and collaborators: it’s lopsided. We all (all my friends, anyway) have seen the pretty pictures from the WMAP satellite, showing the 1-part-in-100,000 fluctuations in the temperature of the CMB from place to place in the sky. These fluctuations are understandably a focus of a great deal of contemporary cosmological research, as (1) they arise from density perturbations that grow under the influence of gravity into galaxies and large-scale structure in the universe today, and (2) they appear to be primordial, and may have arisen from a period of inflation in the very early universe. Remarkably, from just a tiny set of parameters we can explain just about everything we observe in the universe on large scales.

    The lopsidedness I’m referring to is different from the so-called axis of evil. The latter (in a cosmological context) refers to an apparent alignment of the temperature fluctuations on very large scales, which purportedly pick out a preferred plane in the sky (suspiciously close to the plane of the ecliptic). The lopsidedness is a different effect, in which the overall amplitude of fluctuations is a bit different (just 10% or so) in one direction on the sky than in the other. (A “hemispherical power asymmetry,” if you like.)

    What we’re talking about is illustrated in these two simulations kindly provided by Hans Kristian Eriksen.

    Untilted CMB

    Tilted CMB

    I know, they look almost the same. But if you peer closely, you will see that the bottom one is the lopsided one — the overall contrast (representing temperature fluctuations) is a bit higher on the left than on the right, while in the untilted image at the top they are (statistically) equal. (The lower image exaggerates the claimed effect in the real universe by a factor of two, just to make it easier to see by eye.)
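    A toy calculation makes the effect concrete. This little sketch (my own illustration, not the analysis in the paper) modulates an isotropic Gaussian random field by a dipole pattern, T = s(1 + A cos θ), with A = 0.1 standing in for the claimed ~10% asymmetry; the 1-D strip of “sky” is just for simplicity:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Isotropic Gaussian "temperature" fluctuations on a 1-D strip of sky
    # (a stand-in for the full sphere, just to illustrate the modulation).
    n = 10_000
    iso = rng.normal(0.0, 1.0, n)

    # Dipole modulation: T(n_hat) = s(n_hat) * (1 + A * cos(theta)),
    # where theta is the angle from the asymmetry axis and A ~ 0.1
    # mimics the claimed ~10% hemispherical power asymmetry.
    theta = np.linspace(0.0, np.pi, n)
    A = 0.1
    tilted = iso * (1.0 + A * np.cos(theta))

    # The variance (power) now differs between the two hemispheres,
    # even though the underlying field is statistically isotropic.
    north = tilted[theta < np.pi / 2].var()
    south = tilted[theta >= np.pi / 2].var()
    print(north / south)
    ```

    The underlying fluctuations are exactly the same in both halves; only the overall amplitude is modulated, which is why the two simulated maps above look nearly identical at a glance.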

    What could cause such a thing? Our idea was that there was a “supermode” — a fluctuation that varied uniformly across the observable universe, for example if we were sampling a tiny piece of a sinusoidal fluctuation with a wavelength many times the size of our current Hubble radius.

    The blue circle is our observable universe, the green curve is the supermode, and the small red squiggles are the local fluctuations that have evolved under the influence of this mode. The point is that the universe is overall just a little bit more dense on one side than the other, so it evolves just slightly differently, and the resulting CMB looks lopsided.
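    The key geometric fact is that a sinusoid with a wavelength far larger than the observable universe looks, from inside our patch, like an almost perfectly linear gradient. Here is a quick check (illustrative numbers only; the wavelength-to-patch ratio and phase are arbitrary assumptions):

    ```python
    import numpy as np

    # A "supermode": a sinusoidal perturbation whose wavelength is many
    # times the size of the observable universe.
    L = 1.0                    # radius of our observable patch (arbitrary units)
    wavelength = 100.0 * L     # supermode wavelength >> Hubble radius
    k = 2 * np.pi / wavelength

    x = np.linspace(-L, L, 1001)        # positions across our patch
    supermode = np.sin(k * x + 0.7)     # arbitrary phase

    # A straight line captures essentially all of the variation across
    # the patch; the residual is a tiny fraction of the total swing.
    slope, intercept = np.polyfit(x, supermode, 1)
    residual = supermode - (slope * x + intercept)
    print(abs(residual).max() / (supermode.max() - supermode.min()))
    ```

    So to a local observer the supermode just makes one side of the sky slightly denser than the other, which is exactly the ingredient needed for the lopsidedness.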

    Interestingly, it doesn’t quite work; at least, not in a simple model of inflation driven by a single scalar field. In that case, you can get the power asymmetry, but there is also a substantial temperature anisotropy — the universe is hotter on one side than on the other. There are a few back-and-forth steps in the reasoning that I won’t rehearse here, but at the end of the day you get too much power on very large scales. It’s no fun being a theoretical cosmologist these days; all the data keeps ruling out your good ideas.

    But we didn’t give up! It turns out that you can make things work if you have two scalar fields — one that does the inflating, cleverly called the “inflaton,” and the other which is responsible for the density perturbations, which should obviously be called the “perturbon” but for historical reasons is actually called the “curvaton.” By decoupling the source of most of the density in the universe from the source of its perturbations, we have enough wiggle room to make a model that fits the data. But there’s not that much wiggle room, to be honest; we have an allowed region in parameter space that is not too big. That’s good news, as it brings the hope that we can make relatively precise predictions that could be tested by some means other than the CMB.

    One interesting feature of this model is that the purported supermode must have originated before the period of inflation that gave rise to the smaller-scale perturbations that we see directly in the CMB. Either it came from an earlier epoch of inflation, or from something entirely pre-inflationary.

    So, to make a bit of a segue here, this Wednesday I gave a plenary talk at the summer meeting of the American Astronomical Society in St. Louis. I mostly discussed the origin of the universe and the arrow of time — I wanted to impress upon people that the origin of the entropy gradient in our everyday environment could be traced back to the Big Bang, that conventional ideas about inflation did not provide straightforward answers to the problem, and that the Big Bang may not have been the beginning of the universe. I was more interested in stressing that this was a problem we should all be thinking about than in pushing any of my favorite answers, but I did mention my paper with Jennie Chen as an example of the kind of thing we should all be looking for.

    To an audience of astronomers, talk of baby universes tends to make people nervous, so I wanted to emphasize that (1) it was all very speculative, and (2) even though we don’t currently know how to connect ideas about the multiverse to observable phenomena, there’s no reason to think that it’s impossible in principle, and the whole enterprise really is respectable science. (If only they had all seen my bloggingheads dialogue with John Horgan, I wouldn’t have had to bother.) So I mentioned two different ideas that are currently on the market for ways in which influences of a larger multiverse might show up within our own. One is the idea of colliding bubbles, pursued by Aguirre, Johnson, and Shomer and by Chang, Kleban, and Levi. And the other, of course, was the lopsided-universe idea, since our paper had just appeared the day before. Neither of these possibilities, I was careful to say, applies directly to the arrow-of-time scenario I had just discussed; the point was just that all of these ideas are quite young and ill-formed, and we will have to do quite a bit more work before we can say for sure whether the multiverse is of any help in explaining the arrow of time, and whether we live in the kind of multiverse that might leave observable signatures in our local region. That’s research for you; we don’t know the answers ahead of time.

    One of the people in the audience was Chris Lintott, who wrote up a description for the BBC. Admittedly, this is difficult stuff to get all straight the very first time, but I think his article gives the impression that there is a much more direct connection between my arrow-of-time work and our recent paper on the lopsided universe. In particular, there is no necessary connection between the existence of a supermode and the idea that our universe “bubbled off” from a pre-existing spacetime. (There might be a connection, but it is not a necessary one.) If you look through the paper, there’s nothing in there about entropy or the multiverse or any of that; we’re really motivated by trying to explain an interesting feature of the CMB data. Nevertheless, our proposed solution does hint at things that happened before the period of inflation that set up the conditions within our observable patch. These two pieces of research are not of a piece, but they both play a part in a larger story — attempting to understand the low entropy of the early universe suggests the need for something that came before, and it’s good to be reminded that we don’t yet know whether stuff that came before might have left some observable imprint on what we see around us today. Larger stories are what we’re all about.