The Lopsided Universe

Here’s a new paper of mine, with Adrienne Erickcek and Mark Kamionkowski:

A Hemispherical Power Asymmetry from Inflation

Abstract: Measurements of temperature fluctuations by the Wilkinson Microwave Anisotropy Probe (WMAP) indicate that the fluctuation amplitude in one half of the sky differs from the amplitude in the other half. We show that such an asymmetry cannot be generated during single-field slow-roll inflation without violating constraints to the homogeneity of the Universe. In contrast, a multi-field inflationary theory, the curvaton model, can produce this power asymmetry without violating the homogeneity constraint. The mechanism requires the introduction of a large-amplitude superhorizon perturbation to the curvaton field, possibly a pre-inflationary remnant or a superhorizon curvaton-web structure. The model makes several predictions, including non-Gaussianity and modifications to the inflationary consistency relation, that will be tested with forthcoming CMB experiments.

The goal here is to try to explain a curious feature in the cosmic microwave background that has been noted by Hans Kristian Eriksen and collaborators: it’s lopsided. We all (all my friends, anyway) have seen the pretty pictures from the WMAP satellite, showing the 1-part-in-100,000 fluctuations in the temperature of the CMB from place to place in the sky. These fluctuations are understandably a focus of a great deal of contemporary cosmological research, as (1) they arise from density perturbations that grow under the influence of gravity into galaxies and large-scale structure in the universe today, and (2) they appear to be primordial, and may have arisen from a period of inflation in the very early universe. Remarkably, from just a tiny set of parameters we can explain just about everything we observe in the universe on large scales.

The lopsidedness I’m referring to is different from the so-called axis of evil. The latter (in a cosmological context) refers to an apparent alignment of the temperature fluctuations on very large scales, which purportedly picks out a preferred plane in the sky (suspiciously close to the plane of the ecliptic). The lopsidedness is a different effect, in which the overall amplitude of fluctuations is a bit larger (by about 10%) in one direction on the sky than in the opposite direction. (A “hemispherical power asymmetry,” if you like.)

What we’re talking about is illustrated in these two simulations kindly provided by Hans Kristian Eriksen.

Untilted CMB

Tilted CMB

I know, they look almost the same. But if you peer closely, you will see that the bottom one is the lopsided one — the overall contrast (representing temperature fluctuations) is a bit higher on the left than on the right, while in the untilted image at the top they are (statistically) equal. (The lower image exaggerates the claimed effect in the real universe by a factor of two, just to make it easier to see by eye.)

What could cause such a thing? Our idea was that there was a “supermode” — a fluctuation that varied uniformly across the observable universe, for example if we were sampling a tiny piece of a sinusoidal fluctuation with a wavelength many times the size of our current Hubble radius.

The blue circle is our observable universe, the green curve is the supermode, and the small red squiggles are the local fluctuations that have evolved under the influence of this mode. The point is that the universe is overall just a little bit more dense on one side than the other, so it evolves just slightly differently, and the resulting CMB looks lopsided.
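
To put that cartoon into symbols: a standard way to parametrize this kind of lopsidedness (this is the usual convention in analyses of the asymmetry, not anything specific to our paper) is as a dipolar modulation of an otherwise statistically isotropic temperature map,

\Delta T(\hat{n}) \;=\; s(\hat{n})\,\bigl[\,1 + A\,\hat{p}\cdot\hat{n}\,\bigr], \qquad A \sim 0.1 .

Here s(n̂) is a statistically isotropic random field, p̂ is the preferred direction, and the rough size of A is just the ~10% quoted above (the number is illustrative, not a fit). The supermode fits into this because a sinusoid whose wavelength is much longer than the Hubble radius looks, across our observable patch, like a smooth gradient, and it is that gradient that singles out the direction p̂.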

Interestingly, it doesn’t quite work; at least, not in a simple model of inflation driven by a single scalar field. In that case, you can get the power asymmetry, but there is also a substantial temperature anisotropy — the universe is hotter on one side than on the other. There are a few back-and-forth steps in the reasoning that I won’t rehearse here, but at the end of the day you get too much power on very large scales. It’s no fun being a theoretical cosmologist these days; all the data keeps ruling out your good ideas.

But we didn’t give up! It turns out that you can make things work if you have two scalar fields — one that does the inflating, cleverly called the “inflaton,” and the other which is responsible for the density perturbations, which should obviously be called the “perturbon” but for historical reasons is actually called the “curvaton.” By decoupling the source of most of the density in the universe from the source of its perturbations, we have enough wiggle room to make a model that fits the data. But there’s not that much wiggle room, to be honest; we have an allowed region in parameter space that is not too big. That’s good news, as it brings the hope that we can make relatively precise predictions that could be tested by some means other than the CMB.

One interesting feature of this model is that the purported supermode must have originated before the period of inflation that gave rise to the smaller-scale perturbations we see directly in the CMB. Either it came from an earlier stage of inflation, or from something entirely pre-inflationary.

So, to make a bit of a segue here, this Wednesday I gave a plenary talk at the summer meeting of the American Astronomical Society in St. Louis. I mostly discussed the origin of the universe and the arrow of time — I wanted to impress upon people that the origin of the entropy gradient in our everyday environment could be traced back to the Big Bang, that conventional ideas about inflation did not provide straightforward answers to the problem, and that the Big Bang may not have been the beginning of the universe. I was more interested in stressing that this was a problem we should all be thinking about than in pushing any of my favorite answers, but I did mention my paper with Jennie Chen as an example of the kind of thing we should all be looking for.

To an audience of astronomers, talk of baby universes tends to make people nervous, so I wanted to emphasize that (1) it was all very speculative, and (2) even though we don’t currently know how to connect ideas about the multiverse to observable phenomena, there’s no reason to think that it’s impossible in principle, and the whole enterprise really is respectable science. (If only they had all seen my bloggingheads dialogue with John Horgan, I wouldn’t have had to bother.) So I mentioned two different ideas that are currently on the market for ways in which influences of a larger multiverse might show up within our own. One is the idea of colliding bubbles, pursued by Aguirre, Johnson, and Shomer and by Chang, Kleban, and Levi. And the other, of course, was the lopsided-universe idea, since our paper had just appeared the day before. Neither of these possibilities, I was careful to say, applies directly to the arrow-of-time scenario I had just discussed; the point was just that all of these ideas are quite young and ill-formed, and we will have to do quite a bit more work before we can say for sure whether the multiverse is of any help in explaining the arrow of time, and whether we live in the kind of multiverse that might leave observable signatures in our local region. That’s research for you; we don’t know the answers ahead of time.

One of the people in the audience was Chris Lintott, who wrote up a description for the BBC. Admittedly, this is difficult stuff to get all straight the very first time, but I think his article gives the impression that there is a much more direct connection between my arrow-of-time work and our recent paper on the lopsided universe. In particular, there is no necessary connection between the existence of a supermode and the idea that our universe “bubbled off” from a pre-existing spacetime. (There might be a connection, but it is not a necessary one.) If you look through the paper, there’s nothing in there about entropy or the multiverse or any of that; we’re really motivated by trying to explain an interesting feature of the CMB data. Nevertheless, our proposed solution does hint at things that happened before the period of inflation that set up the conditions within our observable patch. These two pieces of research are not of a piece, but they both play a part in a larger story — attempting to understand the low entropy of the early universe suggests the need for something that came before, and it’s good to be reminded that we don’t yet know whether stuff that came before might have left some observable imprint on what we see around us today. Larger stories are what we’re all about.


40 thoughts on “The Lopsided Universe”

  1. I read about this on the BBC website and was extremely thrilled.

    Who knew that anisotropy would become one of the most important concepts in our scientific array?

    As well, the research features a remarkable young woman, Adrienne Erickcek. How on earth can anyone so young be so very accomplished?

  2. Pingback: Not Even Wrong » Blog Archive » Hints of ‘time before Big Bang’

  3. Sean,

    This is just excellent! I heartily agree that if a multiverse, or phylogenically developing repeating similarverse exists, we WILL be able to detect it.

    The math has been around for years. Some pretty direct evidence (mass distribution) not related to the CMB is available, and we understand the engineering necessities related to the existence of complex information and structure in the universe. Considering these circumstances, the fact that the CMB contains evidence for possible pre-big-bang existence should really not be a surprise at all.

    You are not ready to be assertive about the connection of this CMB observation and your work on time direction and process, and it is easy to understand why…it is much too early to make speculative assertions.

    Still, for the reasons I listed above, I believe personally that the nature of time process in the universe is linked to this observation and its pre-big bang implications…

    Very Exciting!

  4. Pingback: Chris Lintott’s Universe » In his own words

  5. It’s good to note, though, that the claimed hemispherical asymmetry is not yet convincingly statistically significant. Eriksen et al point that caveat out quite nicely in their paper – the significance is 99 percent, which is simply not very convincing yet. To convince an observational astronomer like me (I have seen my fair share of 3-sigma results that failed to materialize with better datasets), it will probably take the Planck dataset, or a more thorough foreground analysis of the WMAP data.

  6. Hi Sean,

    I was really taken aback by this BBC article saying that you claimed a connection between a large scale asymmetry and testing the “multiverse”! Perhaps there is some very tenuous connection to what happened before inflation but nothing like what is implied in that article. Your post has made things a bit clearer.

    By the way (in case some readers are interested in some background), the idea of a spontaneous breaking of isotropy by a “supermode” was first proposed by Chris Gordon et al (here and here). The difficulty with testing such ideas lies not only in the low significance of the observation, but also in the a posteriori nature of the way that significance was inferred.

    Fortunately, if the temperature anisotropy is indeed telling us that there is an asymmetry, it makes a prediction for the polarization pattern of the microwave background anisotropies. Cora Dvorkin, a grad student at Chicago, did some nice work last year which shows how we can test whether this observation is telling us that there is a significant anomaly in the first place. This can be applied to much better polarization data that’s coming in the next couple of years.

    The larger stories draw in the crowds, but sometimes it’s nice to mention the smaller stories, the careful studies that sort out the wheat from the chaff of observations, because it’s on the basis of these analyses that the big ideas live or die.

    Hiranya

  7. The extra field certainly participates in re-heating, and contributes to the local density; that’s where perturbations come from. No effect on nucleosynthesis, as everything is completely thermalized long before you get there.

    Hiranya, thanks, I should have done a better job of referring to the earlier work in the blog post. They are referenced in the paper, but even there we were up against a word limit and didn’t go into any detail. Chris’s work is certainly right on topic; we went further in connecting the supermode to inflation.

  8. I’ll take care to look at your paper later. For the moment, are you saying that this “lop-sidedness” cannot be explained by Gaussian statistics? What I mean is, if I ask two people to flip a coin 100 times, I’m not surprised if one gets heads 10% more frequently than the other one (in fact, I think the differences will typically be larger than that… but of course this relates to whether we flip the coin 100 times, 1000 times, or 10,000 times… in the latter case I agree a 10% difference is quite an anomaly).

  9. Nevermind… I only had to glance at your paper to see my point is addressed in your references.

  10. For a fun time, align the two images rotated 90 degrees (either way) on your monitor, then view them 3D ala the “Magic Eye” images. For additional fun, alternate visualizing the image as being convex and concave.

    As for the universe, unless it were to be completely homogeneous, wouldn’t it then necessarily be lopsided? Or is the lopsidedness greater than the standard deviation to be expected from a random distribution?

  11. Albatross (#13): this is what I was getting at in the post above yours, but apparently it has been checked that the difference is statistically significant. An initial concern of mine was, how is this check performed? Sometimes this is harder to do than you’d guess, because it’s tempting to calculate the probability of obtaining a certain imbalance (pointing in a given direction) when really there’s no reason for us to care about one direction more than another, so you should account for the possibility of the imbalance pointing in any direction. You can account for this by thinking carefully, but I prefer an empirical approach, where one simply runs a huge number of simulations and sees how many have the peculiar feature one is looking for. From how Sean and his collaborators describe the work they cite, it appears this was done, and anisotropies like what is observed occur in less than 1% of the sample.

  12. The statistical significance is obviously an important issue. Straightforwardly, it’s a 3-sigma result, which is pretty good if you really think you understand all of the systematic errors. Maybe you don’t, of course. (Certainly they do take into account the fact that you could find an anisotropy in any direction a priori, thus raising the chance of a random effect.) The WMAP folks were skeptical of the initial claim, but they seem to admit that it’s there, even in subsequent years’ data. One of the reasons why it’s important to get explicit theoretical models is that we can then ask what other tests can be done, besides the CMB.

  13. Okay, one last time: we’re not here to discuss Peter, his blog comment policies, his notions of etiquette, or his sex life. I’ve deleted a bunch of things, and will continue to do so, so why bother?

  14. Hans Kristian Eriksen

    Just a few comments:

    First of all, as several of you have noted already, the question of statistical significance is of course the over-shadowing question in all of this. And here there are some different schools of thought around. On the one hand, if you take a frequentist approach, you simply ask “Given this particular model, how often does a particular data set have such-and-such properties?” In this particular case, we would ask “Given that the underlying model is indeed homogeneous and isotropic, how often would we see a *more* asymmetric universe than the one we actually observe?” This question is easily answered using massive computer simulations, and the answer turns out to be <~ 1%. So it’s not very likely — but it could be due to “bad luck”.
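
    (To make that logic concrete, here is a deliberately crude toy version of the frequentist test. All of the numbers below are placeholders rather than anything from the real analysis, which uses full simulations of the WMAP sky and searches over all possible dipole directions; the only point is the structure of the test: simulate many isotropic skies and count how often chance alone produces something as lopsided as the data.)

    import numpy as np

    rng = np.random.default_rng(0)
    N_MODES = 500     # stand-in for the number of independent large-angle modes per hemisphere
    N_SKIES = 10000   # number of simulated isotropic skies
    OBSERVED = 0.10   # the ~10% amplitude difference quoted in the post (illustrative only)

    def asymmetry(rng):
        # fractional difference in rms fluctuation amplitude between two halves of an isotropic sky
        north = rng.standard_normal(N_MODES).std()
        south = rng.standard_normal(N_MODES).std()
        return abs(north / south - 1.0)

    sims = np.array([asymmetry(rng) for _ in range(N_SKIES)])
    print("fraction of isotropic skies at least this lopsided:", np.mean(sims >= OBSERVED))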

    The other line of thought is the Bayesian approach, in which you rather ask “Given this particular observed data set, which model is more likely?” A common tool to compare such models is the Bayesian evidence, which is nothing but the average likelihood over the prior volume. The nice thing about this is that it implements Occam’s razor in a quantitative approach, and so it addresses Hiranya’s “a-posteriori concerns” in a natural way — if it was just an effect of “noticing something odd in the data”, it would get heavily penalized by the larger prior volume.
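
    (In symbols, for data d and a model M with parameters θ, the evidence is the likelihood averaged over the prior,

    E(M) = \int \mathcal{L}(d \mid \theta, M)\, \pi(\theta \mid M)\, d\theta ,

    and the number quoted below is the ratio E(anisotropic)/E(isotropic). The anisotropic model’s extra parameters, a modulation amplitude and a direction, enlarge its prior volume, and that is exactly where the Occam penalty comes from.)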

    As it turns out, the Bayesian evidence ratio between the anisotropic and the isotropic models comes out to be roughly 6-to-1 in favour of the anisotropic model, even after taking into account “Occam’s razor”, which conventionally is considered “substantial evidence”. It’s not decisive, but it’s most definitely interesting.

    So that’s the current situation regarding statistical significances.

    A second issue concerns systematic errors. In this respect, one is well advised to be cautious. However, one should also note that a major advantage of the large-scale CMB temperature measurements is that they are extraordinarily clean. For an astronomer who is used to Lyman-alpha measurements, or a CERN particle physicist who spends his time worrying about the train schedule between Paris and Geneva, it would be like entering a completely different world when going to large-scale CMB temperature measurements. On the one hand, the signal-to-noise ratio on ~5 degree scales in WMAP is like three or four orders of magnitude. So it doesn’t matter what your noise is — it’s irrelevant on these scales. (As far as large scales go, WMAP is a perfect imager.) On the other hand, WMAP measures at five different frequencies, and this gives us a pretty good image of the foreground composition of the data. We (and, by now, many other independent groups) have also done lots of other tests with respect to these issues, and the tests always come out negative — the asymmetry is very robust with respect to foregrounds and systematic issues.

    The way to think about this is this: If the asymmetry disappears when Planck arrives, then the entire WMAP *picture* we are so used to looking at is just plain wrong. It could of course happen — but then the asymmetry would be the smallest of our concerns. In that case, WMAP couldn’t be trusted at all, and all the stuff we’ve been talking about for the last five years or so would pretty much have been a waste of time… 🙂

    So I think it’s fair to say that, from an image analysis point of view, the asymmetry is definitely there: The picture is what it is. The question is just what it means: Is it a ~1% statistical fluke? Or is it a signature of new physics? For now, that remains uncertain. But fortunately, Planck will tell us a whole lot more about this in just a few years. However, right now I think it’s very fun indeed to see theorists picking up on this issue in a serious way and coming up with new ideas, like this new one presented by Adrienne, Sean and Mark 🙂

  15. Temperature anisotropy occurs in instances where what’s generating (or has generated) the heat isn’t a “constant” or is partially obscured. (A cooker’s hotplate half covered with a metal sheet will display temperature anisotropy when viewed from far above.)

  16. Though I wonder, in layman’s terms: does the temperature anisotropy evident there “shift” a little during long-term observation of the CMB?

  17. Sean, an excellent discussion, thank you; and also thanks for the discussion of the origin of the universe and the arrow of time. In my unsophisticated way I have long been intrigued by the arrow of time. Sorry if I digress from the topic of your article.

    I may be simply verifying an old cliché – to paraphrase: “It is better to lurk and let the world think I’m a fool than to comment and remove all doubt” – I (possibly foolishly) speculate as follows and find myself in an apparent contradiction.
    If we view the universe as the ultimate information processor and invoke Landauer’s Principle, then, if entropy is at a minimum at the instant of the “big bang”, information is at a maximum at that instant. If the universe then evolves towards a point where eventually entropy reaches a maximum value (near infinite), then the universe’s information content evolves towards zero. Here I am assuming that time, necessary for the universe to “evolve”, is associated with entropy increasing.

    At the risk of wandering into metaphysics, I make the assumption that “information” may be equated in some way with “existence”. I.e. for an entity to exist, that entity must be associated with information: otherwise we have no way of knowing that it exists. An entity that contains zero information then cannot exist. Thus the universe could be said to evolve from a state of “maximum existence” towards a state of “non-existence”. If I may stretch an analogy, this situation appears to be similar to the evolution of a virtual particle, and time would be a variable used to describe the evolution of the wave function.

    If I now apply the uncertainty principle to the “virtual universe”, i.e. d(energy) x d(time of measurement) is greater than (h/2pi), and assume that at any instant the total energy of the universe is zero, hence d(energy) must also be zero, then in this greater spacetime d(time) must be arbitrarily large: i.e. I arrive at a steady-state universe, which would seem to indicate that no information is being lost and entropy cannot be increasing. In addition, for a computational operation in which 1 bit of logical information is lost, the amount of entropy generated is at least k ln 2, and so the energy that must eventually be emitted to the environment is E ≥ kT ln 2. (The environment here is the spacetime that the universe inhabits as a virtual entity.) However, if the total energy of the universe is zero at every instant, it cannot emit energy into its environment, so information cannot be lost and entropy cannot be increasing!

    This speculation would seem to indicate either that the total energy of the universe is non-zero, that Landauer’s Principle cannot be applied to the universe and hence the universe is not a “computing machine”, or that time cannot be associated with increasing entropy (or, more likely, that I have made some gross logical error)!
    Have I erred?

  18. >Okay, one last time: we’re not here to discuss Peter,
    >his blog comment policies, his notions of etiquette,
    >or his sex life. I’ve deleted a bunch of things, and
    >will continue to do so, so why bother?

    Hmmmm…. so that means that there is no safe haven for people who try to criticize Woit – because he deletes them all if you try that on his blog. I am sorry, but I was really getting sick of nobody willing to hear our side of the story. I apologize if I was a bit brutal with him here.

    Anyway, thanks for answering my silly questions about reheating, Sean.

  19. Lawrence B. Crowell

    I don’t know if this will help, but this might be due not to other universes and the like, but to the structure of the quantum spacetime in the pre-inflationary period.

    We might imagine (imagine — an important word) that the four dimensional spacetime of the early universe is defined on an ADM “sandwich,” where one spatial surface contains the initial conditions of the universe and the other is a “slice” taken at the onset of inflation. This four dimensional volume contains all the quantum information of the universe. We might think of it as a set of all quantum fields, including gravity, and it is a space or superspace described by a lattice system, say the E_8 group. The E_8 contains the Clifford Cl(8), or the “120”, the system of roots given by the 120-cell, plus the “128.” We will focus on the 120, which are a representation of quaternionic fields.

    The 120-cell contains 120 dodecahedra, three for each three-dimensional face. For the 120-cell the boundary space is SO(3)/A_5, for A_5 the alternating group of the dodecahedral group. This then defines the bounding three-sphere of the four-volume as a Poincaré homology sphere. By the Rokhlin theorem a region of a four-dimensional space that is bounded by a homology 3-sphere is a spin manifold. We then can embed the exceptional group F_4 in the Cl(8), which is by Musin’s theorem the minimal sphere-packing configuration of a 4-dim space. The “spheres” we might think of as Planck units of volume. For the four-volume restricted to the 24-cell, the bounding space is then given by the quotient space SO(3)/I, where I is the binary icosahedral group given by the cyclotomic field on F_9.

    This binary discrete structure might then impose a “dipole” configuration on the spacetime. I emphasize the word “might.” Yet this strange configuration to the CMB might be a signature of the quaternionic structure of the universe in the inflationary or pre-inflationary phase. If so then this would mean this can be understood entirely from the observed nature of the universe, with no need for unobservable “other universes.”

    Lawrence B. Crowell

  20. Lawrence B. Crowell

    PS.

    I forgot to include that the breaking up of the 120 to the 24-cell, the breaking of Cl(8) to the smaller exceptional group F_4, might physically be associated with the transition of the universe from a quantum wave functional over all (or many) possible metric configurations to a classical or semi-classical spacetime.

    Tim Eby on Jun 9th, 2008 at 5:14 pm WROTE:

    However, if the total energy of the universe is zero at every instant, it cannot emit energy into its environment, and information cannot be lost and entropy cannot be increasing!

    I think the existence of entropy might not be due to the outright destruction of information, but rather to its concealment. In fact I will use the term encryption, where quantum information is transformed in ways that an observer who lacks the appropriate “key” is unable to decipher. In our physics classes we coarse-grain things or do “sums over states” and other things which are a way of burying away things we can’t tractably work with.

    Lawrence B. Crowell

  21. As to the image, “is what it is”, and then to think “genus figures could be allocated to the description of the universe” may actually then be held relevant?

    WMAP has produced a new, more detailed picture of the infant universe. Colors indicate “warmer” (red) and “cooler” (blue) spots. The white bars show the “polarization” direction of the oldest light. This new information helps to pinpoint when the first stars formed and provides new clues about events that transpired in the first trillionth of a second of the universe.

    Might it then not be as if “holes exist in the universe” with which such calculations made in terms of Lagrangian’s that allow satellites to traverse this universe given space in the simplest energy configuration. So over all, such polarizations would ultimately show an outcome which does rest in the valley, as that WMAP. It reminded me of Wayne Hu’s polarization map.

    B-modes retain their special nature as manifest in the fact that they can possess a handedness that distinguishes left from right. For example here are two polarization fields with the same structure but in the E-mode on the left and the B-mode on the right:

  22. Lawrence B. Crowell

    Plato on Jun 9th, 2008 at 8:35 pm

    As to the image, “is what it is”, and then to think “genus figures could be allocated to the description of the universe” may actually then be held relevant?

    —————-

    Topology might indeed play a role here. The question is what topology, and what physics does it imply?

    Lawrence B. Crowell

  23. Gary Bridgewater

    This sort of thing usually leads to a new theory. Are there other whole-sky observation sets that show any such striking asymmetry? Visible light? IR? Any correlation with them?

  24. I want to know: does such lopsidedness have anything to do with the uncertainty (maybe larger than 3 sigma) in the polarization spectra from WMAP at the small angular scales corresponding to the large-scale anisotropy of the CMB?

Comments are closed.
