A New CMB Anomaly?

One of the important features of the universe around us is that, on sufficiently large scales, it looks pretty much the same in every direction — “isotropy,” in cosmology lingo. There is no preferred direction in space along which the universe would look different than it does in the perpendicular directions. The most compelling evidence for large-scale isotropy comes from the Cosmic Microwave Background (CMB), the leftover radiation from the Big Bang. It’s not perfectly isotropic, of course — there are tiny fluctuations in temperature, which are pretty important; they arise from fluctuations in the density, which grow under the influence of gravity into the galaxies and clusters we see today. Here they are, as measured by the WMAP satellite.

Nevertheless, there is a subtle way for the universe to break isotropy and have a preferred direction: if the tiny observed perturbations somehow have a different character in one direction than in others. The problem is, there are a lot of ways this could happen, and there is a huge amount of data involved with a map of the entire CMB sky. A tiny effect could be lurking there, and be hard to see; or we could see a hint of it, and it would be hard to be sure it wasn’t just a statistical fluke.

In fact, at least three such instances of apparent large-scale anisotropies have been claimed. One is the “axis of evil” — if you look at only the temperature fluctuations on the very largest scales, they seem to be concentrated in a certain plane on the sky. Another is the giant cold spot (or “non-Gaussianity,” if you want to sound like an expert) — the Southern hemisphere seems to have a suspiciously coherent blob of slightly lower than average CMB temperature. And then there is the lopsided universe — the total size of the fluctuations on one half of the sky seems to be slightly larger than on the other half.

All of these purported anomalies in the data, while interesting, are very far from being definitive. Although most people seem to agree that they are features of the data from WMAP, it’s hard to tell whether they are all just statistical flukes, or subtle imperfections in the satellite itself, or contamination by foregrounds (like our own galaxy), or real features of the universe.

Now we seem to have another such anomaly, in which the temperature fluctuations in the CMB aren’t distributed perfectly isotropically across the sky. It comes by way of a new paper by Nicolaas Groeneboom and Hans Kristian Eriksen:

Bayesian analysis of sparse anisotropic universe models and application to the 5-yr WMAP data

Sexy title, eh? Here is the upshot: Groeneboom and Eriksen looked for what experts would call a “quadrupole pattern of statistical anisotropy.” Similar to the lopsided universe effect, where the fluctuations seem to be larger on one side of the sky than the other, this is an “elongated universe” effect — fluctuations are larger along one axis (in both directions) as compared to the perpendicular plane. Here is a representation of the kind of effect we are talking about — not easy to make out, but the fluctuations are supposed to be a bit stronger near the red dots than in the strip in between them.

It’s not a very large signal — “3.8 sigma,” in the jargon of the trade, where 3 sigma basically means “begin to take seriously,” but you might want to get as high as 5 sigma before you say “there definitely seems to be something there.” However, the WMAP data come in different frequencies (V-band and W-band), and the effect seems to be there in both bands. Furthermore, you can look for the effect separately at large angular scales and at small angular scales, and you find it in both cases (with somewhat lower statistical significance, as you might expect). So it’s far from being a gold-plated discovery, but it doesn’t seem to be a complete fluke, either.
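
For a rough feel for the jargon, here is a minimal sketch (just the textbook two-sided Gaussian conversion, in Python; the paper’s actual significance comes from a full Bayesian analysis, not this naive translation) of how many sigma corresponds to what chance of a pure noise fluke:

    from math import erfc, sqrt

    # Two-sided Gaussian tail probability: the chance that pure noise
    # fluctuates at least this many sigma away from zero.
    def p_value(sigma):
        return erfc(sigma / sqrt(2.0))

    for s in (3.0, 3.8, 5.0):
        print(f"{s} sigma -> p = {p_value(s):.1e}")
    # 3.0 sigma -> p = 2.7e-03
    # 3.8 sigma -> p = 1.4e-04
    # 5.0 sigma -> p = 5.7e-07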

Remember, looking for any specific effect is quite a project — there is a lot of data, and the analysis involves manipulating huge matrices, and you have to worry about foregrounds and instrumental effects. So why were these nice folks looking for a power asymmetry along a preferred axis in the sky? Well, you might recall my paper with Lotty Ackerman and Mark Wise, described in the “Anatomy of a Paper” series of blog posts (I, II, III). We were interested in whether the (hypothetical) period of inflation in the early universe might have been anisotropic — expanding just a bit faster in one direction than in the others — and if so, how it would show up in the CMB. What we found was that the natural expectation was a power asymmetry along the preferred axis, and we gave a bunch of formulas by which observers could actually look for the effect. That is what Nicolaas and Hans Kristian did, with every expectation that they would establish an upper limit on the size of our predicted effect, which we had labelled g*. But instead, they found it! The data are saying that

g* = 0.15 ± 0.039.
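
Roughly speaking, the parametrization modulates the primordial power spectrum along a preferred axis n̂, as P(k) → P(k)[1 + g* (k̂·n̂)²]. Here is a toy sketch in Python (the axis, the normalization, and the scale-independence of g* are all made-up simplifications for illustration; the real analysis confronts this form with the full CMB likelihood):

    import numpy as np

    def modulated_power(P_iso, k_hat, n_hat, g_star=0.15):
        # Isotropic power times a quadrupolar modulation along the
        # preferred axis n_hat (with g* taken scale-independent here).
        return P_iso * (1.0 + g_star * np.dot(k_hat, n_hat) ** 2)

    n_hat = np.array([0.0, 0.0, 1.0])  # preferred axis (made up)
    print(modulated_power(1.0, np.array([0.0, 0.0, 1.0]), n_hat))  # along the axis: 1.15
    print(modulated_power(1.0, np.array([1.0, 0.0, 0.0]), n_hat))  # perpendicular: 1.00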

So naturally, Lotty and Mark and I are brushing up on our Swedish in preparation for our upcoming invitations to Stockholm. Okay, not quite. In fact, it’s useful to be very clear about this, given the lessons that were (one hopes) learned in John’s series of posts about Higgs hunting. Namely: small, provocative “signals” such as this happen all the time. It would be completely irresponsible just to take every one of them at face value as telling you something profound about the universe. And the more surprising the result — and this one would be pretty darned surprising — the more skeptical and cautious we have every right to be.

So what are we supposed to think? Certainly not that these guys are just jokers who don’t know how to analyze CMB data; the truth couldn’t be more different. But analyzing data like this is really hard, and other groups will doubtless jump in and do their own analyses, as they should. It’s certainly possible that there is a small systematic effect in WMAP — “correlated noise” — rather than in the universe. The authors have considered this, of course, and it doesn’t seem to fit the finding very comfortably, but it’s a possibility. The very good news is that the kind of correlated noise one would expect from WMAP (given the pattern it used to scan across the sky) is completely different from what we would worry about for the upcoming Planck mission, scheduled to launch next year.

Or, of course, we could be learning something deep about the universe. Maybe even that inflation was anisotropic, as Lotty and Mark and I contemplated. Or, perhaps more plausibly, there is some single real effect in the universe that is conspiring to give us all of the tantalizing hints contained in the various anomalies listed above. We don’t know yet. That’s what makes it fun.

37 thoughts on “A New CMB Anomaly?”

  1. A semi-serious question: How many papers suggesting weird things to look for in the CMB have been written over the years, vs. how many weird things have been found? Does this “3.8 sigma” result include a trials penalty for this effect? (Will a trillion Cosmic Variance bloggers typing out a trillion papers eventually type up the Theory of Everything through pure chance?)

  2. Congratulations in advance Sean! Planck is highly likely to verify this effect to at least 5 sigma. Even if it doesn’t, we will probably learn something just as significant, or more, from your predictions and this careful field work.

  3. It’s nice to hear that Planck will be able to give some independent confirmation (and on that note, it would be great to have a layman’s guide to the detectors on Planck, Wilkinson, and COBE, to shine a little light on how independent the measurements will be), but is there anything else we can look at?

    The CMB is truly impressive, yes, but to an outsider it feels a bit … ‘thin’. It’s as if much of modern cosmology hangs on just this one map … That’s not the case of course (I do know of the evidence for dark matter and energy), but it would be nifty if any of these effects might show up outside of the CMB.

  4. Oh, and why wait for an excuse to learn Swedish? One can never speak too many languages!

  5. The Planck low-frequency instrument (LFI) will use similar detector technology to WMAP — high electron mobility transistors (HEMTs). This is the portion of Planck that has spectral overlap with WMAP. The high-frequency instrument (HFI) will use bolometers at its focal plane, which is a pretty different technology. The telescope designs are fairly similar for technical reasons.

  6. I have a very basic question whenever I see these map data of the CMB. I’m sure it is probably simple astronomy, but here goes:

    Why is it an ellipse?

    Precisely what are we looking at that would make it elliptical? Is this a map of the sky (and if so, still… why elliptical?) or in some way representative of the entire universe? How can an ellipse represent a map of the entire universe? When looking at this map, what is our point of reference? Is this the view from Earth in some manner?

    I guess I’m looking for a big ‘you are here’ spot on the ‘map’ 🙂

  7. Luckily arXiv.org keeps old versions, and WMAP posts to the arXiv on submission, so you can still look at section 8.5 of Spergel et al. 2006 (astro-ph/0603449v1) and see that the WMAP team looked for an effect like this in the 3-year data. We got Δχ² = 3.4 for ℓ_max = 1 and Δχ² = 8 for ℓ_max = 2, which are quite consistent with the expected improvement for random data with 3 or 8 new parameters. The %$#@&^! referee made us take it out.

    None of these large angular scale effects will be improved on by Planck. WMAP is cosmic variance limited, and Planck does not have a good scan pattern, which will limit its performance at low ℓ.

  8. Ned, this is not a low-ell effect; it’s (supposed to be) at every ell. The fact that Planck has a different scan pattern is the whole reason it will be a good test; the correlated noise will be of a completely different form. (If I understand correctly.)

    manyoso, it’s just a projection of a sphere (in this case, the sky). An ellipse is just convenient.
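
    If you want to see how that works, here’s a toy sketch (using matplotlib’s built-in Mollweide projection, with random points standing in for the sky; this is not the actual WMAP plotting pipeline):

        import numpy as np
        import matplotlib.pyplot as plt

        # A Mollweide projection wraps the whole sphere (here, the sky) into
        # an ellipse, just as a world map flattens the globe. Earth sits at
        # the center of the sphere, looking out in every direction at once.
        lon = np.random.uniform(-np.pi, np.pi, 500)
        lat = np.arcsin(np.random.uniform(-1.0, 1.0, 500))  # uniform on the sphere

        ax = plt.subplot(111, projection="mollweide")  # expects radians
        ax.scatter(lon, lat, s=2)
        ax.grid(True)
        plt.show()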

  9. Hans Kristian Eriksen

    Hi Ned,

    I think perhaps you may be a bit confused here. You seem to be referring to the asymmetry feature, for which there indeed was an analysis in Spergel et al., with the results you quote above. However, this new effect is completely different from that, and has nothing to do with the asymmetry. The new effect is “cylinder symmetric”, with an overall quadrupolar pattern. See Figure 2 of our paper to get an intuitive feel for what the signal looks like — essentially, the signal is correlated along the plane normal to the preferred direction, and unchanged along the preferred direction. It’s purely an a_lm correlation effect, not a power effect. (Any specific direction in the model has identical amounts of power on the two hemispheres.) Note also, as Sean already pointed out, that this effect is *not* a low-ℓ effect, but is seen independently both in ℓ = 2–100 and in ℓ = 100–400.

    As far as Planck vs. WMAP goes, this *is* one case where Planck’s scanning strategy will be very useful. As you know, Planck scans on essentially great circles through the ecliptic poles, and these will lie almost perpendicular to the signature found here, not parallel. WMAP’s scanning, on the other hand, is more similar to the signature in question. Further, since the effect is seen on all l’s, Planck will increase the S/N greatly on smaller scales. Most definitely, Planck will do a much better job at measuring (or dismissing) this effect than WMAP.

    So, at the moment, this looks quite interesting — but, as we point out in the paper, caution is warranted with respect to correlated noise. We need proper 5-year noise simulations in order to assess this properly.

    Finally, as far as the Spergel et al. analysis goes, I think it’s safe to assume that the reason the referee asked for this to be removed was simply that the execution of the analysis presented there was flawed on so many levels, and this was quickly demonstrated by two other papers (see Gordon et al. 2007 and Eriksen et al. 2008). Three specific examples: 1) they neglected to marginalize over monopoles and dipoles; 2) the analysis was done at way too low a resolution (Nside=8, lmax=23), resulting in serious underestimation of the total significance, since the asymmetry is seen *at least* up to l=40; and 3) the degradation process from Nside=512 to Nside=8 was improperly executed, in that the resulting maps were not properly bandwidth limited, which compromised the likelihood evaluation. A proper analysis, in which these points were corrected, showed that the asymmetry indeed *is* statistically significant (although marginally), even within the very conservative Bayesian evidence framework. Again, see Eriksen et al. 2008 (astro-ph/0701089) for full details.

    Anyway, even if one happens not to like the asymmetry effect, due to so-called “a-posteriori” arguments, one can never claim that the current anisotropy detection is an “a-posteriori” effect. In this case, theoreticians made a specific prediction, and then that very same signature was indeed found in the data. Statistically, the situation is quite clean — and fortunately, more data will make it even clearer.

    The main outstanding issue right now is correlated noise. And only a proper analysis of 5-year simulations can resolve this question.

  10. Do the supposed anisotropies express themselves in any other kind of data (visible, radio, etc.)?

  11. Sean:
    As far as the Swedish comment goes, shouldn’t you be in line behind Guth et al., since your model depends on inflation? Maybe this year!!! Has there ever been a theoretical cosmologist winning the prize? According to Komatsu et al., WMAP data “confirm the inflationary mechanism”. Hmmm. Does this mean that alternatives to inflation are now rejected by the physics community?

    As a retired engineer (radar systems engineer) I find the WMAP instrument very interesting. And the calibration and removal of background data and potential biases are very impressive.

    However, one thing bothers me concerning the drawing of conclusions about physical models and their parameters, particularly assigning confidence bounds: the CMB data are fixed. You only have one data set. Yes, you can collect more data to improve the SNR, but there is still one data set.

    I believe you have expressed a bias in favor of the multi-verse idea in the past. If this idea is sound, would you not expect the CMB data then to be just one sample from this huge 10^500 (or whatever) sample space? How can anyone realistically talk about confidence levels if this idea is correct?

    Another comment. It seems that all of the baseline data analyses assume Gaussian statistics. If this assumption is not true, then drawing any conclusions regarding confidence levels is really suspect, correct?

    Just one last comment. What confidence level must the data reach in order to compel indisputable acceptance of a physical model of the early universe?

    Thanks for your time, and I hope to see the announcement in the fall.

  12. Man, I only claim to understand about 30% of this and your previous “anatomy of a paper” post, but it’s still damn fun to feel like we’re part of the process! Thanks so much for your updates, Sean!

  13. Hans Kristian Eriksen

    Hi Cecil!

    I have two comments on your questions. First, yes, it is a “problem” that we only have one CMB sky. It would definitely be fun to have more 🙂 However, this isn’t really a big problem for the statistical treatment. In particular, the Bayesian framework is very well suited to handle these kinds of problems. In this language, what you want is the posterior, P(theta|data), where theta is some set of parameters of interest and data are your observations, which is *one* data set. Then, by Bayes’ theorem you can write

    P(theta|data) = P(data|theta) * P(theta) / P(data)

    In this expression, P(data|theta) is the so-called likelihood, which is something you can evaluate for any set of parameters. P(theta) is a prior (which should capture what you already know about theta), and P(data) is the so-called Bayesian evidence, which doesn’t depend on theta at all.

    The best estimate of theta is the value at which P(theta|data) peaks. And this is something you can compute even for one data set — as we did in the present paper. But of course, with more independent universes the uncertainties would be smaller, and that’s always nice. But there’s no fundamental difference between one, two, ten, or a thousand — the statistical treatment and interpretation is well defined in every case.
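
    If it helps, the whole procedure fits in a few lines. Here is a toy sketch (with a made-up one-parameter Gaussian likelihood standing in for the actual CMB likelihood; the numbers merely echo the g* result above):

        import numpy as np

        # Toy posterior for one parameter theta, given ONE data set:
        # a single measurement d with Gaussian noise sigma.
        d, sigma = 0.15, 0.039

        theta = np.linspace(-0.2, 0.5, 2001)                 # grid over theta
        log_like = -0.5 * ((d - theta) / sigma) ** 2         # P(data|theta), up to a constant
        prior = np.ones_like(theta)                          # flat prior P(theta)

        post = np.exp(log_like) * prior
        post /= post.sum() * (theta[1] - theta[0])           # P(data) is just this normalization

        print("best estimate:", theta[np.argmax(post)])      # the peak of P(theta|data)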

    Then, as far as the assumption of Gaussianity goes, you’re absolutely right that the quoted significances could be somewhat off if the sky is non-Gaussian. However, this would be a *much* bigger surprise — then one would *really* have a big result in hand! So assuming Gaussianity is the conservative approach — it’s the null-hypothesis, so to speak, and all other assumptions would be much more controversial.

  14. Lawrence B. Crowell

    The anisotropy doubtless has a Legendre-type expansion. These are usually Gaussian, where many of these deviations have some measure of kurtosis, such as the “big hole” discovered a year ago.

    From the perspective of quantum physics, if these results are real, the impact is interesting. It might suggest something about deviations from the usual quantum theory, which, in combination with general relativity, might not be completely unexpected. These might be indicators of a back-reaction by spacetime to quantum fields weakly coupled to spacetime during inflation.

    Lawrence B. Crowell

    The existence of “anomalies” that could have been due to random fluctuations (“cosmic variance”, indeed) just goes to show how slippery the whole idea and practice of “randomness” is. Unfortunately, we can never know whether a particular happening could have been caused by random factors or not. We can make up, by discretionary fiat, some standard like 95% chance of being chance (I correctly framed it, just making the wording cute) or 99.9% or whatever, but there’s no actual demarcation and no way to tell. Probability can’t be falsified. For example, I could try to “falsify” that a coin was fair by claiming if it came up 1,000 heads in a row how could it be, but that could (and eventually *would*) happen.

    All that leads to weird paradoxes of course. For example, IDers could be quite right that there’s only a 1:10^400 chance of atoms coming together in the right way to create life, but in an infinite universe (BTW, is it?) then in every 10^397 or so cubic light years, that would happen anyway. We’d be it, left to wonder how it could be. That’s why those arguments against “life being random” may not be any good. (But, not to be confused with claims of varying physical constants.) Who knows, or can know? Max Tegmark likes to play with stuff like that.

    There is, however, a fundamental issue about the definition of physical laws: suppose there are many, many universes, but they have “the same laws as ours.” Well, in some of them Co-60 will decay on average after 22 days instead of about 5 years (monkeys on the typewriter: sauce for the goose …). So, are the laws of physics really “different” there, making a contradiction, or ambiguous, to the extent laws literally are given in terms of empirical results? What if the scientists there used theory to calculate that it “should be” a 5-year half-life? That would be empirically “false”, yet they’d be right in principle. Just food for thought.

  16. Neil,

    If what you said were true, then we couldn’t ever be confident about anything. Any experimental results that we ever obtain are always statistical in nature: there’s always a probability that we’re wrong, either due to statistical or systematic errors. The best that we can do, and what is done on any good analysis, is to place statistical limits upon any such results.

    We can then simply approach the system in question with a Bayesian approach and get some degree of confidence as to what’s going on. With your 1000 coin flips example, for instance, we would compute it as follows. Let’s imagine that we’ve just flipped a coin 1000 times, and come up with 1000 heads. The probability of this happening if the coin is a fair coin is around 1/10^300. And if it isn’t a fair coin? Well, we can’t say exactly, as there are many possible ways for the coin to be unfair, and each will come in with different probabilities. But, due to the large numbers at hand, anything but a completely unfair coin is highly unlikely to ever make it to 1000 heads in a row.

    So, then, we just have to ask the question: what is the prior probability we place on the coin being fair? Upon it only being capable of showing up heads? If, for example, we examine the coin and verify that it has both a heads side and a tails side, we gain confidence that it’s a fair coin and lose confidence that it’s unfair. If we measure its mass distribution and determine that it’s unlikely to prefer to land on one side or the other, then we again gain confidence that it’s a fair coin.

    All that said, because of the probabilities involved, it becomes exceedingly unlikely that we would ever come across a situation where all of the tests indicate that the coin should be fair, and then find 1000 flips in a row that result in heads. The chance of that happening is just so astronomically low as to not bother about.
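
    To put actual numbers on the coin example, here’s a toy sketch (comparing a fair coin against a maximally unfair, two-headed one; the prior is made up):

        from fractions import Fraction

        # Posterior probability that the coin is fair after n heads in a row,
        # given a prior that strongly favors fairness.
        def posterior_fair(n_heads, prior_fair=Fraction(999999, 1000000)):
            like_fair = Fraction(1, 2) ** n_heads   # P(n heads | fair)
            like_unfair = Fraction(1)               # P(n heads | two-headed)
            num = like_fair * prior_fair
            return num / (num + like_unfair * (1 - prior_fair))

        print(float(posterior_fair(10)))   # ~0.999: ten heads, still probably fair
        print(float(posterior_fair(30)))   # ~0.001: thirty heads, almost surely rigged
        # By 1000 heads the posterior for "fair" is utterly negligible.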

  17. Hans:
    I think Neil was hinting (maybe more than hinting) at the issue I was trying to raise. It becomes difficult to evaluate in any meaningful manner the statement “95% confidence in such and such” if in fact there is a possibility of many realizations of our universe. Since there are no known priors for the many parameters that could define the universe of multiverses, it becomes difficult to interpret confidence. That is why I was specifically asking about how the cosmological community accepts the idea of confidence levels when evaluating one theory against another or defining “new physics”.

    BTW, in radar detection theory Bayesian models are used frequently, but only when realistic priors are available; otherwise one ends up arguing about assumptions. Great theory if you have the data.

  18. Cecil,

    There are lots of ways to put priors on different models even in ignorance, though. Typically the best priors to use are those based upon Occam’s Razor: downweight theories that have more parameters. There are different ways of doing this, but the basic idea is just that we need greater statistical significance to gain confidence in a theory that has more parameters.
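
    As a concrete (if crude) stand-in for that idea, here’s a sketch using the Bayesian Information Criterion, which penalizes each extra parameter by ln(n); the numbers are invented for illustration:

        import math

        # BIC = k*ln(n) - 2*ln(L_max); lower is better. A model with more
        # parameters (k) must improve the fit enough to pay its Occam penalty.
        def bic(k_params, n_data, max_log_like):
            return k_params * math.log(n_data) - 2.0 * max_log_like

        # Invented numbers: 3 extra parameters buy 10 units of log-likelihood...
        print(bic(6, 10000, -5000.0))   # simpler model:  10055.3
        print(bic(9, 10000, -4990.0))   # richer model:   10062.9 -- still loses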

    In this case, though, I think the only interesting question is whether or not the correlated noise has something to do with it. Hans and collaborators have already demonstrated that correlated noise can replicate the effect, so it just remains to test the correlated noise with the WMAP analysis. It is also worrying that the apparent axis for this effect appears to coincide quite closely with the poles of the scanning strategy. The statistics are solid; the systematics need to be understood better.

  19. Jason:
    My question still is: At what confidence level is the cosmological community willing to assume new physics or prefer one theoretical model over another? My basic concern with cosmological physics is that there is only one universe, so no experiments can be duplicated, only measurements repeated by others. But in the case of the CMB there is only one data set, period. Trying to decide what is a statistical variation as opposed to a real effect may not be objective.

  20. (BTW any comments from any cognoscenti are appreciated.)

    Jason Dick at 9:12 pm:

    Neil,

    If what you said were true, then we couldn’t ever be confident about anything. Any experimental results that we ever obtain are always statistical in nature: there’s always a probability that we’re wrong, either due to statistical or systematic errors. The best that we can do, and what is done on any good analysis, is to place statistical limits upon any such results.

    Well Jason, you’ve formally contradicted yourself but confirmed exactly what I said before – you just don’t realize that I am working the “matter of principle” and you are referencing the matter of degree (talking past each other, not apparently a direct disagreement). First, I am right and you unwittingly acknowledge it: we can’t ever be sure, it’s a matter of degree. Sure, the chance of there being such a long run is tiny, but in degree not principle, and that makes it formally impossible to falsify in Popperian terms. We can be “confident” (in the loose informal sense) but not certain, nor can we even define a category boundary of a “reasonably certain” logical type, because as I said, we make judgment calls about how unlikely something is along a continuum and pick arbitrary pigeonholes thereby.

    But even then, in an infinite universe (or universes) there will be regions or sequences of grotesquely improbable events, and the problems I raised are germane. Or, is “probability” even possible to define in an infinite universe at all, given the incommensurable nature of relative proportions in infinite sets (i.e., the Hilbert Hotel problem etc.)? But I find it odd that I thus shouldn’t worry about the risk of not wearing a seat belt etc. because of the boundary condition that the universe is infinite, and thus contains infinite copies of me (or someone similar) wearing or not wearing seat belts, disallowing any finite frequentist mass comparison – yet if the universe had a volume of, say, 10^30,000 light years, this would all be conventionally meaningful instead. I bring this up partly since some critics use infinite statistical measure problems to fend off some of my arguments about the chance of laws and universe behavior having such and such conveniently anthropic or even predictable form etc.

    (REM also: those problems occur even in a very, very huge yet finite universe, as long as there’s plenty of space for extremely odd things, like unexpected discernible patterns of radioactive decay, etc., to be likely to occur.)

  21. Hans Kristian Eriksen

    Joe:

    No, unfortunately, COBE has much too low sensitivity and angular resolution to be relevant for this analysis. The effect is just about starting to become visible when considering angular scales down to ~2 degrees, and to get strong results, one needs ~0.5 degrees. COBE, on the other hand, had 7-degree resolution and much too high a noise level. So there don’t seem to be many alternatives around besides waiting for Planck, really, although it’s possible that galaxy catalogs like SDSS or 2dF could be relevant.

Comments are closed.
