WMAP results — cosmology makes sense!

I’ll follow Mark’s suggestion and fill in a bit about the new WMAP results. The WMAP satellite has been measuring temperature anisotropies and polarization signals from the cosmic microwave background, and has finally finished analyzing the data collected in their second and third years of running. (For a brief explanation of what the microwave background is, see the cosmology primer.) I just got back from a nice discussion led by Hiranya Peiris, who is a member of the WMAP team, and I can quickly summarize the major points as I see them.

WMAP spectrum

  • Here is the power spectrum: amount of anisotropy as a function of angular scale (really multipole moment l), with large scales on the left and smaller scales on the right. The major difference between this and the first-year release is that several points that used to not really fit the theoretical curve are now, with more data and better analysis, in excellent agreement with the predictions of the conventional LambdaCDM model. That’s a universe that is spatially flat and made of baryons, cold dark matter, and dark energy.
  • In particular, the octupole moment (l=3) is now in much better agreement than it used to be. The quadrupole moment (l=2), which is the largest scale on which you can make an observation (since a dipole anisotropy is inextricably mixed up with the Doppler effect from our motion through space), is still anomalously low.
  • The best-fit universe has approximately 4% baryons, 22% dark matter, and 74% dark energy, once you combine WMAP with data from other sources. The matter density is a tiny bit low, although including other data from weak lensing surveys brings it up closer to 30% total. All in all, nice consistency with what we already thought.
  • Perhaps the most intriguing result is that the scalar spectral index n is 0.95 ± 0.02 (see the formulas collected just after this list). This index characterizes how the amplitude of the primordial fluctuations depends on scale; if n=1, the amplitude is the same on all scales. Slightly less than one means that there is slightly less power on smaller scales. The reason this is intriguing is that, according to inflation, it’s quite likely that n is not exactly 1. Although we don’t have any strong competitors to inflation as a theory of initial conditions, the successful predictions of inflation have to date been somewhat “vanilla” — a flat universe, a flat perturbation spectrum. This slight deviation from perfect scale-free behavior is exactly what you would expect if inflation were true. The statistical significance isn’t what it could be quite yet, but it’s an encouraging sign.
  • A bonus, as explained to me by Risa: lower power on small scales (as implied by n<1) helps explain some of the problems with galaxies on small scales. If the primordial power is less, you expect fewer satellites and lower concentrations, which is what we actually observe.
  • You need some dark energy to fit the data, unless you think that the Hubble constant is 30 km/sec/Mpc (it’s really 72 ± 4) and the matter density parameter is 1.3 (it’s really 0.3). Yet more evidence that dark energy is really there.
  • The dark energy equation-of-state parameter w is a tiny bit greater than -1 with WMAP alone, but almost exactly -1 when other data are included. Still, the error bars are something like 0.1 at one sigma, so there is room for improvement there.
  • One interesting result from the first-year data was that reionization — in which hydrogen becomes ionized when the first stars in the universe light up — happened early, and the corresponding optical depth was large. It looks like this effect has lessened in the new data, but I’m not really an expert.
  • A lot of work went into understanding the polarization signals, which are dominated by stuff in our galaxy. WMAP detects polarization from the CMB itself, but so far only the kind you would expect to be induced by the density perturbations. There is another kind of polarization (“B-mode” rather than “E-mode”) which would be induced by gravitational waves produced during inflation. This signal is not yet seen, but that’s not really a surprise; the B-mode polarization is expected to be very small, and a lot of effort is going into designing clever new experiments that may someday detect it. In the meantime, WMAP puts limits on how big the B-modes can possibly be, which do provide some constraints on inflationary models.
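For readers who want the conventions behind a few of the quantities above, here is a brief summary using the standard definitions (an editorial aside; the notation is the common one, not lifted from the WMAP papers). The temperature anisotropy is expanded in spherical harmonics, and the power spectrum is the variance of the expansion coefficients:

\[
\frac{\Delta T}{T}(\hat{n}) = \sum_{\ell m} a_{\ell m}\, Y_{\ell m}(\hat{n}), \qquad C_\ell = \langle |a_{\ell m}|^2 \rangle .
\]

The scalar spectral index describes how the primordial curvature power depends on wavenumber,

\[
\Delta^2_{\mathcal{R}}(k) = A_s \left( \frac{k}{k_0} \right)^{n_s - 1},
\]

so n_s = 1 is the exactly scale-invariant (Harrison-Zeldovich) case and n_s < 1 means less power on small scales. The dark energy equation-of-state parameter relates pressure to energy density,

\[
p = w \rho, \qquad \rho \propto a^{-3(1+w)},
\]

so w = -1 (a cosmological constant) corresponds to an energy density that stays constant as the universe expands.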

Overall — our picture of the universe is hanging together. In 1998, when supernova studies first found evidence for the dark energy and the LambdaCDM model became the concordance cosmology, Science magazine declared it the “Breakthrough of the Year.” In 2003, when the first-year WMAP results verified that this model was on the right track, it was declared the breakthrough of the year again! Just because we hadn’t made a mistake the first time. I doubt that the third-year results will get this honor yet another time. But it’s nice to know that the overall paradigm is a comfortable fit to the universe we observe.

The reason why verifying a successful model is such a big deal is that the model itself — LambdaCDM with inflationary perturbations — is such an incredible extrapolation from everyday experience into the far reaches of space and time. When we’re talking about inflation, we’re dealing with the first 10⁻³⁵ seconds in the history of the universe. When we speak about dark matter and dark energy, we’re dealing with substances that are completely outside the very successful Standard Model of particle physics. These are dramatic ideas that need to be tested over and over again, and we’re going to keep looking for chinks in their armor until we’re satisfied beyond any reasonable doubt that we’re on the right track.

The next steps will involve both observations and better theories. Is n really less than 1? Is there any variation of n as a function of scale? Are there non-Gaussian features in the CMB? Is the dark energy varying? Are there tensor perturbations from gravitational waves produced during inflation? What caused inflation, and what are the dark matter and dark energy?

Stay tuned!

More discussion by Steinn Sigurðsson (and here), Phil Plait, Jacques Distler, CosmoCoffee. In the New York Times, Dennis Overbye invokes the name of my previous blog. More pithy quotes at Nature online and Sky & Telescope.


62 thoughts on “WMAP results — cosmology makes sense!”

  1. Pingback: Zooglea » universo

  2. Haelfix,

    my understanding of the ekpyrotic model is that at present it is not a viable alternative to inflation, and not for experimental reasons but for a more basic one: the equations contain a singularity, which is still not resolved, I believe. And this precludes the possibility of computing the spectral index.

    So, the message is “don’t bother with that, unless somebody proves that the equations can work”. For the cyclic case I don’t know.

    (Of course if some unknown physicists had proposed the very same model, nobody would have paid attention to it….)

  3. Hiranya,

    if I look at the abstract of the WMAP paper on implications for cosmology I find
    n_S=0.951+0.015-0.019

    It means that it is a 3 sigma detection of n

  4. (my last message seems to be incomplete; sorry for posting it again)

    Hiranya,

    if I look at the abstract of the WMAP paper on implications for cosmology I find
    n_S=0.951+0.015-0.019

    It means that it is a 3 sigma detection of n_S.

    Is that a good interpretation (or should I worry about systematics)?
    Is there any prior that can make the detection weaker if relaxed?

    thanks!
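    A quick back-of-the-envelope reading of those numbers (an editorial note, not part of the original comment): using the quoted asymmetric errors,

    \[
    \frac{1 - 0.951}{0.015} \approx 3.3, \qquad \frac{1 - 0.951}{0.019} \approx 2.6,
    \]

    so the preference for n_S < 1 is roughly 2.5 to 3 sigma, depending on which side of the error bar is used, before any of the systematic caveats raised in the reply below.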

  5. Haelfix #25: As far as I understand, the predictions for the scalar spectral index in the Ekpyrotic model are controversial. If the prediction is not robust, it’s not meaningful to say it’s ruled in or out. If primordial tensors are detected, that will indeed rule out these models. And our current constraints on primordial non-Gaussianity are too weak to say anything about inflation *or* the Ekpyrotic model – they are both consistent.

    Arnold #28: On face value yes, but this error bar is somewhat sensitive to (small) systematic uncertainties, for example in marginalizing over the SZ effect, and in the way the beam errors are propagated in the likelihood function. However, the HZ model is indeed disfavoured by the data.

  6. #30: clarification – sorry, the HZ model is the Harrison-Zeldovich model, i.e. the exactly scale-invariant spectrum.

  7. This is hyped up to get media attention: the CBR from 300,000 years after the BB says nothing about the first few seconds, unless you believe their vague claims that the polarisation tells us something about the way the early inflation occurred. That might be true, but it is very indirect.

    I do agree with Sean on CV that n = 0.95 may be an important result from this analysis. I’d say it’s the only useful result. But the interpretation of the universe as 4% baryons, 22% dark matter and 74% dark energy is a nice fit to the existing LambdaCDM epicycle theory from 1998. The new results on this are not too different from previous empirical data, but this ‘nice consistency’ is a euphemism for ‘useless’.

    WMAP has produced more accurate spectral data of the fluctuations, but that doesn’t prove the ad hoc cosmological interpretation which was force-fitted to the data in 1998. Of course the new data fit the same ad hoc model; unless there was a significant error in the earlier data, they would. Ptolemy’s universe, once fiddled, continued to model things, with only occasional ‘tweaks’, for centuries. This doesn’t mean you should rejoice.

    Dark matter, dark energy, and the tiny cosmological constant describing the dark energy, remain massive epicycles in current cosmology. The Standard Model has not been extended to include dark matter and energy. It is not hard science; it’s a very indirect interpretation of the data. I’ve got a correct prediction, made without a cosmological constant, published in ’96, years before the ad hoc Lambda CDM model. Lunsford’s unification of EM and GR also dismisses the CC.

  8. Hiranya, I’m afraid I don’t have too many detailed ideas about how to do a blind analysis for CMB, since I’m quite hazy on how the analyses are done in detail. The experimenters themselves are the best people to figure this out. I’ve been asking CMB types for years, including WMAP team members, about this, but no one has taken up the challenge, which I find a little disappointing. But I will offer some ideas below.

    I’m afraid that fitting multiple models doesn’t do much to reduce the potential for bias, if the people doing the fit know which model is the favoured one. If you know that LCDM is the favoured model, then you might potentially be biasing analysis choices to improve its fit, even if you are also fitting other models you are less interested in.

    The best I can offer for doing a blind CMB analysis is this:

    1. Ensure that the business of generating the temperature and polarization maps is completely separate from the business of doing cosmological fits. In principle the maps should be completely finished and frozen before anyone even thinks about fitting the data to a cosmological model. Once even a single fit is done, then you cannot go back and change anything in the map generation. This kind of strict segregation ensures that experimenters can’t go back and do something like twiddle with foreground subtraction in order to make an unruly data point come into closer agreement.
    2. A common technique we use in HEP is to include hidden “offsets” in our fits. For example, you could get a colleague who is not involved in the analysis to code up a secret offset that gets added to n in the cosmological fits. Then when you run your fit, the analyst isn’t looking directly at n, but rather the code is outputting n+x, where x is some unknown offset that the analyst knows nothing about. Once the fit is finalized and you’ve written the entire paper except for the conclusion section, you reveal the secret value of x and subtract it from the fitted value of ‘n’ to get the true value. I really recommend doing this—it’s trivial to do, and would completely eliminate any worries that the analysis was subconsciously being tweaked to favour some particular value of n, such as 0.95 or 1.0. In fact, I can’t see any reason why you WOULDN’T include a secret offset in the fit, since it’s so easy to do.
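    A minimal sketch of the hidden-offset idea, in Python (an illustration added here, not code from any actual CMB pipeline; the toy power-law fit, the file name, and all parameter choices are stand-ins for a real likelihood analysis):

    ```python
    import numpy as np

    # --- Blinding stage: run once by someone outside the analysis ---
    rng = np.random.default_rng()
    blind_offset = rng.uniform(-0.05, 0.05)    # secret shift added to n_s
    np.save("blind_offset.npy", blind_offset)  # stored, not inspected until unblinding

    def report_blinded(ns_fitted):
        """Return the blinded value that the analysts are allowed to look at."""
        return ns_fitted + blind_offset

    # --- Analysis stage: analysts only ever see the blinded number ---
    def fit_spectral_index(k, log_power, weights, k0=0.05):
        """Toy weighted least-squares fit of log P(k) = const + (n_s - 1) log(k/k0)."""
        x = np.log(k / k0)
        xm = np.average(x, weights=weights)
        ym = np.average(log_power, weights=weights)
        slope = np.sum(weights * (x - xm) * (log_power - ym)) / np.sum(weights * (x - xm) ** 2)
        return 1.0 + slope

    # Fake data, only so the sketch runs end to end.
    k = np.logspace(-3, 0, 50)
    true_ns = 0.96
    log_power = np.log(2e-9) + (true_ns - 1) * np.log(k / 0.05) + 0.01 * rng.standard_normal(50)
    weights = np.full(50, 1.0 / 0.01**2)

    ns_fit = fit_spectral_index(k, log_power, weights)
    print("blinded n_s:", report_blinded(ns_fit))   # value quoted while the analysis is tuned

    # --- Unblinding stage: only after all analysis choices are frozen ---
    offset = np.load("blind_offset.npy")
    print("unblinded n_s:", report_blinded(ns_fit) - offset)
    ```

    The point is simply that everything the analysts look at goes through report_blinded(), so no choice made while tuning the analysis can be steered, even subconsciously, toward a preferred value of n_s.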

    Some parameters are easier to hide than others by using a secret offset, of course. An experienced CMB hand can probably read off Omega=1 from the first acoustic peak just by looking at the power spectrum without even doing a fit. But there are probably enough other parameters that could be hidden from the analysts in this way to make it worth doing.

    An instructive example of why HEP has gone to using blind analyses can be seen at:
    http://pdg.lbl.gov/2005/reviews/historyrpp.pdf

    This page shows the historical trends for measurements of many particle properties. Look, for example, at the middle plot in the second row. See how the measured values jump discontinuously and by amounts far larger than the quoted uncertainties. What you’re seeing there is probably a case where successive experiments were each biased towards getting the same result as the previous experiment, until someone came along and did an experiment with such a different value that a seismic shift happened, and everyone started biasing to a different value! 😉

  9. Sean,

    Googling the Planck Satellite shows a projected launch date of 2007. Is this still about right? Also how long after launch will data collection and analysis occur? Ballpark is fine.

    Thanks,

    Elliot

  10. So is it safe to say the l=2 anomaly is curious but relatively unimportant, i.e., that it’s a deviation at an angular scale where cosmic-variance uncertainties weaken the model prediction so much that it’s just not very interesting?

  11. Biologist, I was wondering the same. You could conceivably draw the curve quite differently at that end of the scale to fit the data.

  12. Well, I’ve absolutely no personal axe to grind on the matter, whatever the answer, and the only reason I pester about it myself relates to what appears to be among the least likely implications of those so-called anomalies, namely that the universe might have some detectable “non-trivial topology”.

  13. Pingback: It’s Equal but It’s Different » Blog Archive » Science friday!

  14. Scott #35: Thanks for the detailed comments! These ideas are very interesting. Some are actually very easy to implement, like the suggestion of offsets. Others, like keeping the parameter analysis and the data analysis completely separate, are already partially in place, but in practice there is some overlap. I can tell you that parameter analysis people would *love* for the maps, C_l’s, and errors to be frozen before doing parameter runs! However, people continue to make improvements right up to the wire. We don’t have the gigantic collaborations and (wo)man power of experimental particle physics. We are a small group of people. However, as I said, everything is public, from the timestream to the final likelihood analysis, and others can (and will) try different analysis approaches on our data.

  15. You need a three dimensional drumhead to relate sound to the WMAP. More on name.

    In the case of pushing perspective, I like referring to the Chladni plate to help one see further beyond the measures indicated. It changes perspective about how we might see the universe using WMAP.

    Will it help? I don’t really know.:)

  16. It’s not safe to say that the l=2 anomaly is unimportant. We just don’t know — maybe it’s just an accident, maybe it’s an indication of something super-significant. Until we have some notion of what that thing might be, and a way to independently verify it, “we don’t know” is the best we can do for the moment.

  17. Whew! I was starting to think I’d asked an offensive question or something! Thanks for the response, Dr. Carroll.

  18. Thanks very much, Sean!

    Hiranya – sorry to harp on #14 – as far as I know, cosmic variance ~ 1/sqrt(l(l+1)) – if this is true, then the width of the grey area should be less at l=200 than it is at l=100, but it actually seems to be thicker! Is this an artifact of the way the plots are made, or am I missing something? Again, sorry: I am a newbie!

    Savya
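    For reference (an editorial note, not part of the exchange above): the standard per-multipole cosmic-variance uncertainty is

    \[
    \frac{\Delta C_\ell}{C_\ell} = \sqrt{\frac{2}{2\ell + 1}},
    \]

    which falls off roughly as 1/sqrt(l), in line with the expectation in the question. How wide the shaded band looks on a plot of l(l+1)C_l/2π also depends on how the multipoles are binned and on the instrument noise at high l, so the visual width of the band is not a direct readout of this formula alone.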

  19. What kind of limits (if any) on baryon- dark matter interaction are required for big bang nucleosynthesis to come out right?

  20. Plato, see here:

    Constraining Strong Baryon-Dark Matter Interactions with Primordial Nucleosynthesis and Cosmic Rays

    Self-interacting dark matter (SIDM) was introduced by Spergel & Steinhardt to address possible discrepancies between collisionless dark matter simulations and observations on scales of less than 1 Mpc. We examine the case in which dark matter particles not only have strong self-interactions but also have strong interactions with baryons. The presence of such interactions will have direct implications for nuclear and particle astrophysics. Among these are a change in the predicted abundances from big bang nucleosynthesis (BBN) and the flux of gamma-rays produced by the decay of neutral pions which originate in collisions between dark matter and Galactic cosmic rays (CR). From these effects we constrain the strength of the baryon–dark matter interactions through the ratio of baryon–dark matter interaction cross section to dark matter mass, s. We find that BBN places a weak upper limit to this ratio […]

  21. Can I say something about general relativity, spacetime and geometry here? (I left this same comment under “general relativity as a tool” as I attempt to make my way around so please bear with me, sorta new.)
    Has anyone noticed that general relativity does not jibe with vortex dynamics, even though the sun and planets follow the laws of vortex dynamics? We all know that the planets are orbiting the sun in a counterclockwise fashion with the sun as a focus. Consider any two of the planets as Mass A and Mass B (just two to simplify this, but any number of masses will do). Vortex dynamics says that the two masses orbit around a focus because they are caught in each other’s flow fields, and the focus is the RESULT of them being in each other’s flow fields. If you were to take away the two rotating masses then the focus between them would also disappear; in fact, the focus would not exist in the first place without the two orbiting masses that create it.
    General relativity says the sun bends spacetime and gravity is the result, but according to the actual engineering law covering the motion of the sun and the planets, the PLANETS (masses A and B) create gravity because they are caught in each other’s flow fields, and the focus between them, the sun, is the RESULT of this mutual attraction between the planets. And since vortex dynamics is LAW and general relativity is THEORY, this should be taken as a serious flaw in how we view the solar system.
    James Vanyo’s book ROTATING FLUIDS IN ENGINEERING AND SCIENCE has a great chapter on vortex dynamics.
    BK (reached at a cool little lady’s email joanbayles@hotmail.com)
    I could go on with how vortex dynamics correlates with superposition and entanglement but I’ll wait to see if anyone’s interested, lest I overstay my welcome.

  22. In response to Scott’s comments about the power spectrum and foreground removal (#35).

    We do freeze the power spectrum before we do the model fits. We basically spent 2 years modifying the pipeline so that we could treat the noise properly and pass various null tests and the self-consistency tests. We didn’t run any serious cosmological models until about 3 months ago.

    The foreground model for the temperature was indeed fixed before the power spectrum was computed. The new foreground model has only 2 free parameters; the big change was switching from using the Haslam 408 MHz map to model the foreground to using an internal combination of WMAP data (22 GHz–30 GHz). When we were testing the foreground models, we used the difference between the 40 and 60 GHz power spectra as a test of the foreground model. Since this difference contains no CMB signal, this fitting scheme is unbiased.
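    To spell out why a difference of frequency maps contains no CMB signal (a standard argument restated here for readers, ignoring noise): in thermodynamic temperature units the CMB anisotropy is the same at every frequency, while the Galactic foregrounds are not, so

    \[
    T_{40}(\hat{n}) = T_{\mathrm{CMB}}(\hat{n}) + f_{40}(\hat{n}), \qquad
    T_{60}(\hat{n}) = T_{\mathrm{CMB}}(\hat{n}) + f_{60}(\hat{n}),
    \qquad\Rightarrow\qquad
    T_{40} - T_{60} = f_{40} - f_{60},
    \]

    and the power spectrum of the difference map therefore tests the foreground model independently of the cosmological signal.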

    The shift in l=3 and l=5 is due to switching from using the MASTER algorithm to using Maximum Likelihood. George Efstathiou wrote a nice paper discussing this issue, and we were convinced to use the ML analysis on the low l’s.

    The improvement in chi-squared was due to several effects:

    – better beams: these were fit to Jupiter and had no free parameters to “tweak”

    – using smaller pixels in the map making

    – an improved foreground model

    I should note that we also have done blind tests on model fitting.

    When we started this project, I never expected the data to fit the model (I still dislike the cosmological constant), but we have to present what we find.

