Galaxies That Are Too Big To Fail, But Fail Anyway

Dark matter exists, but there is still a lot we don’t know about it. Presumably it’s some kind of particle, but we don’t know how massive it is, what forces it interacts with, or how it was produced. On the other hand, there’s actually a lot we do know about dark matter. We know how much of it there is; we know roughly where it is; we know that it’s “cold,” meaning that the average particle’s velocity is much less than the speed of light; and we know that dark matter particles don’t interact very strongly with each other. That’s quite a bit of knowledge, when you think about it.

Fortunately, astronomers are pushing forward to study how dark matter is distributed through the universe and how it behaves there, and the results are interesting. We start with a very basic idea: that dark matter is cold and completely non-interacting, or at least has self-interactions (scatterings of dark matter particles off each other) that are too weak to make any noticeable difference. This is a well-defined and predictive model: ΛCDM, which includes the cosmological constant (Λ) as well as the cold dark matter (CDM). We can compare astronomical observations to ΛCDM predictions to see if we’re on the right track.

At first blush, we are very much on the right track. Over and over again, new observations come in that match the predictions of ΛCDM. But there are still a few anomalies that bug us, especially on relatively small (galaxy-sized) scales.

One such anomaly is the “too big to fail” problem. The idea here is that we can use ΛCDM to make quantitative predictions concerning how many galaxies there should be with different masses. For example, the Milky Way is quite a big galaxy, and it has smaller satellites like the Magellanic Clouds. In ΛCDM we can predict how many such satellites there should be, and how massive they should be. For a long time we’ve known that the actual number of satellites we observe is quite a bit smaller than the number predicted — that’s the “missing satellites” problem. But this has a possible solution: we only observe satellite galaxies by seeing stars and gas in them, and maybe the halos of dark matter that would ordinarily support such galaxies get stripped of their stars and gas by interacting with the host galaxy. The too big to fail problem tries to sharpen the issue, by pointing out that some of the predicted galaxies are just so massive that there’s no way they could not have visible stars. Or, put another way: the Milky Way does have some satellites, as do other galaxies; but when we examine these smaller galaxies, they seem to have a lot less dark matter than the simulations would predict.

Still, any time you are concentrating on galaxies that are satellites of other galaxies, you rightly worry that complicated interactions between messy atoms and photons are getting in the way of the pristine elegance of the non-interacting dark matter. So we’d like to check that this purported problem exists even out “in the field,” with lonely galaxies far away from big monsters like the Milky Way.

A new paper claims that yes, there is a too-big-to-fail problem even for galaxies in the field.

Is there a “too big to fail” problem in the field?
Emmanouil Papastergis, Riccardo Giovanelli, Martha P. Haynes, Francesco Shankar

We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the number density of galaxies as a function of their rotational velocity, Vrot,HI (as inferred from the width of their 21cm emission line). Based on the measured velocity function we statistically connect galaxies with their host halos, via abundance matching. In a LCDM cosmology, low-velocity galaxies are expected to be hosted by halos that are significantly more massive than indicated by the measured galactic velocity; allowing lower mass halos to host ALFALFA galaxies would result in a vast overestimate of their number counts. We then seek observational verification of this predicted trend, by analyzing the kinematics of a literature sample of field dwarf galaxies. We find that galaxies with Vrot,HI<25 km/s are kinematically incompatible with their predicted LCDM host halos, in the sense that hosts are too massive to be accommodated within the measured galactic rotation curves. This issue is analogous to the "too big to fail" problem faced by the bright satellites of the Milky Way, but here it concerns extreme dwarf galaxies in the field. Consequently, solutions based on satellite-specific processes are not applicable in this context. Our result confirms the findings of previous studies based on optical survey data, and addresses a number of observational systematics present in these works. Furthermore, we point out the assumptions and uncertainties that could strongly affect our conclusions. We show that the two most important among them, namely baryonic effects on the abundances and rotation curves of halos, do not seem capable of resolving the reported discrepancy.
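The “abundance matching” step mentioned in the abstract can be illustrated with a toy sketch: rank-order halos by mass and galaxies by rotation velocity, then pair them so that cumulative number densities agree. Every number below is invented purely for illustration; the real analysis uses the measured ALFALFA velocity function and simulated halo mass functions.

```python
import numpy as np

# Toy abundance matching: assign the i-th most massive halo to the
# i-th fastest-rotating galaxy, so cumulative number densities match.
# All values are synthetic, for illustration only.
rng = np.random.default_rng(0)

halo_masses = np.sort(rng.lognormal(mean=11, sigma=1.0, size=1000))[::-1]   # "halo masses"
galaxy_vrot = np.sort(rng.lognormal(mean=4, sigma=0.5, size=1000))[::-1]    # "rotation velocities"

# After sorting both in descending order, index i pairs the i-th ranked
# halo with the i-th ranked galaxy: a monotonic halo-galaxy relation.
matched = list(zip(halo_masses, galaxy_vrot))

# The implied relation is monotonic by construction:
assert all(matched[i][0] >= matched[i + 1][0] for i in range(len(matched) - 1))
assert all(matched[i][1] >= matched[i + 1][1] for i in range(len(matched) - 1))
```

Inverting this monotonic pairing is what lets one translate “how many galaxies at each velocity” into “which halo mass should host galaxies of each velocity,” which is the prediction being tested against the measured rotation curves.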

Here is the money plot from the paper:

[Plot from the paper: observed HI rotation velocity vs. maximum circular velocity of the host halo]

The horizontal axis is the maximum circular velocity, basically telling us the mass of the halo; the vertical axis is the observed velocity of hydrogen in the galaxy. The blue line is the prediction from ΛCDM, while the dots are observed galaxies. Now, you might think that the blue line is just a very crappy fit to the data overall. But that’s okay; the points represent upper limits in the horizontal direction, so points that lie below/to the right of the curve are fine. It’s a statistical prediction: ΛCDM is predicting how many galaxies we have at each mass, even if we don’t think we can confidently measure the mass of each individual galaxy. What we see, however, is that there are a bunch of points in the bottom left corner that are above the line. ΛCDM predicts that even the smallest galaxies in this sample should still be relatively massive (have a lot of dark matter), but that’s not what we see.

If it holds up, this result is really intriguing. ΛCDM is a nice, simple starting point for a theory of dark matter, but it’s also kind of boring. From a physicist’s point of view, it would be much more fun if dark matter particles interacted noticeably with each other. We have plenty of ideas, including some of my favorites like dark photons and dark atoms. It is very tempting to think that observed deviations from the predictions of ΛCDM are due to some interesting new physics in the dark sector.

Which is why, of course, we should be especially skeptical. Always train your doubt most strongly on those ideas that you really want to be true. Fortunately there is plenty more to be done in terms of understanding the distribution of galaxies and dark matter, so this is a very solvable problem — and a great opportunity for learning something profound about most of the matter in the universe.

Posted in arxiv, Science | 18 Comments

Particle Fever on iTunes

The documentary film Particle Fever, directed by Mark Levinson and produced by physicist David Kaplan, opened a while back and has been playing on and off in various big cities. But it’s still been out of reach for many people who don’t happen to be lucky enough to live near a theater enlightened enough to play it. No more!

The movie has just been released on iTunes, so now almost everyone can watch it from the comfort of their own computer. And watch it you should — it’s a fascinating and enlightening glimpse into the world of modern particle physics, focusing on the Large Hadron Collider and the discovery of the Higgs boson. That’s not just my bias talking, either — the film is rated 95% “fresh” on RottenTomatoes.com, which represents an amazingly strong critical consensus. (Full disclosure: I’m not in it, and I had nothing to do with making it.)

Huge kudos to Mark (who went to grad school in physics before becoming a filmmaker) and David (who did a brief stint as an undergraduate film major before switching to physics) for pulling this off. It’s great for the public appreciation of science, but it’s also just an extremely enjoyable movie, no matter what your background is. Watch it with a friend!

Posted in Science and the Media | 13 Comments

Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct

I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here’s an explanatory video, an excerpt from From Eternity to Here, and slides from a talk. But I don’t think I’ve ever explained as persuasively as possible why I think it’s the right approach. So that’s what I’m going to try to do here. Although to be honest right off the bat, I’m actually going to tackle a slightly easier problem: explaining why the many-worlds approach is not completely insane, and indeed quite natural. The harder part is explaining why it actually works, which I’ll get to in another post.

Any discussion of Everettian quantum mechanics (“EQM”) comes with the baggage of pre-conceived notions. People have heard of it before, and have instinctive reactions to it, in a way that they don’t to (for example) effective field theory. Hell, there is even an app, universe splitter, that lets you create new universes from your iPhone. (Seriously.) So we need to start by separating the silly objections to EQM from the serious worries.

The basic silly objection is that EQM postulates too many universes. In quantum mechanics, we can’t deterministically predict the outcomes of measurements. In EQM, that is dealt with by saying that every measurement outcome “happens,” but each in a different “universe” or “world.” Say we think of Schrödinger’s Cat: a sealed box inside of which we have a cat in a quantum superposition of “awake” and “asleep.” (No reason to kill the cat unnecessarily.) Textbook quantum mechanics says that opening the box and observing the cat “collapses the wave function” into one of two possible measurement outcomes, awake or asleep. Everett, by contrast, says that the universe splits in two: in one the cat is awake, and in the other the cat is asleep. Once split, the universes go their own ways, never to interact with each other again.

[Figure: branching wave function]

And to many people, that just seems like too much. Why, this objection goes, would you ever think of inventing a huge — perhaps infinite! — number of different universes, just to describe the simple act of quantum measurement? It might be puzzling, but it’s no reason to lose all anchor to reality.

To see why objections along these lines are wrong-headed, let’s first think about classical mechanics rather than quantum mechanics. And let’s start with one universe: some collection of particles and fields and what have you, in some particular arrangement in space. Classical mechanics describes such a universe as a point in phase space — the collection of all positions and velocities of each particle or field.

What if, for some perverse reason, we wanted to describe two copies of such a universe (perhaps with some tiny difference between them, like an awake cat rather than a sleeping one)? We would have to double the size of phase space — create a mathematical structure that is large enough to describe both universes at once. In classical mechanics, then, it’s quite a bit of work to accommodate extra universes, and you better have a good reason to justify putting in that work. (Inflationary cosmology seems to do it, by implicitly assuming that phase space is already infinitely big.)

That is not what happens in quantum mechanics. The capacity for describing multiple universes is automatically there. We don’t have to add anything.
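The contrast can be put in symbols (a schematic of the argument, not anything from the original post). Classically, describing two universes requires doubling the state space to a product; quantum mechanically, a superposition of two “worlds” is already a vector in the same Hilbert space:

```latex
% Classical: two universes require an enlarged (product) phase space
(x, p) \in \Gamma
\quad\longrightarrow\quad
\big( (x_1, p_1), (x_2, p_2) \big) \in \Gamma \times \Gamma

% Quantum: a superposition of two "worlds" lives in the original Hilbert space
|\Psi\rangle \;=\; \alpha\, |\text{world}_1\rangle \;+\; \beta\, |\text{world}_2\rangle
\;\in\; \mathcal{H}
```

No new structure has to be bolted on in the quantum case; the linearity of Hilbert space provides the room for free.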

We can state this with such confidence because of the fundamental feature of quantum mechanics: the existence of superpositions of different possible measurement outcomes. In classical mechanics, we have certain definite possible states, all of which are directly observable. It will be important for what comes later that the system we consider is microscopic, so let’s consider a spinning particle that can have spin-up or spin-down. (It is directly analogous to Schrödinger’s cat: cat=particle, awake=spin-up, asleep=spin-down.) Classically, the possible states are

“spin is up”

or

“spin is down”.

Quantum mechanics says that the state of the particle can be a superposition of both possible measurement outcomes. It’s not that we don’t know whether the spin is up or down; it’s that it’s really in a superposition of both possibilities, at least until we observe it. We can denote such a state like this:

(“spin is up” + “spin is down”).

While classical states are points in phase space, quantum states are “wave functions” that live in something called Hilbert space. Hilbert space is very big — as we will see, it has room for lots of stuff.

To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the “apparatus.”

In any formulation of quantum mechanics, the apparatus starts in a “ready” state, which is a way of saying “it hasn’t yet looked at the thing it’s going to observe” (i.e., the particle). More specifically, the apparatus is not entangled with the particle; their two states are independent of each other, so the quantum state of the particle+apparatus system starts out as a simple product of the two.
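In standard ket notation (a sketch of the usual textbook setup, not a quote from the post), that unentangled starting state is a product:

```latex
% Apparatus in its "ready" state, particle in a superposition;
% the product (tensor) form encodes the absence of entanglement.
|\text{apparatus: ready}\rangle \,\otimes\,
\frac{1}{\sqrt{2}} \big(\, |\text{spin up}\rangle + |\text{spin down}\rangle \,\big)
```

Measurement then entangles the two factors, producing one term, or “branch,” for each possible outcome.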

Posted in Philosophy, Science | 235 Comments

Quantum Mechanics Open Course from MIT

Kids today don’t know how good they have it. Back when I was learning quantum mechanics, the process involved steps like “going to lectures.” Not only did that require physical movement from the comfort of one’s home to dilapidated lecture halls, but — get this — you actually had to be there at some pre-arranged time! Often early in the morning.

These days, all you have to do is fire up the YouTube and watch lectures on your own time. MIT has just released an entire undergraduate quantum course, lovingly titled “8.04” because that’s how MIT rolls. The prof is Allan Adams, who is generally a fantastic lecturer — so I suspect these are really good, even though I haven’t actually watched them all myself. Here’s the first lecture, “Introduction to Superposition.”

Allan’s approach in this video is actually based on the first two chapters of Quantum Mechanics and Experience by philosopher David Albert. I’m sure this will be very disconcerting to the philosophy-skeptics haunting the comment section of the previous post.

This is just one of many great physics courses online; I’ve previously noted Lenny Susskind’s GR course. But, being largely beyond my course-taking days myself, I haven’t really kept track. Feel free to suggest your favorites in the comments.

Posted in Science | 16 Comments

Physicists Should Stop Saying Silly Things about Philosophy

The last few years have seen a number of prominent scientists step up to microphones and belittle the value of philosophy. Stephen Hawking, Lawrence Krauss, and Neil deGrasse Tyson are well-known examples. To redress the balance a bit, philosopher of physics Wayne Myrvold has asked some physicists to explain why talking to philosophers has actually been useful to them. I was one of the respondents, and you can read my entry at the Rotman Institute blog. I was going to cross-post my response here, but instead let me try to say the same thing in different words.

Roughly speaking, physicists tend to have three different kinds of lazy critiques of philosophy: one that is totally dopey, one that is frustratingly annoying, and one that is deeply depressing.

  • “Philosophy tries to understand the universe by pure thought, without collecting experimental data.”

This is the totally dopey criticism. Yes, most philosophers do not actually go out and collect data (although there are exceptions). But it makes no sense to jump right from there to the accusation that philosophy completely ignores the empirical information we have collected about the world. When science (or common-sense observation) reveals something interesting and important about the world, philosophers obviously take it into account. (Aside: of course there are bad philosophers, who do all sorts of stupid things, just as there are bad practitioners of every field. Let’s concentrate on the good ones, of whom there are plenty.)

Philosophers do, indeed, tend to think a lot. This is not a bad thing. All of scientific practice involves some degree of “pure thought.” Philosophers are, by their nature, more interested in foundational questions where the latest wrinkle in the data is of less importance than it would be to a model-building phenomenologist. But at its best, the practice of philosophy of physics is continuous with the practice of physics itself. Many of the best philosophers of physics were trained as physicists, and eventually realized that the problems they cared most about weren’t valued in physics departments, so they switched to philosophy. But those problems — the basic nature of the ultimate architecture of reality at its deepest levels — are just physics problems, really. And some amount of rigorous thought is necessary to make any progress on them. Shutting up and calculating isn’t good enough.

  • “Philosophy is completely useless to the everyday job of a working physicist.”

Now we have the frustratingly annoying critique. Because: duh. If your criterion for “being interesting or important” comes down to “is useful to me in my work,” you’re going to be leading a fairly intellectually impoverished existence. Nobody denies that the vast majority of physics gets by perfectly well without any input from philosophy at all. (“We need to calculate this loop integral! Quick, get me a philosopher!”) But it also gets by without input from biology, and history, and literature. Philosophy is interesting because of its intrinsic interest, not because it’s a handmaiden to physics. I think that philosophers themselves sometimes get too defensive about this, trying to come up with reasons why philosophy is useful to physics. Who cares?

Nevertheless, there are some physics questions where philosophical input actually is useful. Foundational questions, such as the quantum measurement problem, the arrow of time, the nature of probability, and so on. Again, a huge majority of working physicists don’t ever worry about these problems. But some of us do! And frankly, if more physicists who wrote in these areas would make the effort to talk to philosophers, they would save themselves from making a lot of simple mistakes.

  • “Philosophers care too much about deep-sounding meta-questions, instead of sticking to what can be observed and calculated.”

Finally, the deeply depressing critique. Here we see the unfortunate consequence of a lifetime spent in an academic/educational system that is focused on taking ambitious dreams and crushing them into easily-quantified units of productive work. The idea is apparently that developing a new technique for calculating a certain wave function is an honorable enterprise worthy of support, while trying to understand what wave functions actually are and how they capture reality is a boring waste of time. I suspect that a substantial majority of physicists who use quantum mechanics in their everyday work are uninterested in or downright hostile to attempts to understand the quantum measurement problem.

This makes me sad. I don’t know about all those other folks, but personally I did not fall in love with science as a kid because I was swept up in the romance of finding slightly more efficient calculational techniques. Don’t get me wrong — finding more efficient calculational techniques is crucially important, and I cheerfully do it myself when I think I might have something to contribute. But it’s not the point — it’s a step along the way to the point.

The point, I take it, is to understand how nature works. Part of that is knowing how to do calculations, but another part is asking deep questions about what it all means. That’s what got me interested in science, anyway. And part of that task is understanding the foundational aspects of our physical picture of the world, digging deeply into issues that go well beyond merely being able to calculate things. It’s a shame that so many physicists don’t see how good philosophy of science can contribute to this quest. The universe is much bigger than we are and stranger than we tend to imagine, and I for one welcome all the help we can get in trying to figure it out.

Posted in Philosophy, Science | 225 Comments

Quantum Mechanics In Your Face

(Title shamelessly stolen from Sidney Coleman.) I’m back after a bit of insane traveling, looking forward to resuming regular blogging next week. Someone has to weigh in about BICEP, right?

In the meantime, here’s a video to keep you occupied: a recording of the World Science Festival panel on quantum mechanics I had previously mentioned.

David Albert is defending dynamical collapse formulations, Sheldon Goldstein stands up for hidden variables, I am promoting the many-worlds formulation, and Rüdiger Schack is in favor of QBism, a psi-epistemic approach. Brian Greene is the moderator, and has brought along some fancy animations. It’s an hour and a half of quantal goodness, so settle in for quite a ride.

Just as the panel was happening, my first official forays into quantum foundations were appearing on the arxiv: a paper with Charles Sebens on deriving the Born Rule in Everettian quantum mechanics, as well as a shorter conference proceeding.

No time to delve into the details here, but I promise to do so soon!

Posted in Science | 14 Comments

The Common Core: How Bill Gates Changed America

James Joyner points us to a Washington Post article on how Bill Gates somewhat single-handedly pulled off a dramatic restructuring of American public education, via promoting the Common Core standards. There is much that is fascinating here, including the fact that a billionaire with a plan can get things done in our fractured Republic a lot more easily than our actual governments (plural because education is still largely a local matter) ever could. Apparently, Gates got a pitch in 2008 from a pair of education reformers who wanted to see uniform standards for US schools. Gates thought about it, then jumped in with both feet (and a vast philanthropic and lobbying apparatus). Within two years, 45 states and the District of Columbia had fully adopted the Common Core Standards. The idea enjoyed bipartisan support; only quite recently, when members of the Tea Party realized that all this happened on Obama’s watch, have Republicans taken up the fight against it.

Personally, I’m completely in favor of national curricula and standards. Indeed, I’d like to go much further, and nationalize the schools, so that public spending on students in rural Louisiana is just as high as that in wealthy suburbs in the Northeast. I’m also not dead set against swift action by small groups of people who are willing to get things done, rather than sit around for decades trading white papers and town hall meetings. (I even helped a bit with such non-democratic action myself, and suffered the attendant abuse with stoic calm.)

What I don’t know, since I simply am completely unfamiliar with the details, is whether the actual Common Core initiative (as opposed to the general idea of a common curriculum) is a good idea. I know that some people are very much against it — so much so that it’s difficult to find actual information about it, since emotions run very high, and you are more likely to find either rampant boosterism or strident criticism. Of course you can look up what the standards are, both in English Language Arts and in Mathematics (there don’t seem to be standards for science, history, or social studies). But what you read is so vague as to be pretty useless. For example, the winningly-named “CCSS.ELA-LITERACY.CCRA.W.1” standard reads

Write arguments to support claims in an analysis of substantive topics or texts using valid reasoning and relevant and sufficient evidence.

That sounds like a good idea! But it doesn’t translate unambiguously into something teachable. The devil is in the implementation.

So — anyone have any informed ideas about how it works in practice, and whether it’s helpful and realistic? (Early results seem to be mildly promising.) I worry from skimming some of the information that there seems to be an enormous emphasis on “assessment,” which presumably translates into standardized testing. I recognize the value of such testing in the right context, but also have the feeling that it’s already way overdone (in part because of No Child Left Behind), and the Common Core just adds another layer of requirements. I’d rather have students and schools spend more time on teaching and less time on testing, all else being equal.

Posted in Academia | 54 Comments

Quantum Mechanics Smackdown

Greetings from the Big Apple, where the World Science Festival got off to a swinging start with the announcement of the Kavli Prize winners. The local favorite will of course be the Astrophysics prize, which was awarded to Alan Guth, Andrei Linde, and Alexei Starobinsky for pioneering the theory of cosmic inflation. But we should also congratulate Nanoscience winners Thomas Ebbesen, Stefan Hell, and Sir John B. Pendry, as well as Neuroscience winners Brenda Milner, John O’Keefe, and Marcus E. Raichle.

I’m participating in several WSF events, and one of them tonight will be live-streamed on this very internet. The title is Measure for Measure: Quantum Physics and Reality, and we kick off at 8pm Eastern, 5pm Pacific. The live-stream is here, but I’ll also try to embed it and see how that goes:

The other participants are David Albert, Sheldon Goldstein, and Rüdiger Schack, with the conversation moderated by Brian Greene. The group is not merely a randomly-selected collection of people who know and love quantum mechanics; each participant was carefully chosen to defend a certain favorite version of this most mysterious of physical theories.

  • David Albert will propound the idea of dynamical collapse theories, such as the Ghirardi-Rimini-Weber (GRW) model. They posit that QM is truly stochastic, with wave functions really “collapsing” at unpredictable times, with a tiny rate that is negligible for individual particles but becomes rapid for macroscopic objects.
  • Shelly Goldstein will support some version of hidden-variable theories such as Bohmian mechanics. It’s sometimes thought that hidden variables have been ruled out by experimental tests of Bell’s inequalities, but that’s not right; only local hidden variables have been excluded. Non-local hidden variables are still very viable!
  • Rüdiger Schack will be telling us about a relatively new approach called Quantum Bayesianism, or QBism for short. (I don’t love the approach, but the nickname is awesome.) The idea here is that QM is really a theory about our ignorance of the world, similar to what Tom Banks defended here way back when.
  • My job, of course, will be to defend the honor of the Everett (many-worlds) formulation. I’ve done a lot less serious research on this issue than the other folks, but I will make up for that disadvantage by supporting the theory that is actually true. And coincidentally, by the time we’ve started debating I should have my first official paper on the foundations of QM appear on the arxiv: new work on deriving the Born Rule in Everett with Chip Sebens.
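The claim in the first bullet — that dynamical collapse is negligible for individual particles but rapid for macroscopic objects — is simple arithmetic, given the conventional GRW parameter choice of roughly one localization event per nucleon per 10^16 seconds. The dust-grain size below is a made-up illustrative number:

```python
# Back-of-the-envelope GRW collapse rates, using the conventional
# GRW localization rate per nucleon (~1e-16 per second).
GRW_RATE_PER_NUCLEON = 1e-16        # s^-1, the standard GRW choice

nucleons_in_dust_grain = 1e18       # a barely-visible macroscopic object (illustrative)

single_particle_rate = GRW_RATE_PER_NUCLEON                       # essentially never
dust_grain_rate = GRW_RATE_PER_NUCLEON * nucleons_in_dust_grain   # many hits per second

seconds_per_year = 3.15e7
years_between_hits_single = 1 / (single_particle_rate * seconds_per_year)

print(f"Single nucleon: one collapse every ~{years_between_hits_single:.0e} years")
print(f"Dust grain:     ~{dust_grain_rate:.0f} collapses per second")
```

A lone nucleon waits hundreds of millions of years between hits, while anything big enough to see localizes almost instantly — which is exactly the feature that lets GRW reproduce both quantum interference and definite macroscopic outcomes.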

(For what it’s worth, I cannot resist quoting David Wallace in this context: when faced with the measurement problem in quantum mechanics, philosophers are eager to change the physics, while physicists are sure it’s just a matter of better philosophy.)

(Note also that both Steven Weinberg and Gerard ’t Hooft have proposed new approaches to thinking about quantum mechanics. Neither of them was judged to be sufficiently distinguished to appear on our panel.)

It’s not accidental that I call these “formulations” rather than “interpretations” of quantum mechanics. I’d like to see people abandon the phrase “interpretation of quantum mechanics” entirely (though I often slip up and use it myself). The options listed above are not different interpretations of the same underlying structure — they are legitimately different physical theories, with potentially different experimental consequences (as our recent work on quantum fluctuations shows).

Relatedly, I discovered this morning that celebrated philosopher Hilary Putnam has joined the blogosphere, with the whimsically titled “Sardonic Comment.” His very first post shares an email conversation he had about the measurement problem in QM, including my co-panelists David and Shelly, and also Tim Maudlin and Roderich Tumulka (but not me). I therefore had the honor of leaving the very first comment on Hilary Putnam’s blog, encouraging him to bring more Everettians into the discussion!

Posted in Philosophy, Science | 53 Comments

The Meaning of Life

I have been a crappy blogger, and I blame real life for getting in the way. (No, that’s not the meaning of life.) I keep meaning to say something more substantial about the BICEP2 controversy — in the meantime check out Raphael Flauger’s talk, Matias Zaldarriaga’s talk (slides), this paper by Mortonson and Seljak, or this blog post by Richard Easther.

At least I have been a productive scientist! One paper on the expected amount of inflation with Grant Remmen, and one on the evolution of complexity in closed systems with Scott Aaronson and Lauren Ouellette (no relation to Jennifer). Promise to blog about them soon.

But not too soon, as I’m about to hop on airplanes again: first for the World Science Festival, then for the Cheltenham Science Festival. (Cheltenham is actually part of the world, but the two festivals are quite different.) Note that at the WSF, our session on Quantum Physics and Reality (with Brian Greene, David Albert, Sheldon Goldstein, and Ruediger Schack, Thursday at 8pm Eastern) will be live-streamed. Maybe the Science and Story event (with Steven Pinker, Jo Marchant, Joyce Carol Oates, and E.L. Doctorow, Thursday at 5:30 Eastern) will be also, I don’t know.

So, in lieu of original content, here is seven minutes of me pronouncing sonorously on the meaning of life. This is from a debate I participated in with Michael Shermer, Dinesh D’Souza, and Ian Hutchinson (not the Greer-Heard Forum debate with William Lane Craig, as I originally thought). I talked about how naturalists find meaning in our finite lives, without any guidance from the outside world.

I had nothing to do with the making of the video, and I have no idea where the visuals are from. It’s associated with The Inspiration Journey group on Facebook.

When I extend a kind of olive branch to believers, I do so in all sincerity. I unambiguously disagree with religious people on matters of fundamental ontology; but I recognize that we’re all just tiny little persons in a very big universe, trying our best to figure things out. And I’m firm in my conviction that we’re making progress.

Posted in Personal, Philosophy | 40 Comments

Arrrgh Rumors

Today’s hot issue in my favorite corners of the internet (at least, besides “What’s up with Solange?”) is the possibility that the BICEP2 discovery of the signature of gravitational waves in the CMB might not be right after all. At least, that’s the rumor, spread in this case by Adam Falkowski at Résonaances. The claim is that one of the methods used by the BICEP2 team to estimate its foregrounds (polarization induced by the galaxy and other annoying astrophysical stuff, rather than imprinted on the microwave background from early times) relied on a pdf image of data from the Planck satellite, and that image was misinterpreted.


Is it true? I have no idea. It could be. Or it could be completely inconsequential. (For a very skeptical take, see Sesh Nadathur.) It seems that this was indeed one of the methods used by BICEP2 to estimate foregrounds, but it wasn’t the only one. A big challenge for the collaboration is that BICEP2 only observes in one frequency of microwaves, which makes it very hard to distinguish signals from foregrounds. (Often you can take advantage of the fact that we know the frequency dependence of the CMB, and it’s different from that of the foregrounds — but not if you only measure one frequency.) As excited as we’ve all been about the discovery, it’s important to be cautious, especially when something dramatic has only been found by a single experiment. That’s why most of us have tried hard to include caveats like “if it holds up” every time we wax enthusiastic about what it all means.
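The point about frequencies can be made concrete with a toy linear model. Suppose the measured polarization amplitude at each frequency is (CMB signal) plus (foreground) times a known frequency scaling. With two frequencies you can solve a 2×2 linear system for the two components; with one frequency the system is underdetermined. All the scalings and amplitudes below are invented for illustration, not real BICEP2 or Planck numbers.

```python
import numpy as np

# Toy component separation: measured amplitude at frequency nu is
#   m(nu) = s_cmb * f_cmb(nu) + s_dust * f_dust(nu)
# with known (hypothetical) frequency scalings f_cmb, f_dust.
f_cmb = {100: 1.0, 150: 1.0}    # CMB: flat scaling in these units
f_dust = {100: 0.5, 150: 1.5}   # dust: rises with frequency (made-up numbers)

# True underlying amplitudes (what we want to recover):
s_cmb, s_dust = 2.0, 1.0

# Observations at two frequencies:
m100 = s_cmb * f_cmb[100] + s_dust * f_dust[100]
m150 = s_cmb * f_cmb[150] + s_dust * f_dust[150]

# Two frequencies -> invertible 2x2 system -> unique separation:
A = np.array([[f_cmb[100], f_dust[100]],
              [f_cmb[150], f_dust[150]]])
recovered = np.linalg.solve(A, np.array([m100, m150]))  # recovers (s_cmb, s_dust)

# One frequency -> one equation, two unknowns: any (s, d) satisfying
# s * f_cmb[100] + d * f_dust[100] == m100 fits that data equally well.
alternative = ((m100 - 3.0 * f_dust[100]) / f_cmb[100], 3.0)
assert abs(alternative[0] * f_cmb[100] + alternative[1] * f_dust[100] - m100) < 1e-12
```

With a single band, BICEP2 has to constrain the dust term from external information — which is exactly where the disputed Planck pdf image comes in.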

However. I have no problem with the blog rumors — it’s great that social media enable scientists to examine and challenge results out in the open, rather than relying on being part of some in-crowd. The problem is when this perfectly normal chit-chat gets elevated to some kind of big news story. To unfairly single someone out, here’s Science NOW, with a headline “Blockbuster Big Bang Result May Fizzle, Rumor Suggests.” The evidence put forward for that fizzling is nothing but the Résonaances blog post, which consists in turn of some anonymous whispers. (Including the idea that “the BICEP team has now admitted to the mistake,” which the team has subsequently strongly denied.)

I would claim that is some bad journalism right there. (Somewhat more nuanced stories appeared at New Scientist and National Geographic.) If a reporter could talk to an actual CMB scientist, who would offer an informed opinion on the record that BICEP2 had made a mistake, that would be well worth reporting (along with the appropriate responses from the BICEP2 team itself). But an unsourced rumor on a blog isn’t news (not even from this blog!). As Peter Coles says, “Rational scepticism is a very good thing. It’s one of the things that makes science what it is. But it all too easily turns into mudslinging.”

We’re having a workshop on the CMB and inflation here at Caltech this weekend, featuring talks from representatives of both BICEP2 and Planck. I was going to wait to talk about this until I actually had some idea of what was going on, which hopefully that workshop will provide. Right now I have no idea what the answer is — I suspect the BICEP2 result is fine, as they did things other than just look at that one pdf file, but I don’t pretend to be an expert, and I’ll quickly change my mind if that’s what the evidence indicates. But other non-experts rely on the media to distinguish between what’s true and what’s merely being gossiped about, and this is an example where they could do a better job.

Posted in Science, Science and the Media | 22 Comments