Guest Post: Nicole Yunger Halpern on What Makes Extraordinary Science Extraordinary

Nicole Yunger Halpern is a theoretical physicist at Caltech’s Institute for Quantum Information and Matter (IQIM).  She blends quantum information theory with thermodynamics and applies the combination across science, including to condensed matter; black-hole physics; and atomic, molecular, and optical physics. She writes for Quantum Frontiers, the IQIM blog, every month.


What makes extraordinary science extraordinary?

Political junkies watch C-SPAN. Sports fans watch ESPN. Art collectors watch Christie’s. I watch scientists respond to ideas.

John Preskill—Caltech professor, quantum-information theorist, and my PhD advisor—serves as the Chief Justice John Roberts of my C-SPAN. Ideas fly during group meetings, at lunch outside a campus cafeteria, and in John’s office. Many ideas encounter a laconicism compared with which Ernest Hemingway babbles. “Hmm,” I hear. “Ok.” “Wait… What?”

The occasional idea provokes an “mhm.” The final syllable has a higher pitch than the first. Usually, the inflection change conveys agreement and interest. Receiving such an “mhm” brightens my afternoon like a Big Dipper sighting during a 9 PM trudge home.

Hearing “That’s cool,” “Nice,” or “I’m excited,” I cartwheel internally.

What distinguishes “ok” ideas from “mhm” ideas? Peeling the Preskillite trappings off this question reveals its core: What distinguishes good science from extraordinary science?

I’ve been grateful for opportunities to interview senior scientists, over the past few months, from coast to coast. The opinions I collected varied. Several interviewees latched onto the question as though they pondered it daily. A couple of interviewees balked (I don’t know; that’s tricky…) but summoned up a sermon. All the responses fired me up: The more wisps of mist withdrew from the nature of extraordinary science, the more I burned to contribute.

I’ll distill, interpret, and embellish upon the opinions I received. Italics flag lines that I assembled to capture ideas that I heard, as well as imperfect memories of others’ words. Quotation marks surround lines that others constructed. Feel welcome to chime in, in the “comments” section.

One word surfaced in all, or nearly all, my conversations: “impact.” Extraordinary science changes how researchers across the world think. Extraordinary science reaches beyond one subdiscipline.

This reach reminded me of answers to a question I’d asked senior scientists when in college: “What do you mean by ‘beautiful’?” Replies had varied, but a synopsis had crystallized: “Beautiful science enables us to explain a lot with a little.” Schrödinger’s equation, which describes how quantum systems evolve, fits on one line. But the equation describes electrons bound to atoms, particles trapped in boxes, nuclei in magnetic fields, and more. Beautiful science, which overlaps with extraordinary science, captures much of nature in a small net.
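
For reference, that one line is the time-dependent Schrödinger equation,

i\hbar \, \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H} \, |\psi(t)\rangle ,

where \hat{H} is the Hamiltonian of the system.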

Inventing a field constitutes extraordinary science. Examples include the fusion of quantum information with high-energy physics. Entanglement, quantum computation, and error correction are illuminating black holes, wormholes, and space-time.

Extraordinary science surprises us, revealing faces that we never expected nature to wear. Many extraordinary experiments generate data inexplicable with existing theories. Some extraordinary theory accounts for puzzling data; some extraordinary theory provokes experiments. I graduated from the Perimeter Scholars International master’s program, at the Perimeter Institute for Theoretical Physics, almost five years ago. Canadian physicist Art McDonald presented my class’s commencement address. An interest in theory, he said, brought you to this institute. Plunge into theory, if you like. Theorem away. But keep a bead on experiments. Talk with experimentalists; work to understand them. McDonald won a Nobel Prize, two years later, for directing the Sudbury Neutrino Observatory (SNO). (SNOLab, with the Homestake experiment, revealed properties of subatomic particles called “neutrinos.” A neutrino’s species can change, and neutrinos have tiny masses. Neutrinos might reveal why the universe contains more matter than antimatter.)

Not all extraordinary theory clings to experiment like bubblegum to hair. Elliott Lieb and Mary Beth Ruskai proved that quantum entropies obey an inequality called “strong subadditivity” (SSA).  Entropies quantify uncertainty about which outcomes measurements will yield. Experimentalists could test SSA’s governance of atoms, ions, and materials. But no physical platform captures SSA’s essence.
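
In symbols, writing S(\rho) = -\mathrm{Tr}(\rho \log \rho) for the von Neumann entropy, strong subadditivity says that for any tripartite state \rho_{ABC}

S(\rho_{ABC}) + S(\rho_{B}) \leq S(\rho_{AB}) + S(\rho_{BC}) .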

Abstract mathematics underlies Lieb and Ruskai’s theorem: convexity and concavity (properties of functions), the Golden-Thompson inequality (a theorem about exponentials of matrices), etc. Some extraordinary theory dovetails with experiment; some wings away.
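
The Golden–Thompson inequality, for instance, states that for Hermitian matrices A and B

\mathrm{Tr}\, e^{A+B} \leq \mathrm{Tr}\left( e^{A} e^{B} \right) ,

with equality when A and B commute.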

One interviewee sees extraordinary science in foundational science. At our understanding’s roots lie ideas that fertilize diverse sprouts. Other extraordinary ideas provide tools for calculating, observing, or measuring. Richard Feynman sped up particle-physics computations, for instance, by drawing diagrams.  Those diagrams depict high-energy physics as the creation, separation, recombination, and annihilation of particles. Feynman drove not only a technical, but also a conceptual, advance. Some extraordinary ideas transform our views of the world.

Difficulty preoccupied two experimentalists. An experiment isn’t worth undertaking, one said, if it isn’t difficult. A colleague, said another, “does the impossible and makes it look easy.”

Simplicity preoccupied two theorists. I wrung my hands, during year one of my PhD, in an email to John. The results I’d derived—now that I’d found them— looked as though I should have noticed them months earlier. What if the results lacked gristle? “Don’t worry about too simple,” John wrote back. “I like simple.”

Another theorist agreed: Simplification promotes clarity. Not all simple ideas “go the distance.” But ideas run farther when stripped down than when weighed down by complications.

Extraordinary scientists have a sense of taste. Not every idea merits exploration. Identifying the ideas that do requires taste, or style, or distinction. What distinguishes extraordinary science? More of the theater critic and Julia Child than I expected five years ago.

With gratitude to the thinkers who let me pick their brains.


A Response to “On the time lags of the LIGO signals” (Guest Post)

This is a special guest post by Ian Harry, postdoctoral physicist at the Max Planck Institute for Gravitational Physics, Potsdam-Golm. You may have seen stories about a paper that recently appeared, which called into question whether the LIGO gravitational-wave observatory had actually detected signals from inspiralling black holes, as they had claimed. Ian’s post is an informal response to these claims, on behalf of the LIGO Scientific Collaboration. He argues that there are data-analysis issues that render the new paper, by James Creswell et al., incorrect. Happily, enough online tools are available that interested parties can investigate the question for themselves. Here’s Ian:


On 13 Jun 2017 a paper appeared on the arXiv titled “On the time lags of the LIGO signals” by Creswell et al. This paper calls into question the 5-sigma detection claim of GW150914 and subsequent detections. In this short response, I will refute these claims.

Who am I? I am a member of the LIGO collaboration. I work on the analysis of LIGO data, and for 10 years have been developing searches for compact binary mergers. The conclusions I draw here have been checked by a number of colleagues within the LIGO and Virgo collaborations. We are also in touch with the authors of the article to raise these concerns directly, and plan to write a more formal short paper for submission to the arXiv explaining in more detail the issues I mention below. In the interest of timeliness, and in response to numerous requests from outside of the collaboration, I am sharing these notes in the hope that they will clarify the situation.

In this article I will go into some detail to try to refute the claims of Creswell et al. Let me start, though, by giving a brief overview. In Creswell et al. the authors take LIGO data from the Hanford and Livingston observatories, made available through the LIGO Open Science Center, and perform a simple Fourier analysis on that data. They find the noise to be correlated as a function of frequency. They also perform a time-domain analysis and claim that there are correlations between the noise in the two observatories, which are present after removing the GW150914 signal from the data. These results are used to cast doubt on the reliability of the GW150914 observation. There are a number of reasons why this conclusion is incorrect:

1. The frequency-domain correlations they are seeing arise from the way they do their FFT on the filtered data. We have managed to demonstrate the same effect with simulated Gaussian noise.
2. LIGO analyses use whitened data when searching for compact binary mergers such as GW150914. When the analysis of Creswell et al. is repeated on whitened data, these effects are completely absent.
3. Our 5-sigma significance comes from a procedure of repeatedly time-shifting the data, which is not invalidated if correlations of the type described in Creswell et al. are present.

Section II: The curious case of the Fourier phase correlations?

The main result (in my opinion) from section II of Creswell et al. is Figure 3, which shows that, when one takes the Fourier transform of the LIGO data containing GW150914 and plots the Fourier phases as a function of frequency, one can see a clear correlation (i.e., all the points line up, especially for the Hanford data). I was able to reproduce this with the LIGO Open Science Center data and a small ipython notebook. I have made the ipython notebook available so that the reader can see this and some additional plots, and reproduce the analysis.

For Gaussian noise we would expect the Fourier phases to be distributed randomly (between -pi and pi). Clearly in the plot shown above, and in Creswell et al., this is not the case. However, the authors overlooked one critical detail here. When you take a Fourier transform of a time series you are implicitly assuming that the data are cyclical (i.e. that the first point is adjacent to the last point). For colored Gaussian noise this assumption will lead to a discontinuity in the data at the two end points, because these data are not causally connected. This discontinuity can be responsible for misleading plots like the one above.

To try to demonstrate this I perform two tests. First I whiten the colored LIGO noise by measuring the power spectral density (see the LOSC example, which I use directly in my ipython notebook, for some background on colored noise and noise power spectral density), then dividing the data in the Fourier domain by the power spectral density, and finally converting back to the time domain. This process will corrupt some data at the edges so after whitening we only consider the middle half of the data. Then we can make the same plot:

And we can see that there are now no correlations visible in the data. For white Gaussian noise there is no correlation between adjacent points, so no discontinuity is introduced when treating the data as cyclical. I therefore assert that Figure 3 of Creswell et al. actually has no meaning when generated using anything other than whitened data.
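
For concreteness, here is a minimal sketch of that whitening step (my own simplified version, not the exact LOSC notebook code, and glossing over windowing and overall normalization), written with numpy and scipy:

```python
import numpy as np
from scipy.signal import welch

def whiten(strain, fs, seg_sec=4):
    """Crudely whiten a time series by its estimated amplitude spectral density."""
    # Estimate the one-sided power spectral density with Welch's method
    f_psd, psd = welch(strain, fs=fs, nperseg=int(seg_sec * fs))

    # Fourier transform the full stretch of data
    freqs = np.fft.rfftfreq(len(strain), d=1.0 / fs)
    strain_f = np.fft.rfft(strain)

    # Interpolate the PSD onto the FFT frequency grid and divide by sqrt(PSD)
    asd = np.sqrt(np.interp(freqs, f_psd, psd))
    asd[asd == 0] = np.inf          # avoid dividing by zero at DC
    white = np.fft.irfft(strain_f / asd, n=len(strain))

    # Whitening corrupts the segment edges, so keep only the middle half
    n = len(white)
    return white[n // 4: 3 * n // 4]
```

Here strain would be the LOSC strain data and fs its sample rate (e.g. 4096 Hz); the function and variable names are mine, not those of the notebook.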

I would also like to mention that measuring the noise power spectral density of LIGO data can be challenging when the data are non-stationary and include spectral lines (as Creswell et al. point out). Therefore it can be difficult to whiten data in many circumstances. For the Livingston data some of the spectral lines are still clearly present after whitening (using the methods described in the LOSC example), and then mild correlations are present in the resulting plot (see ipython notebook). This is not indicative of any type of non-Gaussianity, but demonstrates that measuring the noise power-spectral density of LIGO noise is difficult, and, especially for parameter inference, a lot of work has been spent on answering this question.

To further illustrate that features like those seen in Figure 3 of Creswell et al. can be seen in known Gaussian noise, I perform an additional check (suggested by my colleague Vivien Raymond). I generate a 128-second stretch of white Gaussian noise (using numpy.random.normal) and invert the whitening procedure employed on the LIGO data above to produce 128 seconds of colored Gaussian noise. Now the data, previously random, are ‘colored’. Coloring the data in the manner I did makes the full data set cyclical (the last point is correlated with the first), so when taking the Fourier transform of the complete data set, I see the expected random distribution of phases (again, see the ipython notebook). However, if I select 32 s from the middle of this data, introducing a discontinuity as I mention above, I can produce the following plot:

In other words, with actual Gaussian noise I can produce an example that is even more strongly correlated than the real data.
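
A rough numpy sketch of that check (using a toy power-law spectrum in place of the measured LIGO noise curve, so the details of the resulting phase plot will differ from the notebook) could be:

```python
import numpy as np

fs, duration = 4096, 128           # assumed sample rate (Hz) and length (s)
n = fs * duration

# Start from white Gaussian noise, as in the check described above
white = np.random.normal(size=n)

# Colour it in the frequency domain with a toy amplitude spectral density
# (the actual check inverted the whitening filter measured from LIGO data)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
asd = np.ones_like(freqs)
asd[1:] = freqs[1:] ** -1.0        # toy 1/f colouring; DC bin left alone
colored = np.fft.irfft(np.fft.rfft(white) * asd, n=n)

# Phases of the full (cyclic) data set: these should look random
phases_full = np.angle(np.fft.rfft(colored))

# Phases of the middle 32 s: the segment is no longer cyclic, so the
# end-point discontinuity can imprint structure on the Fourier phases
middle = colored[n // 2 - 16 * fs: n // 2 + 16 * fs]
phases_middle = np.angle(np.fft.rfft(middle))
```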

Section III: The data is strongly correlated even after removing the signal

The second half of Creswell et al. explores correlations between the data taken from Hanford and Livingston around GW150914. For me, the main conclusion here is communicated in Figure 7, where Creswell et al. claim that even after removal of the GW150914 best-fit waveform there is still correlation in the data between the two observatories. This is a result I have not been able to reproduce. Nevertheless, if such a correlation were present it would suggest that we have not perfectly subtracted the real signal from the data, which would not invalidate any detection claim. There could be any number of reasons for this, for example the fact that our best-fit waveform will not exactly match what is in the data as we cannot measure all parameters with infinite precision. There might also be some deviations because the waveform models we used, while very good, are only approximations to the real signal (LIGO put out a paper quantifying this possibility). Such deviations might also be indicative of a subtle deviation from general relativity. These are of course things that LIGO is very interested in pursuing, and we have published a paper exploring potential deviations from general relativity (finding no evidence for that), which includes looking for a residual signal in the data after subtraction of the waveform (and again finding no evidence for that).

Finally, LIGO runs “unmodelled” searches, which do not search for specific signals, but instead look for any coherent non-Gaussian behaviour in the observatories. These searches actually were the first to find GW150914, and did so with parameters remarkably consistent with those of the modelled searches, something we would not expect to be true if the modelled searches were “missing” some parts of the signal.

With all that said, I try to reproduce Figure 7. I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02 s window around GW150914. This produces the following:

There is a clear spike here at 7 ms (which is GW150914), with some expected “ringing” behaviour around this point. This is a much less powerful method of extracting the signal than matched filtering, but it is completely signal independent, and it illustrates how loud GW150914 is. Creswell et al., however, do not discuss their normalization of this cross-correlation, or how likely a deviation like this is to occur from noise alone. Such a study would be needed before stating that this is significant; in this case we know the signal is significant from other, more powerful, tests of the data. Then I repeat this, but after having removed the best-fit waveform from the data in both observatories (using only products made available in the LOSC example notebooks). This gives:

This shows nothing interesting at all.

Section IV: Why would such correlations not invalidate the LIGO detections?

Creswell et al. claim that correlations between the Hanford and Livingston data, which in their results appear to be maximized around the time delay reported for GW150914, raise questions about the integrity of the detection. They do not. The authors claim early on in their article that LIGO data analysis assumes that the data are Gaussian, independent and stationary. In fact, we know that LIGO data are neither Gaussian nor stationary, and if one reads through the technical paper accompanying the detection PRL, you can read about the many tests we run to try to distinguish between non-Gaussianities in our data and real signals.

But in doing such tests, we raise an important question: “If you see something loud, how can you be sure it is not some chance instrumental artifact, which somehow was missed in the various tests that you do?” Because of this we have to be very careful when assessing the significance (in terms of sigmas—or the p-value, to use the correct term). We assess the significance using a process called time-shifts. We first look through all our data for loud events within the 10 ms time window corresponding to the light travel time between the two observatories. Then we look again. Except the second time we look, we shift ALL of the data from Livingston by 0.1 s. This delay is much larger than the light travel time, so if we see any interesting “events” now, they cannot be genuine astrophysical events, but must be some noise transient. We then repeat this process with a 0.2 s delay, a 0.3 s delay, and so on, up to time delays on the order of weeks. In this way we have conducted of order 10 million experiments.

For the case of GW150914 the signal in the non-time-shifted data was louder than any event we saw in any of the time-shifted runs—all 10 million of them. In fact, it was not just louder but a lot louder than any event in the time-shifted runs. Therefore we can say that this is a 1-in-10-million event, without making any assumptions at all about our noise. Except one. The assumption is that the analysis with the Livingston data shifted by e.g. 8 s (or any of the other time shifts) is equivalent to the analysis with the Livingston data not shifted at all. Or, in other words, we assume that there is nothing special about the non-time-shifted analysis (other than that it might contain real signals!). As well as the technical papers, this is also described in the science summary that accompanied the GW150914 PRL.
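
To make the time-shift bookkeeping concrete, here is a toy sketch (entirely my own simplification, with made-up trigger lists and a crude ranking statistic; the real analysis is done with the PyCBC pipeline mentioned later in this post):

```python
import numpy as np

def loudest_coincidence(h_times, h_snr, l_times, l_snr, lag, window=0.010):
    """Loudest Hanford/Livingston coincidence after shifting Livingston by `lag` seconds."""
    loudest = 0.0
    for t, s in zip(h_times, h_snr):
        near = np.abs((l_times + lag) - t) < window
        if np.any(near):
            # Toy ranking statistic: quadrature sum of the two SNRs
            loudest = max(loudest, np.sqrt(s**2 + np.max(l_snr[near])**2))
    return loudest

def timeslide_pvalue(h_times, h_snr, l_times, l_snr, n_slides=1000, step=0.1):
    """Fraction of time-shifted analyses with a coincidence at least as loud as zero lag."""
    zero_lag = loudest_coincidence(h_times, h_snr, l_times, l_snr, lag=0.0)
    louder = sum(
        loudest_coincidence(h_times, h_snr, l_times, l_snr, lag=k * step) >= zero_lag
        for k in range(1, n_slides + 1)
    )
    return (louder + 1) / (n_slides + 1)
```

With millions of slides and no time-shifted coincidence louder than the zero-lag event, the estimated false-alarm probability is bounded at roughly the 1-in-(number of slides) level, which is the sense in which GW150914 is a 1-in-10-million event.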

Nothing in the paper “On the time lags of the LIGO signals” suggests that the non-time-shifted analysis is special. The claimed correlations between the two detectors due to resonance and calibration lines in the data would be present in the time-shifted analyses as well: the calibration lines are repetitive, so if they are correlated in the non-time-shifted analysis, they will also be correlated in the time-shifted analyses. I should also note that potential correlated noise sources were explored in another of the companion papers to the GW150914 PRL. Therefore, taking the results of this paper at face value, I see nothing that calls into question the “integrity” of the GW150914 detection.

Section V: Wrapping up

I have tried to reproduce the results quoted in “On the time lags of the LIGO signals”. I find that the claims of Section II are due to an issue in how the data are Fourier transformed, and I do not reproduce the correlations claimed in Section III. Even taking the results at face value, they would not affect the 5-sigma confidence associated with GW150914. Nevertheless, I am in contact with the authors and we will try to understand these discrepancies.

For people interested in trying to explore LIGO data, check out the LIGO Open Science Center tutorials. As someone who was involved in the internal review of the LOSC data products, it is rewarding to see these materials being used. It is true that these tutorials are intended as an introduction to LIGO data analysis, and do not accurately reflect many of the intricacies of these studies. For the interested reader, a number of technical papers, for example this one, accompany the main PRL, and within this paper and its references you can find all the nitty-gritty about how our analyses work. Finally, the PyCBC analysis toolkit, which was used to obtain the 5-sigma confidence, and of which I am one of the primary developers, is available open source on GitHub. There are instructions here, as well as a number of examples that illustrate various aspects of our data-analysis methods.

This article was circulated in the LIGO-Virgo Public Outreach and Education mailing list before being made public, and I am grateful for comments and feedback from: Christopher Berry, Ofek Birnholtz, Alessandra Buonanno, Gregg Harry, Martin Hendry, Daniel Hoak, Daniel Holz, David Keitel, Andrew Lundgren, Harald Pfeiffer, Vivien Raymond, Jocelyn Read and David Shoemaker.


Guest Post: Nathan Moynihan on Amplitudes for Astrophysicists

As someone who sits at Richard Feynman’s old desk, I take Feynman diagrams very seriously. They are a very convenient and powerful way of answering a certain kind of important physical question: given some set of particles coming together to interact, what is the probability that they will evolve into some specific other set of particles?

Unfortunately, actual calculations with Feynman diagrams can get unwieldy. The answers they provide are only approximate (though the approximations can be very good), and making the approximations just a little more accurate can be a tremendous amount of work. Enter the “amplitudes program,” a set of techniques for calculating these scattering probabilities more directly, without adding together a barrel full of Feynman diagrams. This isn’t my own area, but we’ve  had guest posts from Lance Dixon and  Jaroslav Trnka about this subject a while back.

But are these heady ideas just brain candy for quantum field theorists, or can they be applied more widely? A very interesting new paper just came out arguing that even astrophysicists — who usually deal with objects a lot bigger than a few colliding particles — can put amplitude technology to good use! And we’re very fortunate to have a guest post on the subject by one of the authors, Nathan Moynihan. Nathan is a grad student at the University of Cape Town, studying under Jeff Murugan and Amanda Weltman, and works on quantum field theory, gravity, and information. This is a great introduction to the application of cutting-edge mathematical physics to some (relatively) down-to-Earth phenomena.


In a recent paper, my collaborators and I (Daniel Burger, Raul Carballo-Rubio, Jeff Murugan and Amanda Weltman) make a case for applying modern methods in scattering amplitudes to astrophysical and cosmological scenarios. In this post, I would like to explain why I think this is interesting, and why you, if you’re an astrophysicist or cosmologist, might want to use the techniques we have outlined.

In a scattering experiment, objects of known momentum p are scattered from a localised target, with various possible outcomes predicted by some theory. In quantum mechanics, the probability of a particular outcome is given by the squared magnitude of the scattering amplitude, a complex number that can be derived directly from the theory. Scattering amplitudes are the central quantities of interest in (perturbative) quantum field theory, and in the last 15 years or so, there has been something of a minor revolution surrounding the tools used to calculate these quantities (partially inspired by the introduction of twistors into string theory). In general, a particle theorist will make a perturbative expansion of the path integral of her favorite theory into Feynman diagrams, and mechanically use the Feynman rules of the theory to calculate each diagram’s contribution to the final amplitude. This approach works perfectly well, although the calculations are often tough, depending on the theory.

Astrophysicists, on the other hand, are often not concerned too much with quantum field theories, preferring to work with the classical theory of general relativity. However, it turns out that you can, in fact, do the same thing with general relativity: perturbatively write down the Feynman diagrams and calculate scattering amplitudes, at least to first or second order. One of the simplest scattering events you can imagine in pure gravity is that of two gravitons scattering off one another: you start with two gravitons, they interact, you end up with two gravitons. It turns out that you can calculate this using Feynman diagrams, and the answer turns out to be strikingly simple, being barely one line.

The calculation, on the other hand, is utterly vicious. An unfortunate PhD student by the name of Walter G Wesley was given this monstrous task in 1963, and he found that to calculate the amplitude meant evaluating over 500 terms, which as you can imagine took the majority of his PhD to complete (no Mathematica!). The answer, in the end, is breathtakingly simple. In the centre of mass frame, the cross section for this amplitude is:

\frac{d\sigma}{d\Omega} = 4G^2E^2 \frac{\cos^{12}\frac12\theta}{\sin^{4}\frac12\theta}

where G is Newton’s constant, E is the energy of the gravitons, and \theta is the scattering angle. The fact that such calculations are so cumbersome has meant that many in the astrophysics community may eschew calculations of this type.

However, the fact that the answer is so simple implies that there may be an easier route to calculation than evaluating all 500 diagrams. Indeed, this is exactly what we have tried to allude to in our paper: using the methods we have outlined, this calculation can be done on half a page and with little effort.

The technology that we introduce can be summed up as follows, and should be easily recognised as common tools present in any physicist’s arsenal: a change of variables (for the more mathematically inclined reader, a change of representation), recursion relations, and complex analysis. Broadly speaking, the idea is to take the basic components of Feynman diagrams, namely momentum vectors and polarisation vectors (or tensors), and to represent them as spinors, objects that are inherently complex in nature (they are elements of a complex vector space). Once we do that, we can utilise the transformation rules of spinors to simplify calculations. This simplifies things a bit, but we can do better.
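
Schematically, for a massless particle the four-momentum can be traded for a pair of two-component spinors,

p_{a\dot{a}} = p_\mu \sigma^{\mu}_{a\dot{a}} = \lambda_a \tilde{\lambda}_{\dot{a}} ,

which is possible precisely because the 2×2 matrix p_{a\dot{a}} has vanishing determinant when p^2 = 0.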

The next trick is to observe that Feynman diagrams all have one common feature: they contain real singularities (meaning the amplitude blows up) wherever there are physical internal lines. These singularities usually show up as poles, functions that behave like 1/z^n. Typically, an internal line might contribute a factor like \frac{1}{p^2 + m^2}, which obviously blows up around p^2 = -m^2.

We normally make ourselves feel better by taking these internal lines to be virtual, meaning they don’t correspond to physical processes that satisfy the energy-momentum condition p^2 = -m^2 and thus never blow up. In contrast to this, the modern formulation insists that internal lines do satisfy the condition, but that the pole is complex. Now that we have complex poles, we can utilise the standard tools of complex analysis, which tells us that if you know about the poles and residues, you know everything. For this to work, we are required to insist that at least some of external momentum is complex, since the internal momentum depends on the external momentum. Thankfully, we can do this in such a way that momentum conservation holds and that the square of the momentum is still physical.

The final ingredient we need is known as the BCFW recursion relations, an indispensable tool used by the amplitudes community, developed by Britto, Cachazo, Feng and Witten in 2005. Roughly speaking, these relations tell us that we can turn a complex, singular, on-shell amplitude of any number of particles into a product of 3-particle amplitudes glued together by poles. Essentially, this means we can treat amplitudes like lego bricks and stick them together in an intelligent way in order to construct a really-difficult-to-compute amplitude from some relatively simple ones.
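
In schematic form (and assuming the shifted amplitude vanishes at large z), one deforms two external spinors by a complex parameter z,

\hat{\lambda}_i(z) = \lambda_i + z\,\lambda_j , \qquad \hat{\tilde{\lambda}}_j(z) = \tilde{\lambda}_j - z\,\tilde{\lambda}_i ,

and reconstructs the full amplitude from the residues at its factorization poles,

A_n = \sum_{I} \hat{A}_L(z_I)\,\frac{1}{P_I^2}\,\hat{A}_R(z_I) ,

where the sum runs over the ways of splitting the external particles into two groups, P_I is the momentum flowing between them, and z_I is the value of z at which the shifted \hat{P}_I^2 vanishes.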

In the paper, we show how this can be achieved using the example of scattering a graviton off a scalar. This interaction is interesting since it’s a representation of a gravitational wave being ‘bent’ by the gravitational field of a massive object like a star. We show, in a couple of pages, that the result calculated using these methods exactly corresponds with what you would calculate using general relativity.

If you’re still unconvinced by the utility of what I’ve outlined, then do look out for the second paper in the series, hopefully coming in the not too distant future. Whether you’re a convert or not, our hope is that these methods might be useful to the astrophysics and cosmology communities in the future, and I would welcome any comments from any members of those communities.


Guest Post: Grant Remmen on Entropic Gravity

“Understanding quantum gravity” is on every physicist’s short list of Big Issues we would all like to know more about. If there’s been any lesson from the last half-century of serious work on this problem, it’s that the answer is likely to be something more subtle than just “take classical general relativity and quantize it.” Quantum gravity doesn’t seem to be an ordinary quantum field theory.

In that context, it makes sense to take many different approaches and see what shakes out. Alongside old stand-bys such as string theory and loop quantum gravity, there are less head-on approaches that try to understand how quantum gravity can really be so weird, without proposing a specific and complete model of what it might be.

Grant Remmen, a graduate student here at Caltech, has been working with me recently on one such approach, dubbed entropic gravity. We just submitted a paper entitled “What Is the Entropy in Entropic Gravity?” Grant was kind enough to write up this guest blog post to explain what we’re talking about.

Meanwhile, if you’re near Pasadena, Grant and his brother Cole have written a musical, Boldly Go!, which will be performed at Caltech in a few weeks. You won’t want to miss it!


One of the most exciting developments in theoretical physics in the past few years is the growing understanding of the connections between gravity, thermodynamics, and quantum entanglement. Famously, a complete quantum mechanical theory of gravitation is difficult to construct. However, one of the aspects that we are now coming to understand about quantum gravity is that in the final theory, gravitation and even spacetime itself will be closely related to, and maybe even emergent from, the mysterious quantum mechanical property known as entanglement.

This all started several decades ago, when Hawking and others realized that black holes exhibit many of the same properties as garden-variety thermodynamic systems, including temperature, entropy, etc. Most importantly, the black hole’s entropy is equal to its area [divided by (4 times Newton’s constant)]. Attempts to understand the origin of black hole entropy, along with key developments in string theory, led to the formulation of the holographic principle – see, for example, the celebrated AdS/CFT correspondence – in which quantum gravitational physics in some spacetime is found to be completely described by some special non-gravitational physics on the boundary of the spacetime. In a nutshell, one gets a gravitational universe as a “hologram” of a non-gravitational universe.
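
In the units usually adopted here (with \hbar = c = k_B = 1), this is the Bekenstein–Hawking entropy,

S_{\mathrm{BH}} = \frac{A}{4G} ,

where A is the area of the event horizon and G is Newton’s constant.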

If gravity can emerge from, or be equivalent to, a set of physical laws without gravity, then something special about that non-gravitational physics has to make it happen. Physicists have now found that that special something is quantum entanglement: the special correlations among quantum mechanical particles that defy classical description. As a result, physicists are very interested in how to get the dynamics describing how spacetime is shaped and moves – Einstein’s equation of general relativity – from various properties of entanglement. In particular, it’s been suggested that the equations of gravity can be shown to come from some notion of entropy. As our universe is quantum mechanical, we should think about the entanglement entropy, a measure of the degree of correlation of quantum subsystems, which for thermal states matches the familiar thermodynamic notion of entropy.

The general idea is as follows: Inspired by black hole thermodynamics, suppose that there’s some more general notion, in which you choose some region of spacetime, compute its area, and find that when its area changes this is associated with a change in entropy. (I’ve been vague here as to what is meant by a “change” in the area and what system we’re computing the area of – this will be clarified soon!) Next, you somehow relate the entropy to an energy (e.g., using thermodynamic relations). Finally, you write the change in area in terms of a change in the spacetime curvature, using differential geometry. Putting all the pieces together, you get a relation between an energy and the curvature of spacetime, which, if everything goes well, gives you nothing more or less than Einstein’s equation! This program can be broadly described as entropic gravity, and the idea has appeared in numerous forms. With the plethora of entropic gravity theories out there, we realized that there was a need to investigate what categories they fall into and whether their assumptions are justified – this is what we’ve done in our recent work.

In particular, there are two types of theories in which gravity is related to (entanglement) entropy, which we’ve called holographic gravity and thermodynamic gravity in our paper. The difference between the two is in what system you’re considering, how you define the area, and what you mean by a change in that area.

In holographic gravity, you consider a region and define the area as that of its boundary, then consider various alternate configurations and histories of the matter in that region to see how the area would be different. Recent work in AdS/CFT, in which Einstein’s equation at linear order is equivalent to something called the “entanglement first law”, falls into the holographic gravity category. This idea has been extended to apply outside of AdS/CFT by Jacobson (2015). Crucially, Jacobson’s idea is to apply holographic mathematical technology to arbitrary quantum field theories in the bulk of spacetime (rather than specializing to conformal field theories – special physical models – on the boundary as in AdS/CFT) and thereby derive Einstein’s equation. However, in this work, Jacobson needed to make various assumptions about the entanglement structure of quantum field theories. In our paper, we showed how to justify many of those assumptions, applying recent results derived in quantum field theory (for experts, the form of the modular Hamiltonian and vacuum-subtracted entanglement entropy on null surfaces for general quantum field theories). Thus, we are able to show that the holographic gravity approach actually seems to work!
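
Schematically, the “entanglement first law” mentioned above says that a small perturbation of the state changes the entanglement entropy of a region by the same amount as it changes the expectation value of the region’s modular Hamiltonian,

\delta S_A = \delta \langle K_A \rangle .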

On the other hand, thermodynamic gravity is of a different character. Though it appears in various forms in the literature, we focus on the famous work of Jacobson (1995). In thermodynamic gravity, you don’t consider changing the entire spacetime configuration. Instead, you imagine a bundle of light rays – a lightsheet – in a particular dynamical spacetime background. As the light rays travel along – as you move down the lightsheet – the rays can be focused by curvature of the spacetime. Now, if the bundle of light rays started with a particular cross-sectional area, you’ll find a different area later on. In thermodynamic gravity, this is the change in area that goes into the derivation of Einstein’s equation. Next, one assumes that this change in area is equivalent to an entropy – in the usual black hole way with a factor of 1/(4 times Newton’s constant) – and that this entropy can be interpreted thermodynamically in terms of an energy flow through the lightsheet. The entropy vanishes from the derivation and the Einstein equation almost immediately appears as a thermodynamic equation of state. What we realized, however, is that what the entropy is actually the entropy of was ambiguous in thermodynamic gravity. Surprisingly, we found that there doesn’t seem to be a consistent definition of the entropy in thermodynamic gravity – applying quantum field theory results for the energy and entanglement entropy, we found that thermodynamic gravity could not simultaneously reproduce the correct constant in the Einstein equation and in the entropy/area relation for black holes.
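
For orientation, the thermodynamic route rests on a local Clausius relation, with the entropy taken proportional to the cross-sectional area in the usual black-hole way and the temperature identified with the Unruh temperature of a locally accelerating observer,

\delta Q = T\,\delta S , \qquad S = \frac{A}{4G} , \qquad T = \frac{a}{2\pi} \quad (\hbar = c = k_B = 1) ,

from which the Einstein equation emerges as an equation of state.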

So when all is said and done, we’ve found that holographic gravity, but not thermodynamic gravity, is on the right track. To answer our own question in the title of the paper, we found – in admittedly somewhat technical language – that the vacuum-subtracted von Neumann entropy evaluated on the null boundary of small causal diamonds gives a consistent formulation of holographic gravity. The future looks exciting for finding the connections between gravity and entanglement!


Guest Post: Aidan Chatwin-Davies on Recovering One Qubit from a Black Hole

The question of how information escapes from evaporating black holes has puzzled physicists for almost forty years now, and while we’ve learned a lot we still don’t seem close to an answer. Increasingly, people who care about such things have been taking more seriously the intricacies of quantum information theory, and learning how to apply that general formalism to the specific issues of black hole information.

Now two students and I have offered a small contribution to this effort. Aidan Chatwin-Davies is a grad student here at Caltech, while Adam Jermyn was an undergraduate who has now gone on to do graduate work at Cambridge. Aidan came up with a simple method for getting out one “quantum bit” (qubit) of information from a black hole, using a strategy similar to “quantum teleportation.” Here’s our paper that just appeared on the arXiv:

How to Recover a Qubit That Has Fallen Into a Black Hole
Aidan Chatwin-Davies, Adam S. Jermyn, Sean M. Carroll

We demonstrate an algorithm for the retrieval of a qubit, encoded in spin angular momentum, that has been dropped into a no-firewall unitary black hole. Retrieval is achieved analogously to quantum teleportation by collecting Hawking radiation and performing measurements on the black hole. Importantly, these methods only require the ability to perform measurements from outside the event horizon and to collect the Hawking radiation emitted after the state of interest is dropped into the black hole.

It’s a very specific — i.e. not very general — method: you have to have done measurements on the black hole ahead of time, and then drop in one qubit, and we show how to get it back out. Sadly it doesn’t work for two qubits (or more), so there’s no obvious way to generalize the procedure. But maybe the imagination of some clever person will be inspired by this particular thought experiment to come up with a way to get out two qubits, and we’ll be off.

I’m happy to host this guest post by Aidan, explaining the general method behind our madness.


If you were to ask someone on the bus which of Stephen Hawking’s contributions to physics he or she thought was most notable, the answer that you would almost certainly get is his prediction that a black hole should glow as if it were an object with some temperature. This glow is made up of thermal radiation which, unsurprisingly, we call Hawking radiation. As the black hole radiates, its mass slowly decreases and the black hole decreases in size. So, if you waited long enough and were careful not to enlarge the black hole by throwing stuff back in, then eventually it would completely evaporate away, leaving behind nothing but a bunch of Hawking radiation.

At first glance, this phenomenon of black hole evaporation challenges a central notion in quantum theory, which is that it should not be possible to destroy information. Suppose, for example, that you were to toss a book, or a handful of atoms in a particular quantum state, into the black hole. As the black hole evaporates into a collection of thermal Hawking particles, what happens to the information that was contained in that book or in the state of (what were formerly) your atoms? One possibility is that the information actually is destroyed, but then we would have to contend with some pretty ugly foundational consequences for quantum theory. Instead, it could be that the information is preserved in the state of the leftover Hawking radiation, albeit highly scrambled and difficult to distinguish from a thermal state. Besides being very pleasing on philosophical grounds, we also have evidence for the latter possibility from the AdS/CFT correspondence. Moreover, if the process of converting a black hole to Hawking radiation conserves information, then a stunning result of Hayden and Preskill says that for sufficiently old black holes, any information that you toss in comes back out almost as fast as possible!

Even so, exactly how information leaks out of a black hole and how one would go about converting a bunch of Hawking radiation to a useful state is quite mysterious. On that note, what we did in a recent piece of work was to propose a protocol whereby, under very modest and special circumstances, you can toss one qubit (a single unit of quantum information) into a black hole and then recover its state, and hence the information that it carried.

More precisely, the protocol describes how to recover a single qubit that is encoded in the spin angular momentum of a particle, i.e., a spin qubit. Spin is a property that any given particle possesses, just like mass or electric charge. For particles that have spin equal to 1/2 (like those that we consider in our protocol), at least classically, you can think of spin as a little arrow which points up or down and says whether the particle is spinning clockwise or counterclockwise about a line drawn through the arrow. In this classical picture, whether the arrow points up or down constitutes one classical bit of information. According to quantum mechanics, however, spin can actually exist in a superposition of being part up and part down; these proportions constitute one qubit of quantum information.
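
In the standard notation, such a qubit is a superposition

|\psi\rangle = \alpha\,|\uparrow\rangle + \beta\,|\downarrow\rangle , \qquad |\alpha|^2 + |\beta|^2 = 1 ,

where the complex coefficients \alpha and \beta encode the “proportions” of up and down.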


So, how does one throw a spin qubit into a black hole and get it back out again? Suppose that Alice is sitting outside of a black hole, the properties of which she is monitoring. From the outside, a black hole is characterized by only three properties: its total mass, total charge, and total spin. This latter property is essentially just a much bigger version of the spin of an individual particle and will be important for the protocol.

Next, suppose that Alice accidentally drops a spin qubit into the black hole. First, she doesn’t panic. Instead, she patiently waits and collects one particle of Hawking radiation from the black hole. Crucially, when a Hawking particle is produced by the black hole, a bizarro version of the same particle is also produced, but just behind the black hole’s horizon (boundary) so that it falls into the black hole. This bizarro ingoing particle is the same as the outgoing Hawking particle, but with opposite properties. In particular, its spin state will always be flipped relative to the outgoing Hawking particle. (The outgoing Hawking particle and the ingoing particle are entangled, for those in the know.)
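
For illustration (this is the generic form of such an anticorrelated pair, not necessarily the exact state used in the paper), a maximally entangled pair with opposite spins can be written as a singlet,

|\Phi\rangle = \frac{1}{\sqrt{2}}\left( |\uparrow\rangle_{\mathrm{out}}\,|\downarrow\rangle_{\mathrm{in}} - |\downarrow\rangle_{\mathrm{out}}\,|\uparrow\rangle_{\mathrm{in}} \right) ,

so the spin of one member is always found flipped relative to the other.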


The picture so far is that Alice, who is outside of the black hole, collects a single particle of Hawking radiation whilst the spin qubit that she dropped and the ingoing bizarro Hawking particle fall into the black hole. When the dropped particle and the bizarro particle fall into the black hole, their spins combine with the spin of the black hole—but remember! The bizarro particle’s spin was highly correlated with the spin of the outgoing Hawking particle. As such, the new combined total spin of the black hole becomes highly correlated with the spin of the outgoing Hawking particle, which Alice now holds. So, Alice measures the black hole’s new total spin state. Then, essentially, she can exploit the correlations between her held Hawking particle and the black hole to transfer the old spin state of the particle that she dropped into the hole to the Hawking particle that she now holds. Alice’s lost qubit is thus restored. Furthermore, Alice didn’t even need to know the precise state that her initial particle was in to begin with; the qubit is recovered regardless!

That’s the protocol in a nutshell. If the words “quantum teleportation” mean anything to you, then you can think of the protocol as a variation on the quantum teleportation protocol where the transmitting party is the black hole and measurement is performed in the total angular momentum basis instead of the Bell basis. Of course, this is far from a resolution of the information problem for black holes. However, it is certainly a neat trick which shows, in a special set of circumstances, how to “bounce” a qubit of quantum information off of a black hole.


Guest Post: Don Page on God and Cosmology

Don Page is one of the world’s leading experts on theoretical gravitational physics and cosmology, as well as a previous guest-blogger around these parts. (There are more world experts in theoretical physics than there are people who have guest-blogged for me, so the latter category is arguably a greater honor.) He is also, somewhat unusually among cosmologists, an Evangelical Christian, and interested in the relationship between cosmology and religious belief.

Longtime readers may have noticed that I’m not very religious myself. But I’m always willing to engage with people with whom I disagree, if the conversation is substantive and proceeds in good faith. I may disagree with Don, but I’m always interested in what he has to say.

Recently Don watched the debate I had with William Lane Craig on “God and Cosmology.” I think these remarks from a devoted Christian who understands the cosmology very well will be of interest to people on either side of the debate.


Open letter to Sean Carroll and William Lane Craig:

I just ran across your debate at the 2014 Greer-Heard Forum, and greatly enjoyed listening to it. Since my own views are often a combination of one or the other of yours (though they also often differ from both of yours), I thought I would give some comments.

I tend to be skeptical of philosophical arguments for the existence of God, since I do not believe there are any that start with assumptions universally accepted. My own attempt at what I call the Optimal Argument for God (one, two, three, four) certainly makes assumptions that only a small fraction of people, and perhaps even only a small fraction of theists, believe in, such as my assumption that the world is the best possible. You know that well, Sean, from my provocative seminar at Caltech in November on “Cosmological Ontology and Epistemology” that included this argument at the end.

I mainly think philosophical arguments might be useful for motivating someone to think about theism in a new way and perhaps raise the prior probability someone might assign to theism. I do think that if one assigns theism not too low a prior probability, the historical evidence for the life, teachings, death, and resurrection of Jesus can lead to a posterior probability for theism (and for Jesus being the Son of God) being quite high. But if one thinks a priori that theism is extremely improbable, then the historical evidence for the Resurrection would be discounted and not lead to a high posterior probability for theism.

I tend to favor a Bayesian approach in which one assigns prior probabilities based on simplicity and then weights these by the likelihoods (the probabilities that different theories assign to our observations) to get, when the product is normalized by dividing by the sum of the products for all theories, the posterior probabilities for the theories. Of course, this is an idealized approach, since we don’t yet have _any_ plausible complete theory for the universe to calculate the conditional probability, given the theory, of any realistic observation.
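
Written out, the weighting described here is just Bayes’ theorem: for observed data D and theories T_i with prior probabilities P(T_i), the posterior probability of theory T_i is

P(T_i \mid D) = \frac{P(D \mid T_i)\,P(T_i)}{\sum_j P(D \mid T_j)\,P(T_j)} ,

where P(D \mid T_i) is the likelihood of the data under theory T_i.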

For me, when I consider evidence from cosmology and physics, I find it remarkable that it seems consistent with all we know that the ultimate theory might be extremely simple and yet lead to sentient experiences such as ours. A Bayesian analysis with Occam’s razor to assign simpler theories higher prior probabilities would favor simpler theories, but the observations we do make preclude the simplest possible theories (such as the theory that nothing concrete exists, or the theory that all logically possible sentient experiences occur with equal probability, which would presumably make ours have zero probability in this theory if there are indeed an infinite number of logically possible sentient experiences). So it seems mysterious why the best theory of the universe (which we don’t have yet) may be extremely simple but yet not maximally simple. I don’t see that naturalism would explain this, though it could well accept it as a brute fact.

One might think that adding the hypothesis that the world (all that exists) includes God would make the theory for the entire world more complex, but it is not obvious that is the case, since it might be that God is even simpler than the universe, so that one would get a simpler explanation starting with God than starting with just the universe. But I agree with your point, Sean, that theism is not very well defined, since for a complete theory of a world that includes God, one would need to specify the nature of God.

For example, I have postulated that God loves mathematical elegance, as well as loving to create sentient beings, so something like this might explain both why the laws of physics, and the quantum state of the universe, and the rules for getting from those to the probabilities of observations, seem much simpler than they might have been, and why there are sentient experiences with a rather high degree of order. However, I admit there is a lot of logically possible variation on what God’s nature could be, so that it seems to me that at least we humans have to take that nature as a brute fact, analogous to the way naturalists would have to take the laws of physics and other aspects of the natural universe as brute facts. I don’t think either theism or naturalism solves this problem, so it seems to me rather a matter of faith which makes more progress toward solving it. That is, theism per se cannot deduce from purely a priori reasoning the full nature of God (e.g., when would He prefer to maintain elegant laws of physics, and when would He prefer to cure someone from cancer in a truly miraculous way that changes the laws of physics), and naturalism per se cannot deduce from purely a priori reasoning the full nature of the universe (e.g., what are the dynamical laws of physics, what are the boundary conditions, what are the rules for getting probabilities, etc.).

In view of these beliefs of mine, I am not convinced that most philosophical arguments for the existence of God are very persuasive. In particular, I am highly skeptical of the Kalam Cosmological Argument, which I shall quote here from one of your slides, Bill:

1. If the universe began to exist, then there is a transcendent cause which brought the universe into existence.
2. The universe began to exist.
3. Therefore, there is a transcendent cause which brought the universe into existence.

I do not believe that the first premise is metaphysically necessary, and I am also not at all sure that our universe had a beginning. …


Guest Post: An Interview with Jamie Bock of BICEP2

If you’re reading this you probably know about the BICEP2 experiment, a radio telescope at the South Pole that measured a particular polarization signal known as “B-modes” in the cosmic microwave background radiation. Cosmologists were very excited at the prospect that the B-modes were the imprint of gravitational waves originating from a period of inflation in the primordial universe; now, with more data from the Planck satellite, it seems plausible that the signal is mostly due to dust in our own galaxy. The measurements that the team reported were completely on-target, but our interpretation of them has changed — we’re still looking for direct evidence for or against inflation.

Here I’m very happy to publish an interview that was carried out with Jamie Bock, a professor of physics at Caltech and a senior research scientist at JPL, who is one of the leaders of the BICEP2 collaboration. It’s a unique look inside the workings of an incredibly challenging scientific effort.


New Results from BICEP2: An Interview with Jamie Bock

What does the new data from Planck tell you? What do you know now?

A scientific race has been under way for more than a decade among a dozen or so experiments trying to measure B-mode polarization, a telltale signature of gravitational waves produced from the time of inflation. Last March, BICEP2 reported a B-mode polarization signal, a twisty polarization pattern measured in a small patch of sky. The amplitude of the signal we measured was surprisingly large, exceeding what we expected for galactic emission. This implied we were seeing a large gravitational wave signal from inflation.

We ruled out galactic synchrotron emission, which comes from electrons spiraling in the magnetic field of the galaxy, using low-frequency data from the WMAP [Wilkinson Microwave Anisotropy Probe] satellite. But there were no data available on polarized galactic dust emission, and we had to use models. These models weren’t starting from zero; they were built on well-known maps of unpolarized dust emission, and, by and large, they predicted that polarized dust emission was a minor constituent of the total signal.

Obviously, the answer here is of great importance for cosmology, and we have always wanted a direct test of galactic emission using data in the same piece of sky so that we can test how much of the BICEP2 signal is cosmological, representing gravitational waves from inflation, and how much is from galactic dust. We did exactly that with galactic synchrotron emission from WMAP because the data were public. But with galactic dust emission, we were stuck, so we initiated a collaboration with the Planck satellite team to estimate and subtract polarized dust emission. Planck has the world’s best data on polarized emission from galactic dust, measured over the entire sky in multiple spectral bands. However, the polarized dust maps were only recently released.

On the other side, BICEP2 gives us the highest-sensitivity data available at 150 GHz to measure the CMB. Interestingly, the two measurements are stronger in combination. We get a big boost in sensitivity by putting them together. Also, the detectors for both projects were designed, built, and tested at Caltech and JPL, so I had a personal interest in seeing that these projects worked together. I’m glad to say the teams worked efficiently and harmoniously together.

What we found is that when we subtract the galaxy, we just see noise; no signal from the CMB is detectable. Formally we can say at least 40 percent of the total BICEP2 signal is dust and less than 60 percent is from inflation.

How do these new data shape your next steps in exploring the earliest moments of the universe?

It is the best we can do right now, but unfortunately the result with Planck is not a very strong test of a possible gravitational wave signal. This is because the process of subtracting galactic emission effectively adds more noise into the analysis, and that noise limits our conclusions. While the inflationary signal is less than 60 percent of the total, that is not terribly informative, leaving many open questions. For example, it is quite possible that the noise prevents us from seeing part of the signal that is cosmological. It is also possible that all of the BICEP2 signal comes from the galaxy. Unfortunately, we cannot say more because the data are simply not precise enough. Our ability to measure polarized galactic dust emission in particular is frustratingly limited.

Figure 1: Maps of CMB polarization produced by BICEP2 and Keck Array. The maps show the ‘E-mode’ polarization pattern, a signal from density variations in the CMB, not gravitational waves. The polarization is given by the length and direction of the lines, with a coloring to better show the sign and amplitude of the E-mode signal. The tapering toward the edges of the map is a result of how the instruments observed this region of sky. While the E-mode pattern is about 6 times brighter than the B-mode signal, it is still quite faint. Tiny variations of only 1 millionth of a degree kelvin are faithfully reproduced across these multiple measurements at 150 GHz, and in new Keck data at 95 GHz still under analysis. The very slight color shift visible between 150 and 95 GHz is due to the change in the beam size.

However, there is good news to report. In this analysis, we added new data obtained in 2012–13 from the Keck Array, an instrument with five telescopes and the successor to BICEP2 (see Fig. 1). These data are at the same frequency band as BICEP2—150 GHz—so while they don’t help subtract the galaxy, they do increase the total sensitivity. The Keck Array clearly detects the same signal detected by BICEP2. In fact, every test we can do shows the two are quite consistent, which demonstrates that we are doing these difficult measurements correctly (see Fig. 2). The BICEP2/Keck maps are also the best ever made, with enough sensitivity to detect signals that are a tiny fraction of the total.

Figure 2: A power spectrum of the B-mode polarization signal that plots the strength of the signal as a function of angular frequency. The data show a signal significantly above what is expected for a universe without gravitational waves, given by the red line. The excess peaks at angular scales of about 2 degrees. The independent measurements of BICEP2 and Keck Array shown in red and blue are consistent within the errors, and their combination is shown in black. Note the sets of points are slightly shifted along the x-axis to avoid overlaps.

In addition, Planck’s measurements over the whole sky show the polarized dust is fairly well behaved. For example, the polarized dust has nearly the same spectrum across the sky, so there is every reason to expect we can measure and remove dust cleanly.

To better subtract the galaxy, we need better data. We aren’t going to get more data from Planck because the mission has finished. The best way is to measure the dust ourselves by adding new spectral bands to our own instruments. We are well along in this process already. We added a second band to the Keck Array last year at 95 GHz and a third band this year at 220 GHz. We just installed the new BICEP3 instrument at 95 GHz at the South Pole (see Fig. 3). BICEP3 is a single telescope that will soon be as powerful as all five Keck Array telescopes put together. At 95 GHz, Keck and BICEP3 should surpass BICEP2’s 150 GHz sensitivity by the end of this year, and the two will be a very powerful combination indeed. If we switch the Keck Array entirely over to 220 GHz starting next year, we can get a third band to a similar depth.
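
As a cartoon of why extra bands help: suppose each band power is the sum of a CMB piece, which is the same in every band in CMB units, and a dust piece that grows with frequency. With two or more bands you can solve for both pieces. The dust scaling factors in the sketch below are made up for illustration; they are not the measured dust spectrum.

```python
# Cartoon of multifrequency component separation.  In CMB (thermodynamic)
# units the CMB amplitude is the same in every band, while dust brightens
# toward higher frequencies.  The dust scalings below are invented numbers,
# not the measured dust spectrum.
import numpy as np

dust_scale = np.array([0.4, 1.0, 3.0])   # assumed dust amplitudes at 95, 150, 220 GHz
cmb_scale  = np.ones(3)                  # CMB amplitude is band-independent

cmb_true, dust_true = 0.2, 1.0           # invented "true" amplitudes
band_powers = cmb_true * cmb_scale + dust_true * dust_scale

# Least-squares fit for the two amplitudes given the three band powers.
A = np.column_stack([cmb_scale, dust_scale])
cmb_fit, dust_fit = np.linalg.lstsq(A, band_powers, rcond=None)[0]
print(cmb_fit, dust_fit)   # recovers 0.2 and 1.0 exactly in this noiseless toy
```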

Figure 3: BICEP3 installed and carrying out calibration measurements off a reflective mirror placed above the receiver. The instrument is housed within a conical reflective ground shield to minimize the brightness contrast between the warm earth and cold space. This picture was taken at the beginning of the winter season, with no physical access to the station for the next 8 months, when BICEP3 will conduct astronomical observations. (Credit: Sam Harrison)

Finally, this January the SPIDER balloon experiment, which is also searching the CMB for evidence of inflation, completed its first flight, outfitted with comparable sensitivity at 95 and 150 GHz. Because SPIDER floats above the atmosphere (see Fig. 4), we can also measure the sky on larger spatial scales. This all adds up to make the coming years very exciting.

Figure 4: View of the earth and the edge of space, taken from an optical camera on the SPIDER gondola at float altitude shortly after launch. Clearly visible below is Ross Island, with volcanoes Mt. Erebus and Mt. Terror and the McMurdo Antarctic base, the Royal Society mountain range to the left, and the edge of the Ross permanent ice shelf. (Credit: SPIDER team)

Why did you make the decision last March to release results? In retrospect, do you regret it?

We knew at the time that any news of a B-mode signal would cause a great stir. We started working on the BICEP2 data in 2010, and our standard for putting out the paper was that we were certain the measurements themselves were correct. It is important to point out that, throughout this episode, our measurements basically have not changed. As I said earlier, the initial BICEP2 measurement agrees with new data from the Keck Array, and both show the same signal. For all we know, the B-mode polarization signal measured by BICEP2 may contain a significant cosmological component—that’s what we need to find out.

The question really is, should we have waited until better data were available on galactic dust? Personally, I think we did the right thing. The field needed to be able to react to our data and test the results independently, as we did in our collaboration with Planck. This process hasn’t ended; it will continue with new data. Also, the searches for inflationary gravitational waves are influenced by these findings, and it is clear that all of the experiments in the field need to focus more resources on measuring the galaxy.

How confident are you that you will ultimately find conclusive evidence for primordial gravitational waves and the signature of cosmic inflation?

I don’t have an opinion about whether or not we will find a gravitational wave signal—that is why we are doing the measurement! But any result is so significant for cosmology that it has to be thoroughly tested by multiple groups. I am confident that the measurements we have made to date are robust, and the new data we need to subtract the galaxy more accurately are starting to pour forth. The immediate path forward is clear: we know how to make these measurements at 150 GHz, and we are already applying the same process to the new frequencies. Doing the measurements ourselves also means they are uniform, so we understand all of the errors, which, in the end, are just as important.

What will it mean for our understanding of the universe if you don’t find the signal?

The goal of this program is to learn how inflation happened. Inflation requires matter-energy with an unusual repulsive property in order to rapidly expand the universe. The physics is almost certainly new and exotic, at energies too high to be accessed with terrestrial particle accelerators. CMB measurements are one of the few ways to get at the inflationary physics, and we need to squeeze them for all they are worth. A gravitational wave signal is very interesting because it tells us about the physical process behind inflation. A detection of the polarization signal at a high level means that certain models of inflation, perhaps along the lines of the models first developed, are a good explanation.

But here again is the real point: we also learn more about inflation if we can rule out polarization from gravitational waves. No detection at 5 percent or less of the total BICEP2 signal means that inflation is likely more complicated, perhaps involving multiple fields, although there are certainly other possibilities. Either way is a win, and we’ll find out more about what caused the birth of the universe 13.8 billion years ago.

Our team dedicated itself to the pursuit of inflationary polarization 15 years ago fully expecting a long and difficult journey. It is exciting, after all this work, to be at this stage where the polarization data are breaking into new ground, providing more information about gravitational waves than we learned before. The BICEP2 signal was a surprise, and its ultimate resolution is still a work in progress. The data we need to address these questions about inflation are within sight, and whatever the answers are, they are going to be interesting, so stay tuned.



Guest Post: Chip Sebens on the Many-Interacting-Worlds Approach to Quantum Mechanics

I got to know Charles “Chip” Sebens back in 2012, when he emailed to ask if he could spend the summer at Caltech. Chip is a graduate student in the philosophy department at the University of Michigan, and like many philosophers of physics, knows the technical background behind relativity and quantum mechanics very well. Chip had funding from NSF, and I like talking to philosophers, so I said why not?

We had an extremely productive summer, focusing on our different stances toward quantum mechanics. At the time I was a casual adherent of the Everett (many-worlds) formulation, but had never thought about it carefully. Chip was skeptical, in particular because he thought there were good reasons to believe that EQM should predict equal probabilities for being on any branch of the wave function, rather than the amplitude-squared probabilities of the real-world Born Rule. Fortunately, I won, although the reason I won was mostly because Chip figured out what was going on. We ended up writing a paper explaining why the Born Rule naturally emerges from EQM under some simple assumptions. Now I have graduated from being a casual adherent to a slightly more serious one.

But that doesn’t mean Everett is right, and it’s worth looking at other formulations. Chip was good enough to accept my request that he write a guest blog post about another approach that’s been in the news lately: a “Newtonian” or “Many-Interacting-Worlds” formulation of quantum mechanics, which he has helped to pioneer.


In Newtonian physics objects always have definite locations. They are never in two places at once. To determine how an object will move one simply needs to add up the various forces acting on it and from these calculate the object’s acceleration. This framework is generally taken to be inadequate for explaining the quantum behavior of subatomic particles like electrons and protons. We are told that quantum theory requires us to revise this classical picture of the world, but what picture of reality is supposed to take its place is unclear. There is little consensus on many foundational questions: Is quantum randomness fundamental or a result of our ignorance? Do electrons have well-defined properties before measurement? Is the Schrödinger equation always obeyed? Are there parallel universes?

Some of us feel that the theory is understood well enough to be getting on with. Even though we might not know what electrons are up to when no one is looking, we know how to apply the theory to make predictions for the results of experiments. Much progress has been made―observe the wonder of the standard model―without answering these foundational questions. Perhaps one day with insight gained from new physics we can return to these basic questions. I will call those with such a mindset the doers. Richard Feynman was a doer:

“It will be difficult. But the difficulty really is psychological and exists in the perpetual torment that results from your saying to yourself, ‘But how can it be like that?’ which is a reflection of uncontrolled but utterly vain desire to see it in terms of something familiar. I will not describe it in terms of an analogy with something familiar; I will simply describe it. … I think I can safely say that nobody understands quantum mechanics. … Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain’, into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.”

-Feynman, The Character of Physical Law (chapter 6, pg. 129)

In contrast to the doers, there are the dreamers. Dreamers, although they may often use the theory without worrying about its foundations, are unsatisfied with standard presentations of quantum mechanics. They want to know “how it can be like that” and have offered a variety of alternative ways of filling in the details. Doers denigrate the dreamers for being unproductive, getting lost “down the drain.” Dreamers criticize the doers for giving up on one of the central goals of physics, understanding nature, to focus exclusively on another, controlling it. But even by the lights of the doer’s primary mission―being able to make accurate predictions for a wide variety of experiments―there are reasons to dream:

“Suppose you have two theories, A and B, which look completely different psychologically, with different ideas in them and so on, but that all consequences that are computed from each are exactly the same, and both agree with experiment. … how are we going to decide which one is right? There is no way by science, because they both agree with experiment to the same extent. … However, for psychological reasons, in order to guess new theories, these two things may be very far from equivalent, because one gives a man different ideas from the other. By putting the theory in a certain kind of framework you get an idea of what to change. … Therefore psychologically we must keep all the theories in our heads, and every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics.”

-Feynman, The Character of Physical Law (chapter 7, pg. 168)

In the spirit of finding alternative versions of quantum mechanics―whether they agree exactly or only approximately on experimental consequences―let me describe an exciting new option which has recently been proposed by Hall, Deckert, and Wiseman (in Physical Review X) and myself (forthcoming in Philosophy of Science), receiving media attention in: Nature, New Scientist, Cosmos, Huffington Post, Huffington Post Blog, FQXi podcast… Somewhat similar ideas have been put forward by Boström, Schiff and Poirier, and Tipler. The new approach seeks to take seriously quantum theory’s hydrodynamic formulation, which was developed by Erwin Madelung in the 1920s. Although the proposal is distinct from the many-worlds interpretation, it also involves the postulation of parallel universes. The proposed multiverse picture is not the quantum mechanics of college textbooks, but just because the theory looks so “completely different psychologically” it might aid the development of new physics or new calculational techniques (even if this radical picture of reality ultimately turns out to be incorrect).

Let’s begin with an entirely reasonable question a dreamer might ask about quantum mechanics.

“I understand water waves and sound waves. These waves are made of particles. A sound wave is a compression wave that results from particles of air bunching up in certain regions and vacating others. Waves play a central role in quantum mechanics. Is it possible to understand these waves as being made of some things?”

There are a variety of reasons to think the answer is no, but they can be overcome. In quantum mechanics, the state of a system is described by a wave function Ψ. Consider a single particle in the famous double-slit experiment. In this experiment the one particle initially passes through both slits (in its quantum way) and then at the end is observed hitting somewhere on a screen. The state of the particle is described by a wave function which assigns a complex number to each point in space at each time. The wave function is initially centered on the two slits. Then, as the particle approaches the detection screen, an interference pattern emerges; the particle behaves like a wave.

Figure 1: The evolution of Ψ with the amount of color proportional to the amplitude (a.k.a. magnitude) and the hue indicating the phase of Ψ.
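
For readers who like to play with this, here is a minimal one-dimensional stand-in for the evolution shown in Figure 1: two Gaussian packets (the “slits”) evolving as a free particle, with interference appearing in |Ψ|². All units, widths, and times are arbitrary toy values.

```python
# Minimal 1D toy version of the double-slit evolution.  Units with
# hbar = m = 1; all lengths and times are arbitrary.
import numpy as np

N  = 2048
x  = np.linspace(-40.0, 40.0, N)
dx = x[1] - x[0]
k  = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

a, w = 3.0, 1.0                                  # "slit" separation and width
psi0 = np.exp(-(x - a)**2 / (2 * w**2)) + np.exp(-(x + a)**2 / (2 * w**2))
psi0 = psi0.astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)    # normalize so |Psi|^2 integrates to 1

def evolve(psi_initial, t):
    """Free-particle evolution: multiply by exp(-i k^2 t / 2) in k-space."""
    return np.fft.ifft(np.fft.fft(psi_initial) * np.exp(-0.5j * k**2 * t))

for t in (0.0, 5.0, 20.0):
    prob = np.abs(evolve(psi0, t))**2            # the |Psi|^2 of Figure 2
    print(f"t = {t:4.1f}   peak of |Psi|^2 = {prob.max():.4f}")
# Plotting prob against x at the later times shows the interference bands.
```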

There’s a problem with thinking of the wave as made of something: the wave function assigns strange complex numbers to points in space instead of familiar real numbers. This can be resolved by focusing on |Ψ|², the squared amplitude of the wave function, which is always a positive real number.

Figure 2: The evolution of |Ψ|².

We normally think of |Ψ|² as giving the probability of finding the particle somewhere. But, to entertain the dreamer’s idea about quantum waves, let’s instead think of |Ψ|² as giving a density of particles. Whereas figure 2 is normally interpreted as showing the evolution of the probability distribution for a single particle, instead understand it as showing the distribution of a large number of particles: initially bunched up at the two slits and later spread out in bands at the detector (figure 3). Although I won’t go into the details here, we can actually understand the way the wave changes in time as resulting from interactions between these particles, from the particles pushing each other around. The Schrödinger equation, which is normally used to describe the way the wave function changes, is then viewed as a consequence of this interaction.

Figure 3: The evolution of particles with |Ψ|² as the density. This animation is meant to help visualize the idea, but don’t take the precise motions of the particles too seriously. Although we know how the particles move en masse, we don’t know precisely how individual particles move.
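
One way to produce an animation like Figure 3, in the spirit of the Madelung hydrodynamic picture mentioned above, is to draw sample configurations from the initial |Ψ|² and push them along the local probability flow. The sketch below reuses x, dx, psi0, and evolve from the previous snippet; it is only an illustration of the idea, not the full interacting-worlds dynamics of the papers cited here.

```python
# Sketch only: sample "worlds" from the initial |Psi|^2 and advect them with
# the hydrodynamic velocity v(x, t) = Im(dPsi/dx / Psi) (hbar = m = 1).
# Reuses x, dx, psi0, and evolve() from the previous snippet.
import numpy as np

rng = np.random.default_rng(1)

# Inverse-transform sampling of initial positions from |psi0|^2.
p0  = np.abs(psi0)**2
cdf = np.cumsum(p0) * dx
cdf /= cdf[-1]
worlds = np.interp(rng.random(5000), cdf, x)

dt, t = 0.01, 0.0
psi = psi0.copy()
for _ in range(2000):                                   # evolve out to t = 20
    dpsi = np.gradient(psi, dx)
    safe = np.where(np.abs(psi) > 1e-12, psi, 1e-12)    # avoid dividing by ~0 in the tails
    v = np.imag(dpsi / safe)                            # velocity field on the grid
    worlds += np.interp(worlds, x, v) * dt              # move each sample with the flow
    t += dt
    psi = evolve(psi0, t)                               # wave function at the new time

# A histogram of `worlds` now traces the interference bands in |Psi(x, t)|^2.
```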

In solving the problem about complex numbers, we’ve created two new problems: How can there really be a large number of particles if we only ever see one show up on the detector at the end? If |Ψ|² is now telling us about densities and not probabilities, what does it have to do with probabilities?

Removing a simplification in the standard story will help. Instead of focusing on the wave function of a single particle, let’s consider all particles at once. To describe the state of a collection of particles it turns out we can’t just give each particle its own wave function. This would miss out on an important feature of quantum mechanics: entanglement. The state of one particle may be inextricably linked to the state of another. Instead of having a wave function for each particle, a single universal wave function describes the collection of particles.

The universal wave function takes as input a position for each particle as well as the time. The position of a single particle is given by a point in familiar three-dimensional space. The positions of all particles can be given by a single point in a very high dimensional space, configuration space: the first three dimensions of configuration space give the position of particle 1, the next three give the position of particle 2, etc. The universal wave function Ψ assigns a complex number to each point of configuration space at each time. |Ψ|² then assigns a positive real number to each point of configuration space (at each time). Can we understand this as a density of some things?
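
Concretely, a configuration is just all the positions strung together; for example, two particles in ordinary three-dimensional space correspond to a single point in a six-dimensional configuration space. The coordinates below are arbitrary example values.

```python
# A point in configuration space: the positions of all particles, concatenated.
import numpy as np

particle_1 = np.array([0.1, -2.3, 0.7])    # (x, y, z) of particle 1, example values
particle_2 = np.array([1.4,  0.0, -5.2])   # (x, y, z) of particle 2, example values

config_point = np.concatenate([particle_1, particle_2])  # one point in R^6
print(config_point)          # [ 0.1 -2.3  0.7  1.4  0.  -5.2]
print(config_point.shape)    # (6,) -- three dimensions per particle
```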

A single point in configuration space specifies the locations of all particles, a way all things might be arranged, a way the world might be. If there is only one world, then only one point in configuration space is special: it accurately captures where all the particles are. If there are many worlds, then many points in configuration space are special: each accurately captures where the particles are in some world. We could describe how densely packed these special points are, which regions of configuration space contain many worlds and which regions contain few. We can understand |Ψ|² as giving the density of worlds in configuration space. This might seem radical, but it is the natural extension of the answer to the dreamer’s question depicted in figure 3.

Now that we have moved to a theory with many worlds, the first problem above can be answered: The reason that we only see one particle hit the detector in the double-slit experiment is that only one of the particles in figure 3 is in our world. When the particles hit the detection screen at the end we only see our own. The rest of the particles, though not part of our world, do interact with ours. They are responsible for the swerves in our particle’s trajectory. (Because of this feature, Hall, Deckert, and Wiseman have coined the name “Many Interacting Worlds” for the approach.)

Figure 4: The evolution of particles in figure 3 with the particle that lives in our world highlighted.

No matter how knowledgeable and observant you are, you cannot know precisely where every single particle in the universe is located. Put another way, you don’t know where our world is located in configuration space. Since the regions of configuration space where |Ψ|² is large have more worlds in them and more people like you wondering which world they’re in, you should expect to be in a region of configuration space where |Ψ|² is large. (Aside: this strategy of counting each copy of oneself as equally likely is not so plausible in the old many-worlds interpretation.) Thus the connection between |Ψ|² and probability is not a fundamental postulate of the theory, but a result of proper reasoning given this picture of reality.
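
A quick numerical check of that last step, reusing the toy ensemble from the earlier sketches: if worlds are spread with density |Ψ|², the fraction of them whose particle lands in some region matches the usual Born-rule probability for that region.

```python
# Toy check, reusing worlds, psi, x, and dx from the sketches above:
# the fraction of worlds in a region ~ the Born-rule probability for it.
import numpy as np

region = x > 2.0                                        # "particle detected at x > 2"
born_probability = np.sum(np.abs(psi[region])**2) * dx  # integral of |Psi|^2 over the region
world_fraction   = np.mean(worlds > 2.0)                # fraction of sampled worlds there

print(f"Born rule:      {born_probability:.3f}")
print(f"World counting: {world_fraction:.3f}")          # approximately equal, up to the
                                                        # crude time stepping used above
```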

There is of course much more to the story than what’s been said here. One particularly intriguing consequence of the new approach is that the three sentence characterization of Newtonian physics with which this post began is met. In that sense, this theory makes quantum mechanics look like classical physics. For this reason, in my paper I gave the theory the name “Newtonian Quantum Mechanics.”



Guest Post by Alessandra Buonanno: Nobel Laureates Call for Release of Iranian Student Omid Kokabee

Usually I start guest posts by remarking on what a pleasure it is to host an article on the topic being discussed. Unfortunately this is a sadder occasion: protesting the unfair detention of Omid Kokabee, a physics graduate student at the University of Texas, who is being imprisoned by the government of Iran. Alessandra Buonanno, who wrote the post, is a distinguished gravitational theorist at the Max Planck Institute for Gravitational Physics and the University of Maryland, as well as a member of the Committee on International Freedom of Scientists of the American Physical Society. This case should be important to everyone, but it’s especially important for physicists to work to protect the rights of students who travel from abroad to study our subject.


Omid Kokabee was arrested at the airport of Teheran in January 2011, just before taking a flight back to the University of Texas at Austin, after spending the winter break with his family. He was accused of communicating with a hostile government and, after a trial in which he was denied contact with a lawyer, he was sentenced to 10 years in Teheran’s Evin prison.

According to a letter written by Omid Kokabee, he was asked to work on classified research, and his arrest and detention were a consequence of his refusal. Since his detention, Kokabee has continued to assert his innocence, claiming that several human rights violations affected his interrogation and trial.

Since 2011, we, the Committee on International Freedom of Scientists (CIFS) of the American Physical Society, have protested the imprisonment of Omid Kokabee. Although this case has received continuous support from several scientific and international human rights organizations, the government of Iran has refused to release Kokabee.


Omid Kokabee has received two prestigious awards:

  • The American Physical Society awarded him the Andrei Sakharov Prize “For his courage in refusing to use his physics knowledge to work on projects that he deemed harmful to humanity, in the face of extreme physical and psychological pressure.”
  • The American Association for the Advancement of Science awarded Kokabee the Scientific Freedom and Responsibility Prize.

Amnesty International (AI) considers Kokabee a prisoner of conscience and has requested his immediate release.

Recently, the Committee of Concerned Scientists (CCS), AI, and CIFS prepared a letter addressed to the Iranian Supreme Leader Ali Khamenei asking that Omid Kokabee be released immediately. The letter was signed by 31 Nobel Prize laureates. (An additional 13 Nobel Laureates have signed this letter since the Nature blog post. See also this update from APS.)

Unfortunately, over the past month Kokabee’s health has deteriorated, and he has been denied proper medical care. In response, the President of APS, Malcolm Beasley, has written a letter to the Iranian President Rouhani calling for a medical furlough for Omid Kokabee so that he can receive proper medical treatment. AI has also taken further steps and has requested urgent medical care for Kokabee.

Very recently, Iran’s supreme court nullified the original conviction of Omid Kokabee and agreed to reconsider the case. Although this is positive news, it is not clear when the new trial will start. Considering Kokabee’s health conditions, it is very important that he be granted a medical furlough as soon as possible.

More public engagement and awareness are needed to resolve this unacceptable violation of human rights and of the freedom of scientific research. You can help by tweeting or blogging about it and by responding to this Urgent Action that AI has issued. Please note that the date on the Urgent Action is there to create an avalanche effect; it is neither a deadline nor the end of the action.

Alessandra Buonanno for the American Physical Society’s Committee on International Freedom of Scientists (CIFS).



Guest Post: Max Tegmark on Cosmic Inflation

Most readers will doubtless be familiar with Max Tegmark, the MIT cosmologist who successfully balances down-and-dirty data analysis of large-scale structure and the microwave background with more speculative big-picture ideas about quantum mechanics and the nature of reality. Max has a new book out — Our Mathematical Universe: My Quest for the Ultimate Nature of Reality — in which he takes the reader on a journey from atoms and the solar system to a many-layered multiverse.

In the wake of the recent results indicating gravitational waves in the cosmic microwave background, here Max delves into the idea of inflation — what it really does, and what some of the implications are.


Thanks to the relentless efforts of the BICEP2 team during balmy -100F half-year-long nights at the South Pole, inflation has for the first time become not only something economists worry about, but also a theory for our cosmic origins that’s really hard to dismiss. As Sean has reported here on this blog, the implications are huge. Of course we need independent confirmation of the BICEP2 results before uncorking the champagne, but in the mean time, we’re forced to take quite seriously that everything in our observable universe was once smaller than a billionth the size of a proton, containing less mass than an apple, and doubled its size at least 80 times, once every hundredth of a trillionth of a trillionth of a trillionth of a second, until it was more massive than our entire observable universe.

We still don’t know what, if anything, came before inflation, but this is nonetheless a huge step forward in understanding our cosmic origins. Without inflation, we had to explain why there were over a million trillion trillion trillion trillion kilograms of stuff in existence, carefully arranged to be almost perfectly uniform while flying apart at huge speeds that were fine-tuned to 24 decimal places. The traditional answer in the textbooks was that we had no clue why things started out this way, and should simply assume it. Inflation puts the “bang” into our Big Bang by providing a physical mechanism for creating all those kilograms and even explains why they were expanding in such a special way. The amount of mass needed to get inflation started is less than that in an apple, so even though inflation doesn’t explain the origin of everything, there’s a lot less stuff left to explain the origin of.

If we take inflation seriously, then we need to stop saying that inflation happened shortly after our Big Bang, because it happened before it, creating it. It is inappropriate to define our Hot Big Bang as the beginning of time, because we don’t know whether time actually had a beginning, and because the early stages of inflation were neither strikingly hot nor big nor much of a bang. As that tiny speck of inflating substance doubled its diameter 80 times, the velocities with which its parts were flying away from one another increased by the same factor 2^80. Its volume increased by that factor cubed, i.e., 2^240, and so did its mass, since its density remained approximately constant. The temperature of any particles left over from before inflation soon dropped to near zero, with the only remaining heat coming from the same Hawking/Unruh quantum fluctuations that generated the gravitational waves.
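
These doubling numbers are easy to check with a back-of-the-envelope calculation. In the sketch below, the apple mass and the mass figure for the observable universe are rough assumed values, not precise data.

```python
# Back-of-the-envelope check of the doubling numbers above.  The apple mass
# and the observable-universe mass are rough assumed figures.
doublings = 80
length_factor = 2 ** doublings      # diameter grows by 2**80 ~ 1.2e24
volume_factor = length_factor ** 3  # volume (and mass, at roughly constant density) grows by 2**240

apple_mass_kg    = 0.1              # assumed ~100 gram apple
universe_mass_kg = 1e54             # "over a million trillion trillion trillion trillion kilograms"

print(f"2**80  = {length_factor:.2e}")
print(f"2**240 = {volume_factor:.2e}")
print(f"apple-mass seed after 80 doublings: {apple_mass_kg * volume_factor:.2e} kg")
print(f"quoted mass of the observable universe: {universe_mass_kg:.2e} kg")
# Even a 100 gram seed, doubled in size 80 times at roughly constant density,
# ends up vastly more massive than the observable universe.
```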

Taken together, this in my opinion means that the early stages of inflation are better thought of not as a Hot Big Bang but as a Cold Little Swoosh, because at that time our universe was not that hot (getting a thousand times hotter once inflation ended), not that big (less massive than an apple and less than a billionth of the size of a proton) and not much of a bang (with expansion velocities a trillion trillion times slower than after inflation). In other words, a Hot Big Bang did not precede and cause inflation. Instead, a Cold Little Swoosh preceded and caused our Hot Big Bang.

Since the BICEP2 breakthrough is generating such huge interest in inflation, I’ve decided to post my entire book chapter on inflation here so that you can get an up-to-date and self-contained account of what it’s all about. Here are some of the questions answered:

  • What does the theory of inflation really predict?
  • What physics does it assume?
  • Doesn’t creation of the matter around us from almost nothing violate energy conservation?
  • How could an infinite space get created in a finite time?
  • How is this linked to the BICEP2 signal?
  • What remarkable prize did Alan Guth win in 2005?

