Joe Polchinski’s Memories, and a Mark Wise Movie

Joe Polchinski, a universally-admired theoretical physicist at the Kavli Institute for Theoretical Physics in Santa Barbara, recently posted a 150-page writeup of his memories of doing research over the years.

Memories of a Theoretical Physicist
Joseph Polchinski

While I was dealing with a brain injury and finding it difficult to work, two friends (Derek Westen, a friend of the KITP, and Steve Shenker, with whom I was recently collaborating), suggested that a new direction might be good. Steve in particular regarded me as a good writer and suggested that I try that. I quickly took to Steve’s suggestion. Having only two bodies of knowledge, myself and physics, I decided to write an autobiography about my development as a theoretical physicist. This is not written for any particular audience, but just to give myself a goal. It will probably have too much physics for a nontechnical reader, and too little for a physicist, but perhaps there will be different things for each. Parts may be tedious. But it is somewhat unique, I think, a blow-by-blow history of where I started and where I got to. Probably the target audience is theoretical physicists, especially young ones, who may enjoy comparing my struggles with their own. Some disclaimers: This is based on my own memories, jogged by the arXiv and Inspire. There will surely be errors and omissions. And note the title: this is about my memories, which will be different for other people. Also, it would not be possible for me to mention all the authors whose work might intersect mine, so this should not be treated as a reference work.

As the piece explains, it’s a bittersweet project, as it was brought about by Joe struggling with a serious illness and finding it difficult to do physics. We all hope he fully recovers and gets back to leading the field in creative directions.

I had the pleasure of spending three years down the hall from Joe when I was a postdoc at the ITP (it didn’t have the “K” at that time). You’ll see my name pop up briefly in his article, sadly in the context of an amusing anecdote rather than an exciting piece of research, since I stupidly spent three years in Santa Barbara without collaborating with any of the brilliant minds on the faculty there. Not sure exactly what I was thinking.

Joe is of course a world-leading theoretical physicist, and his memories give you an idea why, while at the same time being very honest about setbacks and frustrations. His style has never been to jump on a topic while it was hot, but to think deeply about fundamental issues and look for connections others have missed. This approach led him to such breakthroughs as a new understanding of the renormalization group, the discovery of D-branes in string theory, and the possibility of firewalls in black holes. It’s not necessarily a method that would work for everyone, especially because it doesn’t necessarily lead to a lot of papers being written at a young age. (Others who somehow made this style work for them, and somehow survived, include Ken Wilson and Alan Guth.) But the purity and integrity of Joe’s approach to doing science is an example for all of us.

Somehow over the course of 150 pages Joe neglected to mention perhaps his greatest triumph, as a three-time guest blogger (one, two, three). Too modest, I imagine.

His memories make for truly compelling reading, at least for physicists — he’s an excellent stylist and pedagogue, but the intended audience is people who have already heard about the renormalization group. This kind of thoughtful but informal recollection is an invaluable resource, as you get to see not only the polished final product of a physics paper, but the twists and turns of how it came to be, especially the motivations underlying why the scientist chose to think about things one way rather than some other way.

(Idea: there is a wonderful online magazine called The Players’ Tribune, which gives athletes an opportunity to write articles expressing their views and experiences, e.g. the raw feelings after you are traded. It would be great to have something like that for scientists, or for academics more broadly, to write about the experiences [good and bad] of doing research. Young people in the field would find it invaluable, and non-scientists could learn a lot about how science really works.)

You also get to read about many of the interesting friends and colleagues of Joe’s over the years. A prominent one is my current Caltech colleague Mark Wise, a leading physicist in his own right (and someone I was smart enough to collaborate with — with age comes wisdom, or at least more wisdom than you used to have). Joe and Mark got to know each other as postdocs, and have remained friends ever since. When it came time for a scientific gathering to celebrate Joe’s 60th birthday, Mark contributed a home-made movie showing (in inimitable style) how much progress he had made over the years in the activities they had enjoyed together in their relative youth. And now, for the first time, that movie is available to the public. It’s seven minutes long, but don’t make the mistake of skipping the blooper reel that accompanies the end credits. Many thanks to Kim Boddy, the former Caltech student who directed and produced this lost masterpiece.

When it came time for his own 60th, Mark being Mark, he didn’t want the usual conference, and decided instead to gather physicist friends from over the years and take them to a local ice rink for a bout of curling. (Canadian heritage showing through.) Joe being Joe, this was an invitation he couldn’t resist, and we had a grand old time, free of any truly serious injuries.

We don’t often say it out loud, but one of the special privileges of being in this field is getting to know brilliant and wonderful people, and interacting with them over periods of many years. I owe Joe a lot — even if I wasn’t smart enough to collaborate with him when he was down the hall, I learned an enormous amount from his example, and often wonder how he would think about this or that issue in physics.

 

A Response to “On the time lags of the LIGO signals” (Guest Post)

This is a special guest post by Ian Harry, postdoctoral physicist at the Max Planck Institute for Gravitational Physics, Potsdam-Golm. You may have seen stories about a paper that recently appeared, which called into question whether the LIGO gravitational-wave observatory had actually detected signals from inspiralling black holes, as it had claimed. Ian’s post is an informal response to these claims, on behalf of the LIGO Scientific Collaboration. He argues that there are data-analysis issues that render the new paper, by James Creswell et al., incorrect. Happily, enough online tools are available that interested parties can investigate the question for themselves. Here’s Ian:


On 13 June 2017 a paper by Creswell et al., titled “On the time lags of the LIGO signals”, appeared on the arXiv. This paper calls into question the 5-sigma detection claim of GW150914 and the subsequent detections. In this short response I will refute these claims.

Who am I? I am a member of the LIGO collaboration. I work on the analysis of LIGO data, and for 10 years have been developing searches for compact binary mergers. The conclusions I draw here have been checked by a number of colleagues within the LIGO and Virgo collaborations. We are also in touch with the authors of the article to raise these concerns directly, and plan to write a more formal short paper for submission to the arXiv explaining in more detail the issues I mention below. In the interest of timeliness, and in response to numerous requests from outside of the collaboration, I am sharing these notes in the hope that they will clarify the situation.

In this article I will go into some detail to try to refute the claims of Creswell et al. Let me start, though, by giving a brief overview. Creswell et al. take LIGO data from the Hanford and Livingston observatories, made available through the LIGO Open Science Center, and perform a simple Fourier analysis on it. They find the noise to be correlated as a function of frequency. They also perform a time-domain analysis and claim that there are correlations between the noise in the two observatories, present even after removing the GW150914 signal from the data. These results are used to cast doubt on the reliability of the GW150914 observation. There are a number of reasons why this conclusion is incorrect:

  1. The frequency-domain correlations they see arise from the way they take the FFT of the filtered data. We have managed to demonstrate the same effect with simulated Gaussian noise.
  2. LIGO analyses use whitened data when searching for compact binary mergers such as GW150914. When the analysis of Creswell et al. is repeated on whitened data these effects are completely absent.
  3. Our 5-sigma significance comes from a procedure of repeatedly time-shifting the data, which is not invalidated if correlations of the type described in Creswell et al. are present.

Section II: The curious case of the Fourier phase correlations?

The main result (in my opinion) from section II of Creswell et al. is Figure 3, which shows that, when one takes the Fourier transform of the LIGO data containing GW150914 and plots the Fourier phases as a function of frequency, one can see a clear correlation (i.e. all the points line up, especially for the Hanford data). I was able to reproduce this with the LIGO Open Science Center data and a small ipython notebook, which I have made available so that readers can see this, along with some additional plots, and reproduce the result.
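For readers who just want the gist without opening the notebook, here is a minimal sketch of that phase plot. The filename is the public 32-second GW150914 Hanford segment from the LIGO Open Science Center; the plotting choices are mine, not those of Creswell et al.

import numpy as np
import h5py
import matplotlib.pyplot as plt

# Load a 32 s strain segment from a LIGO Open Science Center HDF5 file.
with h5py.File("H-H1_LOSC_4_V1-1126259446-32.hdf5", "r") as f:
    strain = f["strain/Strain"][:]

fs = 4096  # LOSC sampling rate (Hz)
freqs = np.fft.rfftfreq(len(strain), d=1.0 / fs)
phases = np.angle(np.fft.rfft(strain))

# For Gaussian noise the phases should scatter uniformly in (-pi, pi];
# on unwhitened data they instead line up, as in Creswell et al.'s Figure 3.
plt.scatter(freqs, phases, s=1)
plt.xlim(0, 400)
plt.ylim(-np.pi, np.pi)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Fourier phase (rad)")
plt.show()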

For Gaussian noise we would expect the Fourier phases to be distributed randomly (between -pi and pi). Clearly in the plot shown above, and in Creswell et al., this is not the case. However, the authors overlooked one critical detail here. When you take a Fourier transform of a time series you are implicitly assuming that the data are cyclical (i.e. that the first point is adjacent to the last point). For colored Gaussian noise this assumption will lead to a discontinuity in the data at the two end points, because these data are not causally connected. This discontinuity can be responsible for misleading plots like the one above.

To try to demonstrate this I perform two tests. First I whiten the colored LIGO noise by measuring the power spectral density (see the LOSC example, which I use directly in my ipython notebook, for some background on colored noise and noise power spectral density), then dividing the data in the Fourier domain by the power spectral density, and finally converting back to the time domain. This process will corrupt some data at the edges so after whitening we only consider the middle half of the data. Then we can make the same plot:
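A rough version of that whitening procedure, using scipy’s Welch PSD estimate (the notebook follows the LOSC example more closely; this sketch just shows the steps):

import numpy as np
from scipy.signal import welch

def whiten(strain, fs=4096):
    """Divide out the noise coloring in the Fourier domain (a sketch)."""
    # Estimate the one-sided power spectral density with Welch's method.
    f_psd, psd = welch(strain, fs=fs, nperseg=4 * fs)
    # Fourier transform the data and interpolate the PSD onto the FFT bins.
    freqs = np.fft.rfftfreq(len(strain), d=1.0 / fs)
    strain_f = np.fft.rfft(strain)
    psd_interp = np.interp(freqs, f_psd, psd)
    # Dividing by sqrt(PSD) flattens the spectrum; transform back.
    white = np.fft.irfft(strain_f / np.sqrt(psd_interp), n=len(strain))
    # Whitening corrupts the edges, so keep only the middle half.
    n = len(white)
    return white[n // 4 : 3 * n // 4]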

And we can see that there are now no correlations visible in the data. For white Gaussian noise there is no correlation between adjacent points, so no discontinuity is introduced when treating the data as cyclical. I therefore assert that Figure 3 of Creswell et al. actually has no meaning when generated using anything other than whitened data.

I would also like to mention that measuring the noise power spectral density of LIGO data can be challenging when the data are non-stationary and include spectral lines (as Creswell et al. point out). Therefore it can be difficult to whiten data in many circumstances. For the Livingston data some of the spectral lines are still clearly present after whitening (using the methods described in the LOSC example), and then mild correlations are present in the resulting plot (see ipython notebook). This is not indicative of any type of non-Gaussianity, but demonstrates that measuring the noise power-spectral density of LIGO noise is difficult, and, especially for parameter inference, a lot of work has been spent on answering this question.

To further illustrate that features like those seen in Figure 3 of Creswell et al. can be seen in known Gaussian noise, I perform an additional check (suggested by my colleague Vivien Raymond). I generate a 128 second stretch of white Gaussian noise (using numpy.random.normal) and invert the whitening procedure employed on the LIGO data above to produce 128 seconds of colored Gaussian noise. Now the data, previously random, are ‘colored’. Coloring the data in the manner I did makes the full data set cyclical (the last point is correlated with the first), so taking the Fourier transform of the complete data set, I see the expected random distribution of phases (again, see the ipython notebook). However, if I select 32s from the middle of this data, introducing a discontinuity as I mention above, I can produce the following plot:
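The same check in sketch form; the power-law spectrum below is a made-up stand-in for the measured LIGO PSD, but the effect is the same:

import numpy as np
import matplotlib.pyplot as plt

fs, T = 4096, 128
rng = np.random.default_rng(0)
white = rng.normal(size=fs * T)  # 128 s of white Gaussian noise

# "Color" the noise by multiplying by sqrt(PSD) in the Fourier domain
# (a toy red spectrum here, standing in for the inverted LIGO whitening).
freqs = np.fft.rfftfreq(len(white), d=1.0 / fs)
psd = 1.0 / (freqs + 10.0) ** 4
colored = np.fft.irfft(np.fft.rfft(white) * np.sqrt(psd), n=len(white))

# The full 128 s is cyclic by construction, so its phases look random.
# Cutting 32 s out of the middle introduces an end-point discontinuity,
# and the phases line up just as in Figure 3 of Creswell et al.
mid = colored[48 * fs : 80 * fs]
plt.scatter(np.fft.rfftfreq(len(mid), d=1.0 / fs), np.angle(np.fft.rfft(mid)), s=1)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Fourier phase (rad)")
plt.show()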

In other words, using actual Gaussian noise I can produce an example even more strongly correlated than the real data.

Section III: The data is strongly correlated even after removing the signal

The second half of Creswell et al. explores correlations between the data taken from Hanford and Livingston around GW150914. For me, the main conclusion here is communicated in Figure 7, where Creswell et al. claim that even after removal of the GW150914 best-fit waveform there is still correlation in the data between the two observatories. This is a result I have not been able to reproduce. Nevertheless, if such a correlation were present it would suggest that we have not perfectly subtracted the real signal from the data, which would not invalidate any detection claim. There could be any number of reasons for this, for example the fact that our best-fit waveform will not exactly match what is in the data as we cannot measure all parameters with infinite precision. There might also be some deviations because the waveform models we used, while very good, are only approximations to the real signal (LIGO put out a paper quantifying this possibility). Such deviations might also be indicative of a subtle deviation from general relativity. These are of course things that LIGO is very interested in pursuing, and we have published a paper exploring potential deviations from general relativity (finding no evidence for that), which includes looking for a residual signal in the data after subtraction of the waveform (and again finding no evidence for that).

Finally, LIGO runs “unmodelled” searches, which do not search for specific signals, but instead look for any coherent non-Gaussian behaviour in the observatories. These searches were actually the first to find GW150914, and did so with parameters remarkably consistent with those of the modelled searches, something we would not expect if the modelled searches were “missing” some part of the signal.

With all that said, I now try to reproduce Figure 7. I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02s window around GW150914. This produces the following:

There is a clear spike here at 7ms (which is GW150914), with some expected “ringing” behaviour around this point. This is a much less powerful method of extracting the signal than matched filtering, but it is completely signal-independent, and illustrates how loud GW150914 is. Creswell et al., however, do not discuss their normalization of this cross-correlation, or how likely a deviation like this is to occur from noise alone. Such a study would be needed before stating that this is significant; in this case we know the signal is significant from other, more powerful, tests of the data. I then repeat this after having removed the best-fit waveform from the data in both observatories (using only products made available in the LOSC example notebooks). This gives:

This shows nothing interesting at all.
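For concreteness, here is a bare-bones stand-in for the cross-correlation used above (my own simplification, not the collaboration’s pipeline), assuming h1 and l1 are whitened, band-passed arrays covering the same GPS times:

import numpy as np

def cross_correlation(h1, l1, fs=4096, max_lag_ms=10.0):
    """Normalized cross-correlation for lags up to +/- max_lag_ms.
    Edge effects from np.roll are ignored in this sketch."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    lags = np.arange(-max_lag, max_lag + 1)
    h1n = (h1 - h1.mean()) / h1.std()
    l1n = (l1 - l1.mean()) / l1.std()
    corr = np.array([np.mean(h1n * np.roll(l1n, k)) for k in lags])
    return 1000.0 * lags / fs, corr  # (lag in ms, correlation)

# On data containing GW150914 this shows a clear peak near 7 ms;
# after subtracting the best-fit waveform, no significant peak remains.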

Section IV: Why would such correlations not invalidate the LIGO detections?

Creswell et al. claim that correlations between the Hanford and Livingston data, which in their results appear to be maximized around the time delay reported for GW150914, raise questions about the integrity of the detection. They do not. The authors claim early in their article that LIGO data analysis assumes the data are Gaussian, independent and stationary. In fact, we know that LIGO data are neither Gaussian nor stationary, and if one reads through the technical paper accompanying the detection PRL, one can read about the many tests we run to try to distinguish between non-Gaussianities in our data and real signals. But in doing such tests, we raise an important question: “If you see something loud, how can you be sure it is not some chance instrumental artifact, which somehow was missed in the various tests that you do?” Because of this we have to be very careful when assessing the significance (in terms of sigmas, or the p-value, to use the correct term).

We assess the significance using a process called time-shifts. We first look through all our data for loud events within the 10ms time window corresponding to the light travel time between the two observatories. Then we look again, except the second time we shift ALL of the data from Livingston by 0.1s. This delay is much larger than the light travel time, so if we see any interesting “events” now they cannot be genuine astrophysical events, but must be noise transients. We then repeat this process with a 0.2s delay, a 0.3s delay, and so on, up to time delays on the order of weeks. In this way we have conducted of order 10 million experiments. For GW150914 the signal in the non-time-shifted data was louder than any event we saw in any of the time-shifted runs, all 10 million of them; in fact, it was a lot louder than any of them.

Therefore we can say that this is a 1-in-10-million event, without making any assumptions at all about our noise. Except one: that the analysis with the Livingston data shifted by, e.g., 8s (or any of the other time shifts) is equivalent to the analysis with the Livingston data not shifted at all. In other words, we assume that there is nothing special about the non-time-shifted analysis (other than that it might contain real signals!). As well as in the technical papers, this is also described in the science summary that accompanied the GW150914 PRL.
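To make the logic concrete, here is a toy numerical version of the time-slide argument. The numbers and the background distribution are invented for illustration; the real analysis ranks actual candidate events against actual time-shifted data.

import numpy as np

rng = np.random.default_rng(1)
n_shifts = 10_000         # the real analysis used of order 10 million
zero_lag_loudness = 24.0  # loudness of the unshifted candidate (illustrative)

# Each time-shifted analysis yields its own loudest background "event";
# here we fake those with draws from an arbitrary noise distribution.
loudest_background = rng.gumbel(loc=8.0, scale=0.5, size=n_shifts)
louder = np.sum(loudest_background > zero_lag_loudness)

# If no background event is louder, we can only bound the p-value by
# ~1/n_shifts, and no Gaussianity or stationarity is assumed anywhere.
print(f"empirical p-value < {(louder + 1) / (n_shifts + 1):.1e}")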

Nothing in the paper “On the time lags of the LIGO signals” suggests that the non-time-shifted analysis is special. The claimed correlations between the two detectors, due to resonance and calibration lines in the data, would be present in the time-shifted analyses as well: the calibration lines are repetitive, so if they are correlated in the non-time-shifted analysis they will also be correlated in the time-shifted analyses. I should also note that potential correlated noise sources were explored in another of the companion papers to the GW150914 PRL. Therefore, taking the results of this paper at face value, I see nothing that calls into question the “integrity” of the GW150914 detection.

Section V: Wrapping up

I have tried to reproduce the results quoted in “On the time lags of the LIGO signals”. I find that the claims of section 2 are due to an issue in how the data are Fourier transformed, and I have not been able to reproduce the correlations claimed in section 3. Even taking the results at face value, they would not affect the 5-sigma confidence associated with GW150914. Nevertheless, I am in contact with the authors and we will try to understand these discrepancies.

For people interested in exploring LIGO data, check out the LIGO Open Science Center tutorials. As someone who was involved in the internal review of the LOSC data products, it is rewarding to see these materials being used. It is true that these tutorials are intended as an introduction to LIGO data analysis, and do not reflect many of the intricacies of these studies. For the interested reader, a number of technical papers, for example this one, accompany the main PRL, and within that paper and its references you can find all the nitty-gritty of how our analyses work. Finally, the PyCBC analysis toolkit, which was used to obtain the 5-sigma confidence, and of which I am one of the primary developers, is available open-source on GitHub. There are instructions here, and also a number of examples illustrating various aspects of our data analysis methods.

This article was circulated in the LIGO-Virgo Public Outreach and Education mailing list before being made public, and I am grateful for comments and feedback from: Christopher Berry, Ofek Birnholtz, Alessandra Buonanno, Gregg Harry, Martin Hendry, Daniel Hoak, Daniel Holz, David Keitel, Andrew Lundgren, Harald Pfeiffer, Vivien Raymond, Jocelyn Read and David Shoemaker.

Congratulations to Grant and Jason!

Advising graduate students as they make the journey from learners to working scientists is one of the great pleasures and privileges of academic life. Last week featured the Ph.D. thesis defenses of not one, but two students I’ve been working with, Grant Remmen (who was co-advised by Cliff Cheung) and Jason Pollack. It will be tough to see them go — both got great postdocs, Grant accepting a Miller Fellowship at Berkeley, and Jason heading to the University of British Columbia — but it’s all part of the cycle of life.

Jason Pollack (L), Grant Remmen (R), and their proud advisor.

Of course we advisors love all of our students precisely equally, but it’s been a special pleasure to have Jason and Grant around for these past five years. They’ve helped me enormously in many ways, as we worked to establish a research program in the foundations of quantum gravity and the emergence of spacetime. And along the way they tallied up truly impressive publication records (GR, JP). I especially enjoy that they didn’t just write papers with me, but also with other faculty, and with fellow students without involving professors at all.

I’m very much looking forward to seeing how Jason and Grant both continue to progress and grow as theoretical physicists. In the meantime, two more champagne bottles get added to my bookshelf, one for each Ph.D. student — Mark, Eugene, Jennifer, Ignacy, Lotty, Heywood, Chien-Yao, Kim, and now Grant and Jason.

Congrats!

The Big Picture: Paperback Day

I presume most readers of this blog have already purchased their copy of The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. If you’re really dedicated, you have the hardback version and the ebook and the audiobook, as well as a few spare copies stashed here and there in case of emergency.

Today we’re happy to announce that you can finally complete your set by purchasing the paperback edition of TBP. The cover is even shinier than before! Paperbacks, as we all know, make great gifts, whether as romantic tokens for the special someone in your life, or gestures of conciliation toward your bitter enemies.

I have to confess that I not only had great fun writing this book, but have been quite gratified by its reception. Of course there were doubters — and regrettably, most of the doubters seemed to argue against their own preconceptions of what they thought the book would say, rather than what it actually did say. But a good number of people have not only enjoyed the book, but engaged with its ideas in a serious way. Here are some reviews that came out after hardcover publication a year ago:

In case you still aren’t sure what the book is about (it’s about matching the fundamental laws of nature to the world of our everyday experience), here are the brief discussions of the individual sections we had right here on the blog:

  1. Part One: Cosmos
  2. Part Two: Understanding
  3. Part Three: Essence
  4. Part Four: Complexity
  5. Part Five: Thinking
  6. Part Six: Caring

Or if you’re more audiovisually inclined, a talk I gave at LogiCal-LA back in January of this year:

Thanks to everyone who has bought the book and engaged with it in thoughtful ways. It’s been a great ride.

Guest Post: Nathan Moynihan on Amplitudes for Astrophysicists

As someone who sits at Richard Feynman’s old desk, I take Feynman diagrams very seriously. They are a very convenient and powerful way of answering a certain kind of important physical question: given some set of particles coming together to interact, what is the probability that they will evolve into some specific other set of particles?

Unfortunately, actual calculations with Feynman diagrams can get unwieldy. The answers they provide are only approximate (though the approximations can be very good), and making the approximations just a little more accurate can be a tremendous amount of work. Enter the “amplitudes program,” a set of techniques for calculating these scattering probabilities more directly, without adding together a barrel full of Feynman diagrams. This isn’t my own area, but we’ve had guest posts from Lance Dixon and Jaroslav Trnka about this subject a while back.

But are these heady ideas just brain candy for quantum field theorists, or can they be applied more widely? A very interesting new paper just came out arguing that even astrophysicists — who usually deal with objects a lot bigger than a few colliding particles — can put amplitude technology to good use! And we’re very fortunate to have a guest post on the subject by one of the authors, Nathan Moynihan. Nathan is a grad student at the University of Cape Town who works on quantum field theory, gravity, and information, studying under Jeff Murugan and Amanda Weltman. This is a great introduction to the application of cutting-edge mathematical physics to some (relatively) down-to-Earth phenomena.


In a recent paper, my collaborators and I (Daniel Burger, Raul Carballo-Rubio, Jeff Murugan and Amanda Weltman) make a case for applying modern methods in scattering amplitudes to astrophysical and cosmological scenarios. In this post, I would like to explain why I think this is interesting, and why you, if you’re an astrophysicist or cosmologist, might want to use the techniques we have outlined.

In a scattering experiment, objects of known momentum p are scattered from a localised target, with various possible outcomes predicted by some theory. In quantum mechanics, the probability of a particular outcome is given by the square of the scattering amplitude, a complex number that can be derived directly from the theory. Scattering amplitudes are the central quantities of interest in (perturbative) quantum field theory, and in the last 15 years or so, there has been something of a minor revolution surrounding the tools used to calculate these quantities (partially inspired by the introduction of twistors into string theory). In general, a particle theorist will make a perturbative expansion of the path integral of her favorite theory into Feynman diagrams, and mechanically use the Feynman rules of the theory to calculate each diagram’s contribution to the final amplitude. This approach works perfectly well, although the calculations are often tough, depending on the theory.

Astrophysicists, on the other hand, are often not concerned too much with quantum field theories, preferring to work with the classical theory of general relativity. However, it turns out that you can, in fact, do the same thing with general relativity: perturbatively write down the Feynman diagrams and calculate scattering amplitudes, at least to first or second order. One of the simplest scattering events you can imagine in pure gravity is that of two gravitons scattering off one another: you start with two gravitons, they interact, you end up with two gravitons. It turns out that you can calculate this using Feynman diagrams, and the answer is strikingly simple, barely one line long.

The calculation, on the other hand, is utterly vicious. An unfortunate PhD student by the name of Walter G. Wesley was given this monstrous task in 1963, and he found that calculating the amplitude meant evaluating over 500 terms, which as you can imagine took the majority of his PhD to complete (no Mathematica!). The answer, in the end, is breathtakingly simple. In the centre-of-mass frame, the differential cross section is:

\frac{d\sigma}{d\Omega} = 4G^2E^2 \, \frac{\cos^{12}(\theta/2)}{\sin^{4}(\theta/2)}

where G is Newton’s constant, E is the energy of the gravitons, and \theta is the scattering angle. The fact that such calculations are so cumbersome has meant that many in the astrophysics community may eschew them.

However, the fact that the answer is so simple implies that there may be an easier route to calculation than evaluating all 500 diagrams. Indeed, this is exactly what we have tried to allude to in our paper: using the methods we have outlined, this calculation can be done on half a page and with little effort.

The technology that we introduce can be summed up as follows, and should be easily recognised as common tools in any physicist’s arsenal: a change of variables (for the more mathematically inclined reader, a change of representation), recursion relations, and complex analysis. Broadly speaking, the idea is to take the basic components of Feynman diagrams, momentum vectors and polarisation vectors (or tensors), and represent them as spinors, objects that are inherently complex in nature (they are elements of a complex vector space). Once we do that, we can use the transformation rules of spinors to simplify calculations. This simplifies things a bit, but we can do better.
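For the massless particles relevant here, the change of variables is the standard spinor-helicity substitution (textbook conventions, not notation specific to our paper): any null momentum factorizes into a pair of two-component spinors,

p_{a\dot{a}} = p_\mu \sigma^\mu_{a\dot{a}} = \lambda_a \tilde{\lambda}_{\dot{a}}, \qquad 2\, p_i \cdot p_j = \langle ij \rangle [ij]

where the angle and square brackets are the antisymmetric contractions of the \lambda and \tilde{\lambda} spinors respectively, so all the kinematics of an amplitude can be written in terms of these brackets.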

The next trick is to observe that Feynman diagrams all have one common feature: they contain real singularities (meaning the amplitude blows up) wherever there are physical internal lines. These singularities usually show up as poles, functions that behave like 1/z^n. Typically, an internal line might contribute a factor like \frac{1}{p^2 + m^2}, which obviously blows up around p^2 = -m^2.

We normally make ourselves feel better by taking these internal lines to be virtual, meaning that they need not satisfy the energy-momentum condition p^2 = -m^2, so the propagator never actually blows up. In contrast, the modern formulation insists that internal lines do satisfy the condition, but that the momenta involved are complex. Now that we have complex poles, we can use the standard tools of complex analysis, which tell us that if you know the poles and residues, you know everything. For this to work, we are required to let at least some of the external momenta be complex, since the internal momenta depend on the external ones. Thankfully, we can do this in such a way that momentum conservation holds and the squares of the momenta remain physical.
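Concretely, the standard move (common to all BCFW-type constructions, rather than anything specific to our paper) is to shift a pair of external spinors by a complex parameter z:

\lambda_1 \rightarrow \lambda_1 + z\, \lambda_2, \qquad \tilde{\lambda}_2 \rightarrow \tilde{\lambda}_2 - z\, \tilde{\lambda}_1

Both shifted momenta remain null for every value of z, and their sum is unchanged, so momentum conservation still holds; internal momenta now hit their poles at particular complex values of z.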

The final ingredient we need is the set of BCFW recursion relations, an indispensable tool of the amplitudes community, developed by Britto, Cachazo, Feng and Witten in 2005. Roughly speaking, these relations tell us that we can turn a complex, singular, on-shell amplitude of any number of particles into a product of 3-particle amplitudes glued together by poles. Essentially, this means we can treat amplitudes like lego bricks and stick them together in an intelligent way in order to construct a really-difficult-to-compute amplitude from some relatively simple ones.
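Schematically (again in standard notation, not anything specific to our paper), the recursion reads

A_n = \sum_{I} \hat{A}_L(z_I) \, \frac{1}{P_I^2} \, \hat{A}_R(z_I)

where the sum runs over ways of splitting the external particles into two groups, P_I is the internal momentum connecting them, and the lower-point amplitudes on each side are evaluated at the complex shift z_I where P_I(z) goes on-shell.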

In the paper, we show how this can be achieved using the example of scattering a graviton off a scalar. This interaction is interesting since it represents a gravitational wave being ‘bent’ by the gravitational field of a massive object like a star. We show, in a couple of pages, that the result calculated using these methods corresponds exactly with what you would calculate using general relativity.

If you’re still unconvinced by the utility of what I’ve outlined, then do look out for the second paper in the series, hopefully coming in the not too distant future. Whether you’re a convert or not, our hope is that these methods might be useful to the astrophysics and cosmology communities in the future, and I would welcome any comments from any members of those communities.