March 2014

Guest Post: Jaroslav Trnka on the Amplituhedron

Usually, technical advances in mathematical physics don’t generate a lot of news buzz. But last year a story in Quanta proved to be an exception. It relayed the news of an intriguing new way to think about quantum field theory — a mysterious mathematical object called the Amplituhedron, which gives a novel perspective on how we think about the interactions of quantum fields.

This is cutting-edge stuff at the forefront of modern physics, and it’s not an easy subject to grasp. Natalie Wolchover’s explanation in Quanta is a great starting point, but there’s still a big gap between a popular account and the research paper, in this case by Nima Arkani-Hamed and Jaroslav Trnka. Fortunately, Jaroslav is now a postdoc here at Caltech, and was willing to fill us in on a bit more of the details.

“Halfway between a popular account and a research paper” can still be pretty forbidding for the non-experts, but hopefully this guest blog post will convey some of the techniques used and the reasons why physicists are so excited by these (still very tentative) advances. For a very basic overview of Feynman diagrams in quantum field theory, see my post on effective field theory.


I would like to thank Sean for giving me the opportunity to write about my work on his blog. I am happy to do it, as the new picture for scattering amplitudes I have been looking for over the last few years has just recently crystallized in an object we call the Amplituhedron, a name emphasizing its connection to both scattering amplitudes and the generalization of polyhedra. To remind you, “amplitudes” in quantum field theory are functions that we square to get probabilities in scattering experiments — for example, the probability that two particles will scatter and convert into two other particles.

Despite the fact that I will talk about some specific statements for scattering amplitudes in a particular gauge theory, let me first mention the big-picture motivation for doing this. Our main theoretical tool for describing the microscopic world is Quantum Field Theory (QFT), developed more than 60 years ago in the hands of Dirac, Feynman, Dyson and others. It unifies quantum mechanics and the special theory of relativity in a consistent way, and it has proven to be an extremely successful theory in countless cases. However, over the past 25 years there has been increasing evidence that the standard definition of QFT using Lagrangians and Feynman diagrams does not exhibit the simplicity, and sometimes even the hidden symmetries, of the final result. This has been seen most dramatically in calculations of scattering amplitudes, which are the basic objects directly related to probabilities in scattering experiments. For a very nice history of the field, see the blog post by Lance Dixon, who recently won the Sakurai Prize together with Zvi Bern and David Kosower. There are also two nice popular articles by Natalie Wolchover — one on the Amplituhedron, and one on the progress in understanding amplitudes in quantum gravity.

The Lagrangian formulation of QFT builds on two pillars: locality and unitarity, which mean that particle interactions are point-like and that the sum of the probabilities in scattering experiments must be equal to one. The underlying motivation of my work is a very ambitious attempt to reformulate QFT using a different set of principles, and to see locality and unitarity emerge as derived properties. Obviously I am not going to solve this problem outright, but rather concentrate on a much simpler problem whose solution might have features that can eventually be generalized. In particular, I will focus on on-shell (“real”, as opposed to “virtual”) scattering amplitudes of massless particles in a “supersymmetric cousin” of Quantum Chromodynamics (the theory which describes the strong interactions) called N=4 Super Yang-Mills theory (in the planar limit). It is a very special theory, sometimes referred to as the “Simplest Quantum Field Theory” because of its enormous amount of symmetry. If there is any chance to pursue our project further, we need to do the reformulation for this case first.

Feynman diagrams give us rules for how to calculate the amplitude for any given scattering process, and these rules are very simple: draw all diagrams built from the vertices given by the Lagrangian, and evaluate them using a fixed set of rules. This gives a function M of the external momenta and helicities (helicity plays the role of spin for massless particles). The Feynman diagram expansion is perturbative, and the leading-order piece is always captured by tree graphs (no loops). We then call M a tree amplitude; it is a rational function of the external momenta and helicities. In particular, this function depends only on scalar products of momenta and polarization vectors. The simplest example is the scattering of three gluons,

\displaystyle M_3 = \left[\epsilon(p_1)\cdot \epsilon(p_2)\right]\left[(p_1-p_2)\cdot p_3\right] + \left[\epsilon(p_2)\cdot \epsilon(p_3)\right]\left[(p_2-p_3)\cdot p_1\right] + \left[\epsilon(p_3)\cdot \epsilon(p_1)\right]\left[(p_3-p_1)\cdot p_2\right]

represented by a single Feynman diagram.

[Figure: the three-gluon Feynman diagram]

Amplitudes for more than three particles are sums of Feynman diagrams with internal lines, which are represented by factors of P^2 in the denominator (where P is the sum of the momenta flowing through the line). For example, one part of the amplitude for four gluons (2 gluons scatter and produce another 2 gluons) is

\displaystyle M_4 = \frac{\epsilon(p_1)\cdot \epsilon(p_2) \epsilon(p_3)\cdot \epsilon(p_4)}{(p_1+p_2)^2} + \dots

[Figure: a Feynman diagram for four-gluon scattering]
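As an illustrative aside (a sketch, not part of the original discussion), the propagator factor (p_1+p_2)^2 in the denominator is computed with the Minkowski product, p^2 = E^2 − |𝐩|^2. Each external gluon is massless, p_i^2 = 0, but the sum (p_1+p_2)^2 is generally nonzero — and the amplitude has a pole where it vanishes:

```python
# Illustrative sketch: the pole factor (p1 + p2)^2 from the four-gluon
# amplitude, evaluated with the Minkowski product p^2 = E^2 - |p_vec|^2.
def minkowski_sq(p):
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

p1 = (1.0, 0.0, 0.0, 1.0)    # massless: E = |p_vec|
p2 = (1.0, 0.0, 0.0, -1.0)   # head-on collision
assert abs(minkowski_sq(p1)) < 1e-12 and abs(minkowski_sq(p2)) < 1e-12

p_sum = tuple(a + b for a, b in zip(p1, p2))
s = minkowski_sq(p_sum)      # center-of-mass energy squared
print(s)                     # 4.0 for this configuration
```

For these head-on momenta the sum is (2, 0, 0, 0), so s = 4; the denominator only blows up in degenerate (collinear or soft) configurations, which is exactly where the factorization properties discussed below apply.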

Higher-order corrections are represented by diagrams with loops, which contain unfixed momenta — called loop momenta — that we need to integrate over, and the final result is expressed in terms of more complicated functions: polylogarithms and their generalizations. The set of functions we get after the loop integrations is not known in general (even in lower-loop cases). However, there exists a simpler but still meaningful function for loop amplitudes — the integrand, given by the sum of all Feynman diagrams before integration. This is a rational function of helicities and momenta (both external and loop momenta), and it has many nice properties similar to those of tree amplitudes. Tree amplitudes and the integrands of loop amplitudes are the objects of our interest, and I will call both simply “amplitudes” in the rest of the text.

While we already have a new picture for them, we can take a top-down approach and phrase the problem in the following way: we want to find a mathematical question to which the amplitude is the answer.

As a first step, we need to characterize how the amplitude is invariantly defined in the traditional way. The answer is built into the standard formulation of QFT: the amplitude is specified by the properties of locality and unitarity, which translate into simple statements about its poles (the places where the denominator goes to zero). In particular, all poles of M must be sums of external (for the integrand, also loop) momenta, and on these poles M must factorize in a way dictated by unitarity. For a large class of theories (including our model) this is enough to specify M completely. Reading this backwards: if we find a function which satisfies these properties, it must be equal to the amplitude. This is a crucial point for us, and it guarantees that we are calculating the correct object.

Now we consider a completely unrelated geometry problem: we define a new geometrical shape — the Amplituhedron. It is something like a multi-dimensional polygon embedded in a particular geometrical space, called the Grassmannian. This has very good motivation in the work done by me and my collaborators over the last five years on the relation between Grassmannians and amplitudes, but I will not explain it here in more detail, as that would need a separate blog post. Importantly, we can prove that the expression we get for the volume of this object satisfies the properties mentioned above, and therefore we can conclude that the scattering amplitudes in our theory are directly related to the volume of the Amplituhedron.

This is the basic picture of the whole story, but I will try to elaborate on it a little more. Many features of the story can be shown with the simple example of a polygon, which is itself a simple version of the Amplituhedron. Let us consider n points in a (projective) plane and draw a polygon by connecting them in a given ordering. In order to talk about the interior, the polygon must be convex, which puts some restrictions on these n vertices. Our object is then specified as the set of all points inside a convex polygon.

Now we want to generalize this to the Grassmannian. Instead of points we consider lines, planes, and in general k-planes inside a convex hull (the generalization of a polygon to higher dimensions). The geometric notion of being “inside” does not really generalize beyond points, but there is a precise algebraic statement which does generalize directly from points to k-planes: a positivity condition on the matrix of coefficients that we get if we expand a point inside a polygon as a linear combination of the vertices. In the end, we can define the space of the Amplituhedron in the same way that we defined a convex polygon: by putting constraints on its vertices (which generalizes convexity) and positivity conditions on the k-plane (which generalizes the notion of being inside).

In general there is not a single Amplituhedron; rather, it is labeled by three indices: n, k, l. Here n stands for the number of particles, which is equal to the number of vertices; the index k captures the helicity structure of the amplitude, and it defines the dimensionality of the k-plane which defines the space. Finally, l is the number of loops, which translates into the number of lines we have in our configuration space in addition to the k-plane. In the next step we define a volume — more precisely, a form with logarithmic singularities on the boundaries of this space — and we can show that this function satisfies exactly the same properties as the scattering amplitude. For more details you can read our original paper.
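The polygon version of the positivity condition can be made concrete in a few lines of code. This is an illustrative sketch (not from the paper, and restricted to the simplest case of a point in a triangle, where the expansion in vertices is unique): lift the vertices and the point to projective coordinates (x, y, 1); the point lies inside exactly when all the expansion coefficients are positive.

```python
# Illustrative sketch: for a triangle, "inside" is equivalent to all-positive
# coefficients when the (projectively lifted) point is expanded in the vertices.
import numpy as np

def expansion_coefficients(vertices, point):
    """Solve point = sum_i c_i * v_i in projective coordinates (x, y, 1)."""
    V = np.array([[x, y, 1.0] for (x, y) in vertices]).T  # columns = lifted vertices
    p = np.array([point[0], point[1], 1.0])
    return np.linalg.solve(V, p)

def is_inside(vertices, point):
    return bool(np.all(expansion_coefficients(vertices, point) > 0))

triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
print(is_inside(triangle, (1.0, 1.0)))   # True: interior point
print(is_inside(triangle, (5.0, 5.0)))   # False: a negative coefficient appears
```

The Amplituhedron's definition generalizes exactly this kind of sign condition from a point (a 1-plane through the origin, projectively) to k-planes in the Grassmannian.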

This is the complete reformulation we were looking for. In the definition we do not talk about Lagrangians, Feynman diagrams, or locality and unitarity. Our definition is purely geometrical, with no reference to the physical concepts, which all emerge from the shape of the Amplituhedron.

Having this definition in hand does not give us the answer for amplitudes directly, but it translates the physics problem into a purely mathematical problem: calculating volumes. Despite the fact that this particular object has not yet been studied by mathematicians (there are recent works on the positive Grassmannian, of which the Amplituhedron is a substantial generalization), it is reasonable to think that this problem might have a nice general solution which would provide all-loop-order results.

There are two main directions in which to generalize this story. The first is to try to extend the picture to full (integrated) amplitudes rather than just the integrand. This would definitely require more complicated mathematical structures, as we would now deal with polylogarithms and their generalizations rather than rational functions. However, we already have some evidence that the story should extend there as well. The other, even more important, direction is to generalize this picture to other quantum field theories. The answer is unclear, but if it is positive, the picture would need some substantial generalization to capture the richness of other QFTs — features, like renormalization, that are absent in our model.

The story of the Amplituhedron has an interesting aspect which we have always emphasized as the punchline of this program: the emergence of locality and unitarity from the shape of this geometrical object — in particular, from the positivity properties that define it. Of course amplitudes are local and unitary, but this construction shows that these properties might not be fundamental, and can be replaced by a different set of principles from which locality and unitarity follow as derived properties. If this program is successful, it might also be an important step toward understanding quantum gravity. It is well known that quantum mechanics and gravity together make it impossible to have local observables. It is conceivable that if we are able to formulate QFT in a language that makes no explicit reference to locality, the weak-gravity limit of the theory of quantum gravity might land us on this new formulation rather than on the standard path-integral formulation.

This would not be the first time that a reformulation of an existing theory helped us take the next step in our understanding of Nature. While Newton’s laws are manifestly deterministic, there is a completely different formulation of classical mechanics — in terms of the principle of least action — which is not manifestly deterministic. The existence of these very different starting points leading to the same physics was somewhat mysterious to classical physicists, but today we know why the least-action formulation exists: the world is quantum-mechanical and not deterministic, and for this reason the classical limit of quantum mechanics can’t immediately land on Newton’s laws, but must match onto some formulation of classical physics where determinism is not a central but a derived notion. The least-action formulation is thus much closer to quantum mechanics than Newton’s laws, and gives a better jumping-off point for making the transition to quantum mechanics as a natural deformation, via the path integral.

We may be in a similar situation today. If there is a more fundamental description of physics where space-time and perhaps even the usual formulation of quantum mechanics don’t appear, then even in the limit where non-perturbative gravitational effects can be neglected and the physics reduces to perfectly local and unitary quantum field theory, this description is unlikely to directly reproduce the usual formulation of field theory, but must rather match on to some new formulation of the physics where locality and unitarity are derived notions. Finding such reformulations of standard physics might then better prepare us for the transition to the deeper underlying theory.

26 Comments

Naturalness in the NYT

In the wake of the announcement of gravitational-wave signatures from inflation in the cosmic microwave background, I was invited to contribute a piece to The Stone section of the New York Times, on “naturalness” and how it’s used in physics. Mostly The Stone is devoted to philosophy, but occasionally they’ll let an outsider opine about a philosophical-sounding topic.

The hook is obviously the fact that inflation itself is motivated by naturalness:

Cosmic inflation is an extraordinary extrapolation. And it was motivated not by any direct contradiction between theory and experiment, but by the simple desire to have a more natural explanation for the conditions of the early universe. If these observations favoring inflation hold up — a big “if,” of course — it will represent an enormous triumph for reasoning based on the search for naturalness in physical explanations.

I conclude with:

Naturalness is a subtle criterion. In the case of inflationary cosmology, the drive to find a natural theory seems to have paid off handsomely, but perhaps other seemingly unnatural features of our world must simply be accepted. Ultimately it’s nature, not us, that decides what’s natural.

I like to capitalize “Nature,” but nobody agrees with me.

30 Comments

A Great Time for Reason and Science

Here I am at an extremely stimulating meeting on gravity and quantum spacetime in Santa Barbara, but I skipped yesterday’s afternoon session to talk on the PBS News Hour about the new inflation results:

Evidence of cosmic inflation expands universe understanding

There’s a great parallel (if the BICEP2 result holds up!) between Monday’s evidence for inflation and the Higgs discovery back in 2012. When talking about the Higgs, I like to point out the extraordinary nature of the accomplishment of those physicists (Anderson, Englert, Brout, Higgs, Guralnik, Hagen, Kibble) who came up with the idea back in the early 1960’s. They were thinking about a fairly general question: how can you make forces of nature (like the nuclear forces) that don’t obey an inverse square law, but instead only stretch over a short distance? They weren’t lucky enough to have specific, detailed experimental guidance; just some basic principles and an ambitious goal. And they (independently!) proposed a radical idea: empty space is suffused with an invisible energy field that affects the behavior of other fields in space in a profound way. A crazy-sounding idea, and one that was largely ignored for quite a while. Gradually physicists realized that it was actually quite promising, and we spent billions of dollars and many thousands of scientist-years of effort to test the idea. Finally, almost half a century later, a tiny bump on a couple of plots showed they were right.

The inflation story is similar. Alan Guth was thinking about some very general features of the universe: the absence of monopoles, the overall smoothness and flatness. And he proposed an audacious idea: in its very earliest moments, the universe was driven by the potential energy of some quantum field to expand at an accelerated rate, smoothing things out and diluting unwanted relics like monopoles. Unlike the Higgs idea, inflation caught on quite quickly, and people soon realized that it helped explain the origin of density perturbations and (potentially) gravitational-wave fluctuations. Inflation became the dominant idea in early-universe cosmology, but it was always a wild extrapolation away from known physics. If BICEP2 is right, the energy scale of inflation is 0.01 times the Planck scale. The Large Hadron Collider, our highest-energy accelerator here on Earth, reaches energies of 0.00000000000001 times the Planck scale. We really have (had) no right to think that our cute little speculations about what the universe was doing at such scales were anywhere near the right track.

But apparently they were. Over thirty years later, thanks to the dedication of very talented experimenters and millions of dollars of (public) funding, another bump on a plot seems to be confirming that original audacious idea.

It’s the power of reason and science. We tell stories about how the universe works, but we don’t simply tell any old stories that come to mind; we are dramatically constrained by experimental data and by consistency with the basic principles we think we do understand. Those constraints are enormously powerful — enough that we can sit at our desks, thinking hard, extending our ideas way beyond anything we’ve directly experienced, and come up with good ideas about how things really work. Most such ideas don’t turn out to be right — that’s science for you — but some of them do.

Science is a dialogue between the free play of ideas — theorizing — and the harsh constraints of empiricism — experimental data. Theories are a lever, data are a fulcrum, and between them we can move the world.

53 Comments

BICEP2 Updates

Here are the main results on gravitational waves/B-modes from the CMB, as reported by the BICEP2 experiment. For background see my previous post. All of the BICEP2 results and plots are here.

First, the best fit to r, the ratio of gravitational waves to density perturbations:

[Figure: BICEP2 best-fit value of r]

Rumors were right, and r = 0.2 is the best fit (with errors +0.07, −0.05). Here is the power spectrum (amplitude as a function of angular scale on the sky):

[Figure: BICEP2 B-mode power spectrum]

And here we have the contours in r/n_s space, analogous to the Planck constraints we showed yesterday. Note that this plot crucially allows for “running” of the density-perturbation spectrum — the size of the fluctuations changes with wavelength in a more complicated way than a simple power law. If running weren’t included, the different constraints would seem more incompatible.

[Figure: constraints in the r vs. n_s plane, allowing for running]

Comparison with other limits (BICEP2 results are black dots at bottom):

[Figure: BICEP2 results compared with previous limits]

Finally, here is a map of the actual B-mode part of the signal:

[Figure: map of the B-mode signal]

Overall, an amazing result (if it holds up!). It implies that the energy scale of inflation is about 2×10^16 GeV — pretty close to the Planck scale (2×10^18 GeV). An unprecedented view of the earliest moments in the history of the universe.
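As a back-of-the-envelope cross-check of that number (an illustrative sketch: the coefficient 1.06×10^16 GeV in the standard slow-roll relation between r and the inflationary energy scale is an assumed input, not quoted in the post):

```python
# Illustrative: slow-roll relation V^(1/4) ~ 1.06e16 GeV * (r/0.01)^(1/4).
# The coefficient 1.06e16 GeV is an assumption of this sketch.
r = 0.2
energy_scale_gev = 1.06e16 * (r / 0.01) ** 0.25
print(f"{energy_scale_gev:.2e} GeV")  # roughly 2e16 GeV, matching the quoted value
```

The quarter-power dependence is why even a large change in r moves the inferred energy scale only modestly.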


[From earlier.] Here is an email from BICEP2 PI John Kovac:

Dear friends and colleagues,

We invite you to join us tomorrow (Monday, 17 March) for a special webcast presenting the first results from the BICEP2 CMB telescope. The webcast will begin with a presentation for scientists 10:45-11:30 EDT, followed by a news conference 12:00-1:00 EDT.

You can join the webcast from the link at http://www.cfa.harvard.edu/news/news_conferences.html.

Papers and data products will be available at 10:45 EDT from http://bicepkeck.org.

We apologize for any duplicate copies of this notice, and would be grateful if you would help share this beyond our limited lists to any colleagues who may be interested within our CMB and broader science communities.

thank you,
John Kovac, Clem Pryke, Jamie Bock, Chao-Lin Kuo

on behalf of
The BICEP2 Collaboration

56 Comments

Gravitational Waves in the Cosmic Microwave Background

Major announcement coming!

[Update: Of course by now the announcement has come, of the discovery of signatures of gravitational waves in the cosmic microwave background by the BICEP2 experiment, more or less as the post below surmised. This follow-up post features the main result plots from the announcement.]

That much is clear, from this press release: on Monday at noon Eastern time, astronomers will “announce a major discovery.” No evidence from that page what the discovery actually is. But if you’re friends with a lot of cosmologists on Facebook/Twitter (or if you just read the Guardian), you’ve heard the rumor: the BICEP2 experiment has purportedly detected signs of gravitational waves in the polarization of the cosmic microwave background radiation. If it’s true (and the result holds up), it will be an enormously important clue about what happened at the very earliest moments of the Big Bang. Normally I wouldn’t be spreading rumors, but once it’s in the newspapers, I figure why not? And in the meantime we can think about what such a discovery would mean, regardless of what the announcement actually turns out to be (and whether the result holds up). See also Richard Easther, Résonaances, Sesh Nadathur, Philip Gibbs, Shaun Hotchkiss, and Peter Coles. At a slightly more technical level, Daniel Baumann has a slide-show review.

Punchline: other than finding life on other planets or directly detecting dark matter, I can’t think of any other plausible near-term astrophysical discovery more important than this one for improving our understanding of the universe. It would be the biggest thing since dark energy. (And I might owe Max Tegmark $100 — at least, if Planck confirms the result. I will joyfully pay up.) Note that the big news here isn’t that gravitational waves exist — of course they do. The big news is that we have experimental evidence of something that was happening right when our universe was being born.

BICEP2 at the South Pole.

Cosmic inflation is actually a pretty simple idea. Very early on–we’re not sure exactly when, but plausibly 10^-35 seconds or less after the Planck time–the universe went through a phase of accelerated expansion for some reason or another. There are many models for what could have caused such a phase; sorting them out is exactly what we’re trying to do here. The basic effect of this inflationary era is to smooth things out: stuff like density perturbations, spatial curvature, and unwanted relics just get diluted away. Then at some point–again, we aren’t sure when or why–this period ends, and the energy that was driving the accelerated expansion converts into ordinary matter and radiation, and the conventional Hot Big Bang story begins.

Except that quantum mechanics says that we can’t completely smooth things out. The Heisenberg uncertainty principle tells us that there will always be an irreducible minimum amount of jiggle in any quantum system, even when it’s in its lowest-energy (“vacuum”) state. In the context of inflation, that means that quantum fields that are relatively light (low mass) will exhibit fluctuations. (Gauge fields like photons are an exception, due to symmetries that we don’t need to go into right now.)

So inflation makes certain crude predictions, which have come true: the universe is roughly homogeneous, and the curvature of space is very small. But the perturbations on top of this basic smoothness provide more specific, quantitative information, and offer more tangible hope of learning more about the inflationary era (including whether inflation happened at all).

There are two types of perturbations we expect to see, based on two kinds of light quantum fields that fluctuated during inflation: the “inflaton” field itself, and the gravitational field. We don’t know what field it is that drove inflation, so we just call it the “inflaton” and try to determine its properties from observation. It’s the inflaton that eventually converts into matter and radiation, so the inflaton fluctuations produce fluctuations in the density of the early plasma (sometimes called “scalar” fluctuations). These are what we have already seen in the Cosmic Microwave Background (CMB), the leftover radiation from the Big Bang. Maps like this one from the Planck satellite show differences in temperature from point to point in the CMB, and it’s these small differences (about one part in 10^5) that grow into stars, galaxies and clusters as the universe expands. …

109 Comments

Einstein and Pi

Each year, the 14th of March is celebrated by scientifically-minded folks for two good reasons. First, it’s Einstein’s birthday (happy 135th, Albert!). Second, it’s Pi Day, because 3/14 is the closest calendrical approximation we have to the decimal expansion of pi, π = 3.1415927…

Both of these features — Einstein and pi — are loosely related by playing important roles in science and mathematics. But is there any closer connection?

Of course there is. We need look no further than Einstein’s equation. I mean Einstein’s real equation — not E=mc2, which is perfectly fine as far as it goes, but a pretty straightforward consequence of special relativity rather than a world-foundational relationship in its own right. Einstein’s real equation is what you would find if you looked up “Einstein’s equation” in the index of any good GR textbook: the field equation relating the curvature of spacetime to energy sources, which serves as the bedrock principle of general relativity. It looks like this:

\displaystyle R_{\mu\nu} - \frac{1}{2}R\,g_{\mu\nu} = 8\pi G\, T_{\mu\nu}

It can look intimidating if the notation is unfamiliar, but conceptually it’s quite simple; if you don’t know all the symbols, think of it as a little poem in a foreign language. In words it is saying this:

(gravity) = 8 π G × (energy and momentum).

Not so scary, is it? The amount of gravity is proportional to the amount of energy and momentum, with the constant of proportionality given by 8πG, where G is a numerical constant.

Hey, what is π doing there? It seems a bit gratuitous, actually. Einstein could easily have defined a new constant H simply by setting H=8πG. Then he wouldn’t have needed that superfluous 8π cluttering up his equation. Did he just have a special love for π, perhaps based on his birthday?

The real story is less whimsical, but more interesting. Einstein didn’t feel like inventing a new constant because G was already in existence: it’s Newton’s constant of gravitation, which makes perfect sense. General relativity (GR) is the theory that replaces Newton’s version of gravitation, but at the end of the day it’s still gravity, and it has the same strength that it always did.

So the real question is, why does π make an appearance when we make the transition from Newtonian gravity to general relativity?

Well, here’s Newton’s equation for gravity, the famous inverse square law:

\displaystyle F = \frac{G\, m_1 m_2}{r^2}

It’s actually similar in structure to Einstein’s equation: the left hand side is the force of gravity between two objects, and on the right we find the masses m1 and m2 of the objects in question, as well as the constant of proportionality G. (For Newton, mass was the source of gravity; Einstein figured out that mass is just one form of energy, and upgraded the source of gravity to all forms of energy and momentum.) And of course we divide by the square of the distance r between the two objects. No π’s anywhere to be found.

It’s a great equation, as physics equations go; one of the most influential in the history of science. But it’s also a bit puzzling, at least philosophically. It tells a story of action at a distance — two objects exert a gravitational force on each other from far away, without any intervening substance. Newton himself considered this to be an unacceptable state of affairs, although he didn’t really have a good answer:

That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.

But there is an answer to this conundrum. It’s to shift one’s focus from the force of gravity, F, to the gravitational potential field, Φ (Greek letter “phi”), from which the force can be derived. The field Φ fills all of space, taking some specific value at every point. In the vicinity of a single body of mass M, the gravitational potential field is given by this equation:

\displaystyle \Phi = -\frac{GM}{r}

This equation bears a close resemblance to Newton’s original one. It depends inversely on the distance, rather than the distance squared, because it’s not the gravitational force directly; the force is given by the derivative (slope) of the field, which turns 1/r into 1/r^2.
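Spelled out, with Φ the potential of a point mass M and the force on a test mass m given by (minus) its gradient:

```latex
\Phi(r) = -\frac{GM}{r},
\qquad
F(r) = -m\,\frac{d\Phi}{dr} = -\frac{GMm}{r^{2}}
```

The minus sign in the force just says the pull is attractive, toward the mass; the magnitude is Newton’s inverse-square law.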

That’s nice, since we’ve replaced the spookiness of action at a distance with the pleasantly mechanical notion of a field filling all of space. Still no π’s, though.

But our equation only tells us what happens when we have a single body with mass M. What if we have many objects, each creating its own gravitational field, or for that matter a gas or fluid spread throughout some region? Then we need to talk about the mass density, or the amount of mass per each little volume of space, conventionally denoted ρ (Greek letter “rho”). And indeed there is an equation that relates the gravitational potential field to an arbitrary mass density spread throughout space, known as Poisson’s equation:

\displaystyle \nabla^2 \Phi = 4\pi G \rho

The upside-down triangle is the gradient operator (here squared to make the Laplacian); it’s a fancy three-dimensional way of saying how the field is changing through space (its vectorial derivative). But even more exciting, π has now appeared on the right-hand side! Why is that?

There is a technical mathematical explanation, of course, but here is the rough physical explanation. Whereas we were originally concerned (in Newton’s equation or the first equation for Φ) with the gravitational effect of a single body at a distance r, we’re now adding up all the accumulated effects of everything in the universe. That “adding up” (integrating) can be broken into two steps: (1) add up all the effects at some fixed distance r, and (2) add up the effects from all distances. In that first step, all the points at some distance r from any fixed location define a sphere centered on that location. So we’re really adding up effects spread over the area of a sphere. And the formula for the area of a sphere, of course, is:

\displaystyle A = 4\pi r^2

Seems almost too trivial, but that’s really the answer. The reason π comes into Poisson’s equation and not Newton’s is that Newton cared about the force between two specific objects, while Poisson tells us how to calculate the potential as a function of a matter density spread all over the place, and in three dimensions “all over the place” means “all over the area of a sphere” and then “adding up each sphere.” (We add up spheres, rather than cubes or whatever, because spheres describe fixed distances from the point of interest, and gravity depends on distance.) And the area of a sphere, just like the circumference of a circle, is proportional to π.
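The “adding up over spheres” can be written in one line: integrate Poisson’s equation over a ball of radius r centered on a point mass M, and apply the divergence theorem so that the left-hand side becomes a surface integral over the sphere of area 4πr²:

```latex
\int_{B_r} \nabla^2 \Phi \, dV
= \oint_{S_r} \nabla\Phi \cdot d\mathbf{A}
= 4\pi r^{2}\,\frac{d\Phi}{dr}
= \int_{B_r} 4\pi G \rho \, dV
= 4\pi G M
```

So dΦ/dr = GM/r², recovering the inverse-square force: the 4π in Poisson’s equation is exactly the factor that cancels against the area of the sphere.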


So then what about Einstein? Back in Newtonian gravity, it was often convenient to use the gravitational potential field, but it wasn’t really necessary; you could always in principle calculate the gravitational force directly. But when Einstein formulated general relativity, the field concept became absolutely central. The thing one calculates is not the force due to gravity (indeed, there’s a sense in which gravity isn’t really a “force” in general relativity), but rather the geometry of spacetime. That is fixed by the metric tensor field, a complicated beast that includes as a subset what we call the gravitational potential field. Einstein’s equation is directly analogous to Poisson’s equation, not to Newton’s.

So that’s the Einstein-Pi connection. Einstein figured out that gravity is best described by a field theory rather than as a direct interaction between individual bodies, and connecting fields to localized bodies involves integrating over the surface of a sphere, and the area of a sphere is proportional to π. The whole birthday thing is just a happy accident.


A Bit of Physics History: Ed Witten Introduces M-Theory

The Second Superstring Revolution was, like most revolutions, a somewhat messy affair, with a number of pivotal steps along the way: understanding the role of membranes in 11-dimensional supergravity, the discovery of dualities in supersymmetric gauge theories, Polchinski’s appreciation of D-branes as dynamical extended objects in string theory, and of course Maldacena’s formulation of the AdS/CFT correspondence. But perhaps the high point was Ed Witten’s formulation of M-Theory in 1995. And I just noticed that Witten sharing it with the world was captured on video.

Here is Witten’s paper:

String Theory Dynamics In Various Dimensions
Edward Witten

The strong coupling dynamics of string theories in dimension d≥4 are studied. It is argued, among other things, that eleven-dimensional supergravity arises as a low energy limit of the ten-dimensional Type IIA superstring, and that a recently conjectured duality between the heterotic string and Type IIA superstrings controls the strong coupling dynamics of the heterotic string in five, six, and seven dimensions and implies S duality for both heterotic and Type II strings.

Before this result, we knew about five different kinds of string theory, each living in ten dimensions: Type I, two different Type II’s, and two different “heterotic” theories. Then there was the most symmetric form of supergravity, living in 11 dimensions, which some people thought was interesting but others thought was a curiosity that had been superseded by string theory. To everyone’s amazement, Witten showed that all of these theories are simply different limiting cases of a single underlying structure. Nobody knows what that underlying theory really is (although there are a few different formulations that work in some contexts), but we know what to call it: M-theory.


Now Amanda Gefter, author of the new book Trespassing on Einstein’s Lawn (and a recent guest-blogger at Cocktail Party Physics), takes to Twitter to point out something I wasn’t aware of: a video record of Witten’s famous 1995 talk at USC. (I’m pretty sure this is the celebrated talk, but my confidence isn’t 100%.) [Update: folks who should know are actually saying it might be a seminar soon thereafter at Stony Brook. Witten himself admits that he’s not sure.] It’s clearly a recording by someone in the audience, but I don’t know who.

Most physics seminars are, shall we say, not all that historically exciting. But this one was recognized right away as something special. I was a postdoc at MIT at the time, and not in the audience myself, but I remember distinctly how the people who were there were buzzing about it when they returned home.

Nature giveth, and Nature taketh away. The 1995 discovery of M-theory made string theory seem more promising than ever: suddenly there was just a single underlying theory, rather than five or six. Then the 1998 discovery that the universe is accelerating made people take more seriously the idea that there might be more than one way to compactify those extra dimensions down to the four we observe — and once you have more than one, you sadly end up with a preposterously high number (the string theory landscape). So even if there is only one unifying theory of everything, there seem to be a bajillion phases it can be in, which creates an enormous difficulty in trying to relate M-theory to reality. But we won’t know unless we try, will we?


Guest Post: Katherine Freese on Dark Matter Developments

The hunt for dark matter has been heating up once again, driven (as usual) by tantalizing experimental hints. This time the hints are coming mainly from outer space rather than underground laboratories, which makes them harder to check independently, but there’s a chance something real is going on. We need more data to be sure, as scientists have been saying since the time Eratosthenes measured the circumference of the Earth.

As I mentioned briefly last week, Katherine Freese of the University of Michigan has a new book coming out, The Cosmic Cocktail, that deals precisely with the mysteries of dark matter. Katie was also recently at the UCLA Dark Matter Meeting, and has agreed to share some of her impressions with us. (She also insisted on using the photo on the right, as a way of reminding us that this is supposed to be fun.)


Dark Matter Everywhere (at the biannual UCLA Dark Matter Meeting)

The UCLA Dark Matter Meeting is my favorite meeting, period. It takes place every other year, usually at the Marriott Marina del Rey right near Venice Beach, but this year on the UCLA campus. Last week almost two hundred people congregated, both theorists and experimentalists, to discuss our latest attempts to solve the dark matter problem. Most of the mass in galaxies, including our Milky Way, is not made of ordinary atomic matter, but of as-yet-unidentified dark matter. The goal of dark matter hunters is to resolve this puzzle. Experimentalist Dave Cline of the UCLA Physics Department runs the dark matter meeting, with talks often running from dawn till midnight. Every session goes way over, but somehow the disorganization leads everybody to have lots of discussion, interaction between theorists and experimentalists, and even more cocktails. It is, quite simply, the best meeting. I am usually on the organizing committee, and cannot resist sending in lots of names of people who will give great talks and add to the fun.

Last week at the meeting we were treated to multiple hints of potential dark matter signals. To me the most interesting were the talks by Dan Hooper and Tim Linden on the observations of excess high-energy photons — gamma-rays — coming from the Central Milky Way, possibly produced by annihilating WIMP dark matter particles. (See this arxiv paper.) Weakly Interacting Massive Particles (WIMPs) are to my mind the best dark matter candidates. Since they are their own antiparticles, they annihilate among themselves whenever they encounter one another. The Center of the Milky Way has a large concentration of dark matter, so that a lot of this annihilation could be going on. The end products of the annihilation would include exactly the gamma-rays found by Hooper and his collaborators. They searched the data from the FERMI satellite, the premier gamma-ray mission (funded by NASA and DoE as well as various European agencies), for hints of excess gamma-rays. They found a clear excess extending to about 10 angular degrees from the Galactic Center. This excess could be caused by WIMPs weighing about 30 GeV, or 30 proton masses. Their paper called these results “a compelling case for annihilating dark matter.” After the talk, Dave Cline decided to put out a press release from the meeting, and asked the opinion of us organizers. Most significantly, Elliott Bloom, a leader of the FERMI satellite that obtained the data, had no objection, though the FERMI team itself has as yet issued no statement.

Many putative dark matter signals have come and gone, and we will have to see if this one holds up. Two years ago the 130 GeV line was all the rage — gamma-rays of 130 GeV energy that were tentatively observed in the FERMI data towards the Galactic Center. (Slides from Andrea Albert’s talk.) This line, originally proposed by Stockholm’s Lars Bergstrom, would have been the expectation if two WIMPs annihilated directly to photons. People puzzled over some anomalies of the data, but with improved statistics there isn’t much evidence left for the line. The question is, will the 30 GeV WIMP suffer the same fate? As further data come in from the FERMI satellite we will find out.

What about direct detection of WIMPs? Laboratory experiments deep underground, in abandoned mines or underneath mountains, have been searching for direct signals of astrophysical WIMPs striking nuclei in the detectors. At the meeting the SuperCDMS experiment hammered on light WIMP dark matter with negative results. The possibility of light dark matter, which was so popular recently, remains puzzling. 10 GeV dark matter seemed to be detected in many underground laboratory experiments: DAMA, CoGeNT, CRESST, and in April 2013 even CDMS in their silicon detectors. Yet other experiments, XENON and LUX, saw no events, in drastic tension with the positive signals. (I told Rick Gaitskell, a leader of the LUX experiment, that I was very unhappy with him for these results, but as he pointed out, we can’t argue with nature.) Last week at the conference, SuperCDMS, the most recent incarnation of the CDMS experiment, looked to much lower energies and again saw nothing. (Slides from Lauren Hsu’s talk.) The question remains: are we comparing apples and oranges? These detectors are made of a wide variety of types of nuclei, and we don’t know how to relate the results. Wick Haxton’s talk surprised me with a discussion of nuclear-physics uncertainties I hadn’t been aware of, which in principle could reconcile all the disagreements between experiments, even DAMA and LUX. Most people think that the experimental claims of 10 GeV dark matter are wrong, but I am taking a wait-and-see attitude.

We also heard about hints of detection of a completely different dark matter candidate: sterile neutrinos. (Slides from George Fuller’s talk.) In addition to the three known neutrinos of the Standard Model of particle physics, there could be another one that doesn’t feel the Standard Model interactions. Yet its decay could lead to X-ray lines. Two separate groups found indications of lines in data from the Chandra and XMM-Newton space satellites that would be consistent with a 7 keV neutrino (about 7 millionths of a proton mass). Could it be that there is more than one type of dark matter particle? Sure, why not?

On the last evening of the meeting, a number of us went to the Baja Cantina, our favorite spot for margaritas. Rick Gaitskell was smart: he talked us into the $60.00 pitchers, high enough quality that the 6AM alarm clocks the next day (that got many of us out of bed and headed to flights leaving from LAX) didn’t kill us completely. We have such a fun community of dark matter enthusiasts. May we find the stuff soon!


Effective Field Theory and Large-Scale Structure

Been falling behind on my favorite thing to do on the blog: post summaries of my own research papers. Back in October I submitted a paper with two Caltech colleagues, postdoc Stefan Leichenauer and grad student Jason Pollack, on the intriguing intersection of effective field theory (EFT) and cosmological large-scale structure (LSS). Now’s a good time to bring it up, as there’s a great popular-level discussion of the idea by Natalie Wolchover in Quanta.

So what is the connection between EFT and LSS? An effective field theory, as loyal readers know, is a way to describe what happens at low energies (or, equivalently, long wavelengths) without having a complete picture of what’s going on at higher energies. In particle physics, we can calculate processes in the Standard Model perfectly well without having a complete picture of grand unification or quantum gravity. It’s not that higher energies are unimportant, it’s just that all of their effects on low-energy physics can be summed up in their contributions to just a handful of measurable parameters.
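A standard textbook illustration of how this works (my own sketch, not specific to the post): take a light fermion ψ coupled to a heavy scalar Φ of mass M,

```latex
\mathcal{L} = \bar\psi\,(i\gamma^\mu\partial_\mu - m)\,\psi
  + \tfrac{1}{2}(\partial\Phi)^2 - \tfrac{1}{2}M^2\Phi^2
  - g\,\Phi\,\bar\psi\psi .
```

At energies far below M, the Φ equation of motion gives Φ ≈ −(g/M²) ψ̄ψ, and substituting back in yields

```latex
\mathcal{L}_{\rm eff} = \bar\psi\,(i\gamma^\mu\partial_\mu - m)\,\psi
  + \frac{g^2}{2M^2}\,(\bar\psi\psi)^2 + \mathcal{O}(1/M^4).
```

The heavy field has disappeared entirely; all memory of it survives in the single measurable coefficient g²/M² of a contact interaction, exactly the “handful of parameters” described above.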

In cosmology, we consider the evolution of LSS from tiny perturbations at early times to the splendor of galaxies and clusters that we see today. It’s really a story of particles — photons, atoms, dark matter particles — more than a field theory (although of course there’s an even deeper description in which everything is a field theory, but that’s far removed from cosmology). So the right tool is the Boltzmann equation — not the entropy formula that appears on his tombstone, but the equation that tells us how a distribution of particles evolves in phase space. However, the number of particles in the universe is very large indeed, so it’s the most obvious thing in the world to make an approximation by “smoothing” the particle distribution into an effective fluid. That fluid has a density and a velocity, but also has parameters like an effective speed of sound and viscosity. As Leonardo Senatore, one of the pioneers of this approach, says in Quanta, the viscosity of the universe is approximately equal to that of chocolate syrup.
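The “smoothing” step can be sketched in a few lines (a toy of my own with made-up parameters, not taken from the paper): deposit point particles onto a grid, then filter out modes shorter than a smoothing scale R, leaving the long-wavelength part that the effective fluid describes.

```python
import numpy as np

# Toy illustration of coarse-graining: turn a set of point particles into
# a smoothed density field on a grid. Box size, particle count, and the
# smoothing scale R are all made up for the example.

rng = np.random.default_rng(0)
L, n_grid = 100.0, 256                 # 1D box size and grid resolution
x = rng.uniform(0, L, size=20_000)     # particle positions

# Deposit particles on the grid, then form the density contrast delta.
counts, edges = np.histogram(x, bins=n_grid, range=(0, L))
delta = counts / counts.mean() - 1.0

# Smooth with a Gaussian window in Fourier space: modes with wavelength
# shorter than R are suppressed ("integrated out").
R = 5.0
k = 2 * np.pi * np.fft.rfftfreq(n_grid, d=L / n_grid)
window = np.exp(-0.5 * (k * R) ** 2)
delta_smooth = np.fft.irfft(np.fft.rfft(delta) * window, n=n_grid)

print(delta_smooth.std(), delta.std())  # smoothing suppresses fluctuations
```

Everything this filter throws away has to be accounted for somewhere, and that feedback of short-distance dynamics onto the smooth field is exactly what the effective parameters (sound speed, viscosity) encode.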

So the goal of the EFT of LSS program (which is still in its infancy, although there is an important prehistory) is to derive the correct theory of the effective cosmological fluid. That is, to determine how all of the complicated churning dynamics at the scales of galaxies and clusters feeds back onto what happens at larger distances where things are relatively smooth and well-behaved. It turns out that this is more than a fun thing for theorists to spend their time with; getting the EFT right lets us describe what happens even at some length scales that are formally “nonlinear,” and therefore would conventionally be thought of as inaccessible to anything but numerical simulations. I really think it’s the way forward for comparing theoretical predictions to the wave of precision data we are blessed with in cosmology.

Here is the abstract for the paper I wrote with Stefan and Jason:

A Consistent Effective Theory of Long-Wavelength Cosmological Perturbations
Sean M. Carroll, Stefan Leichenauer, Jason Pollack

Effective field theory provides a perturbative framework to study the evolution of cosmological large-scale structure. We investigate the underpinnings of this approach, and suggest new ways to compute correlation functions of cosmological observables. We find that, in contrast with quantum field theory, the appropriate effective theory of classical cosmological perturbations involves interactions that are nonlocal in time. We describe an alternative to the usual approach of smoothing the perturbations, based on a path-integral formulation of the renormalization group equations. This technique allows for improved handling of short-distance modes that are perturbatively generated by long-distance interactions.

As useful as the EFT of LSS approach is, our own contribution is mostly on the formalism side of things. (You will search in vain for any nice plots comparing predictions to data in our paper — but do check out the references.) We try to be especially careful in establishing the foundations of the approach, and along the way we show that it’s not really a “field” theory in the conventional sense, as there are interactions that are nonlocal in time (a result also found by Carrasco, Foreman, Green, and Senatore). This is a formal worry, but doesn’t necessarily mean that the theory is badly behaved; one just has to work a bit to understand the time-dependence of the effective coupling constants.

Here is a video from a physics colloquium I gave at NYU on our paper. A colloquium is intermediate in level between a public talk and a technical seminar, so there are some heavy equations at the end but the beginning is pretty motivational. Enjoy!

Colloquium October 24th, 2013 -- Effective Field Theory and Cosmological Large-Scale Structure


Decennial

Almost forgot again — the leap-year thing always gets me. But I’ve now officially been blogging for ten years. Over 2,000 posts, generating over 57,000 comments. I don’t have accurate stats because I’ve moved around a bit, but on the order of ten million visits. Thanks for coming!

Nostalgia buffs are free to check out the archives (by category or month) via buttons on the sidebar, or see the greatest hits page. Here are some of my personal favorites from each of the past ten years:

Here’s to the next decade!
