You may have read here and there about the genteel discussions concerning the status of string theory within contemporary theoretical physics. We’ve discussed it on CV here, here, and even way back here, and Clifford has hosted a multipart discussion at Asymptotia (I, II, III, IV, V, VI).

We are now very happy to host a guest post by the man who wrote the book, as it were, on string theory — Joe Polchinski of the Kavli Institute for Theoretical Physics at UC Santa Barbara. Joe was asked by *American Scientist* to review Peter Woit’s *Not Even Wrong* and Lee Smolin’s *The Trouble With Physics*. Here is a slightly-modified version of the review, enhanced by footnotes that expand on some more technical points.

————————————————————————————

This is a review/response, written some time ago, that has just appeared in *American Scientist*. A few notes: 1) I did not choose the title, but at least I insisted on the question mark so as to invoke Hinchliffe’s rule (if the title is a question, the answer is ‘no’). 2) *Am. Sci.* edited my review for style; I have reverted figures of speech that I did not care for. 3) I have added footnotes on some key points. I look forward to comments; unfortunately I will be incommunicado on Dec. 8 and 9.

**All Strung Out?**

Joe Polchinski

**The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next.** Lee Smolin. xxiv + 392 pp. Houghton Mifflin, 2006. $26.

**Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law.** Peter Woit. xxi + 291 pp. Basic Books, 2006. $26.95.

The 1970’s were an exhilarating time in particle physics. After decades of effort, theoretical physicists had come to understand the weak and strong nuclear forces and had combined them with the electromagnetic force in the so-called Standard Model. Fresh from this success, they turned to the problem of finding a unified theory, a single principle that would account for all three of these forces and the properties of the various subatomic particles. Some investigators even sought to unify gravity with the other three forces and to resolve the problems that arise when gravity is combined with quantum theory.

The Standard Model is a quantum field theory, in which particles behave as mathematical points, but a small group of theorists explored the possibility that under enough magnification, particles would prove to be oscillating loops or strands of “string.” Although this seemingly odd idea attracted little attention at first, by 1984 it had become apparent that this approach was able to solve some key problems that otherwise seemed insurmountable. Rather suddenly, the attention of many of those working on unification shifted to string theory, and there it has stayed since.

Today, after more than 20 years of concentrated effort, what has been accomplished? What has string theory predicted? Lee Smolin, in *The Trouble With Physics*, and Peter Woit, in *Not Even Wrong*, argue that string theory has largely failed. What is worse, they contend, too many theorists continue to focus their efforts on this idea, monopolizing valuable scientific resources that should be shifted in more promising directions.

Smolin presents the rise and fall of string theory as a morality play. He accurately captures the excitement that theorists felt at the discovery of this unexpected and powerful new idea. But this story, however grippingly told, is more a work of drama than of history. Even the turning point, the first crack in the facade, is based on a myth: Smolin claims that string theorists had predicted that the energy of the vacuum — something often called dark energy — could not be positive and that the surprising 1998 discovery of the accelerating expansion of the universe (which implies the existence of positive dark energy) caused a hasty retreat. There was, in fact, no such prediction [1]. Although his book is for the most part thoroughly referenced, Smolin cites no source on this point. He quotes Edward Witten, but Witten made his comments in a very different context — and three years after the discovery of accelerating expansion. Indeed, the quotation is doubly taken out of context, because at the same meeting at which Witten spoke, his former student Eva Silverstein gave a solution to the problem about which he was so pessimistic. (Contrary to another myth, young string theorists are not so intimidated by their elders.)

As Smolin charts the fall of string theory, he presents further misconceptions. For example, he asserts that a certain key idea of string theory — something called Maldacena duality, the conjectured equivalence between a string theory defined on one space and a quantum field theory defined on the boundary of that space — makes no precise mathematical statements. It certainly does. These statements have been verified by a variety of methods, including computer simulations [2]. He also asserts that the evidence supports only a weak form of this conjecture, without quantum mechanics. In fact, Juan Maldacena’s theory is fully quantum mechanical [3].

A crucial principle, according to Smolin, is background independence — roughly speaking, consistency with Einstein’s insight that the shape of spacetime is dynamical — and Smolin repeatedly criticizes string theory for not having this property. Here he is mistaking an aspect of the mathematical language being used for one of the physics being described. New physical theories are often discovered using a mathematical language that is not the most suitable for them. This mismatch is not surprising, because one is trying to describe something that is different from anything in previous experience. For example, Einstein originally formulated special relativity in language that now seems clumsy, and it was mathematician Hermann Minkowski’s introduction of four-vectors and spacetime that made further progress possible.

In string theory it has always been clear that the physics is background-independent even if the language being used is not, and the search for a more suitable language continues. Indeed (as Smolin belatedly notes), Maldacena duality provides a solution to this problem, one that is unexpected and powerful. The solution is still not complete: One must pin down spacetime on the edges, but in the middle it is free to twist and even tear as it will, and black holes can form and then decay. This need to constrain the edges is connected with a property known as the holographic principle, which appears to be an essential feature of quantum gravity. Extending this principle to spaces with the edges free will require a major new insight. It is possible that the solution to this problem already exists among the alternative approaches that Smolin favors. But his principal candidate (loop quantum gravity) is, as yet, much more background-dependent than the current form of string theory [4].

Much of Smolin’s criticism of string theory deals with its lack of mathematical rigor. But physics is not mathematics. Physicists work by calculation, physical reasoning, modeling and cross-checking more than by proof, and what they can understand is generally much greater than what can be rigorously demonstrated. For example, quantum field theory, which underlies the Standard Model and much else in physics, is notoriously difficult to put on a rigorous foundation. Indeed, much of the interest that mathematicians have in physics, and in string theory in particular, arises not from its rigor but from the opposite: Physicists by their methods can obtain new results whose mathematical underpinning is not obvious. String theorists have a strong sense that they are discovering something, not inventing it. The process is sometimes messy, with unexpected twists and turns (not least the strings themselves!), and rigor is not the main tool.

Woit covers some of the same ground, although his interests are more centered on particle physics and on the connection with mathematics than on the nature of spacetime. His telling is more direct, but it is rather stuffed with detail and jargon, and his criticisms of string theory are simpler and somewhat repetitious.

A major point for Woit is that no one knows exactly what string theory is, because it is specified only through an infinite mathematical series whose sum is ill-defined. This assertion is partly true: With new physical theories there is often a long period between the first insight and the final mathematical form. For quantum field theory, the state of affairs that Woit describes lasted for half a century [5]. In string theory the situation is much better than he suggests, because for 10 years we have had tools (dualities) that give us in many cases a precise definition of the theory. These have led in turn to many new applications of string theory, such as to the quantum mechanics of black holes, and there are hints of a more complete understanding.

But what about the lack of predictions? This is the key question, for Woit, for Smolin and for string theory. Why have the last 20 years been a time of unusually little contact between theory and experiment? The problem is partly on the experimental side: The Standard Model works too well. It takes great time, ingenuity and resources to try to look beyond it, and often what is found is still the Standard Model.

A second challenge was set forth by Max Planck more than a century ago. When one combines the fundamental constants of special relativity, general relativity and quantum mechanics, one finds that they determine a distance scale at which these theories appear to come together: the Planck length of 10^{-33} centimeters. To put this number in perspective, one would have to magnify an atom a billion times to make it the size of a coffee cup, and one would have to magnify the Planck length a trillion trillion times to make it the size of an atom. If we could probe the Planck length directly, we would be able to see the strings and extra dimensions, or whatever else is lurking there, and be done with it. But we cannot do that, and so instead we must look for indirect evidence. And, as was the case with atomic theory, one cannot predict how long such a leap will take.
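Planck’s combination of constants is easy to check directly. The following sketch (an editor’s illustration, not part of the original review) computes the Planck length from ħ, G, and c, and verifies the two magnification analogies in the text:

```python
import math

# Fundamental constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G    = 6.67430e-11      # Newton's constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light, m/s

# Planck length: the unique length one can build from hbar, G, and c
l_planck_m  = math.sqrt(hbar * G / c**3)   # in meters
l_planck_cm = l_planck_m * 100             # in centimeters

print(f"Planck length: {l_planck_cm:.2e} cm")  # about 1.6e-33 cm

# The magnification analogy from the text:
atom = 1e-10  # typical atomic size, m (an assumed round number)
print(f"atom x 10^9   ~ {atom * 1e9:.2f} m  (coffee-cup scale)")
print(f"l_P  x 10^24  ~ {l_planck_m * 1e24:.1e} m  (atomic scale)")
```

Both magnifications land where the review says: a billionfold-enlarged atom is about a tenth of a meter, and a trillion-trillion-fold-enlarged Planck length is back at roughly atomic size.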

Smolin addresses the problem of the Planck length (“It is a lie,” he says). Indeed, Planck’s calculation applies to a worst-case scenario. String theorists have identified at least half a dozen ways that new physics might arise at accessible scales [6], and Smolin points to another in the theories that he favors [7], but each of these is a long shot. As far as experiment yet shows, Planck’s challenge stands.

Or it may be that string theory has already made a connection with observation — one of immense significance. Positive dark energy is the greatest experimental discovery of the past 30 years regarding the basic laws of physics. Its existence came as a surprise to almost everyone in physics and astronomy, except for a small number, including, in particular, Steven Weinberg.

In the 1980s, Weinberg had been trying to solve the long-standing puzzle of why the density of dark energy is not actually much greater. He argued that if the underlying theory had multiple vacua describing an enormous number of potential universes, it would not only explain why the density of dark energy is not high, but would also predict that it is not zero. Weinberg’s reasoning was contrary to all conventional wisdom, but remarkably his prediction was borne out by observation a decade later.

The connection between string theory and dark energy is still a subject of much controversy, and it may be that Weinberg got the right answer for the wrong reason. However, it may well turn out that he got the right answer for the right reason. If so, it will be one of the great insights in the history of physics, and the multivacuum property of string theory, seemingly one of its main challenges, will, in fact, be just what nature requires.

A second unexpected connection comes from studies carried out using the Relativistic Heavy Ion Collider, a particle accelerator at Brookhaven National Laboratory. This machine smashes together nuclei at high energy to produce a hot, strongly interacting plasma. Physicists have found that some of the properties of this plasma are better modeled (via duality) as a tiny black hole in a space with extra dimensions than as the expected clump of elementary particles in the usual four dimensions of spacetime. The prediction here is again not a sharp one, as the string model works much better than expected. String-theory skeptics could take the point of view that it is just a mathematical spinoff. However, one of the repeated lessons of physics is unity — nature uses a small number of principles in diverse ways. And so the quantum gravity that is manifesting itself in dual form at Brookhaven is likely to be the same one that operates everywhere else in the universe.

A further development over the past few years, as our understanding has deepened, has been the extensive study of the experimental consequences of specific kinds of string theory. Many of these make distinctive predictions for particle physics and cosmology. Most or all of these may well be falsified by experiment (which is, after all, the fate of most new models). The conclusive test of string theory may still be far off, but in the meantime, science proceeds through many small steps.

A central question for both Smolin and Woit is why so many very good scientists continue to work on an idea that has allegedly failed so badly. Both books offer explanations in terms of the sociology of science and the psychology of scientists. These forces do exist, and it is worth reflecting on their possible negative effects, but such influences are not as strong as these authors posit. String theorists include mavericks and contrarians, strong-willed individuals who have made major contributions — not just in string theory but in other parts of physics as well. The borders between string theory and other areas of physics are not closed, and theorists would emigrate if they did not believe that this was the most promising direction in which to invest their time and energies.

In fact, the flow of intellectual talent has been in the other direction: In recent years, leading scientists in particle phenomenology, inflationary cosmology and other fields have found ideas generated by string theory to be useful in their disciplines, just as mathematicians have long done. Many have begun to work with string theorists and have in turn contributed their perspectives to the subject and expanded the view of how string theory relates to nature.

This convergence on an unproven idea is remarkable. Again, it is worth taking a step back and reflecting on whether the net result is the best way to move science forward, and in particular whether young scientists are sufficiently encouraged to think about the big questions of science in new ways. These are important issues — and not simple ones. However, much of what Smolin and Woit attribute to sociology is really a difference of scientific judgment.

In the end, these books fail to capture much of the spirit and logic of string theory. For that, Brian Greene’s *The Elegant Universe* (first published in 1999) or Leonard Susskind’s *The Cosmic Landscape* (2005) do a better job. The interested reader might also look to particle-phenomenologist Lisa Randall’s *Warped Passages* (2005) and cosmologist Alexander Vilenkin’s *Many Worlds in One* (2006) for accounts by two scientists from other fields who have seen a growing convergence between string theory and their ideas about how the cosmos is put together.

*Joseph Polchinski is a professor of physics at the University of California, Santa Barbara, and a permanent member of the Kavli Institute for Theoretical Physics. He is the author of the two-volume text* String Theory *(Cambridge University Press, 1998).*

————————————————————————————

[1] It is obvious that there could have been no such prediction. From 1995–98, string theorists were discovering a host of new nonperturbative tools: dualities, branes, black hole entropy counting, matrix theory, and AdS/CFT duality. These were at the time studied almost exclusively in the context of supersymmetry. The problem of moduli stabilization, necessary for any nonsupersymmetric compactification (and positive energy density states are necessarily nonsupersymmetric), was left for the future; there were no general results or predictions. Page 154 refers to no-go theorems. There was a prominent no-go theorem two years later due to Maldacena and Nunez. However, not only the timing but also the physics is misstated. This paper makes several restrictive assumptions, and gives a long list of well-known papers, some as early as 1986, to which its results simply don’t apply. So this was never a broad constraint on string theory.

[2] On the string theory side, all calculations of anomalous dimensions and correlators represent precise statements about the strong coupling behavior of the gauge theory. However, it is argued on page 282 that the gauge theory is not known to exist. For the purpose of this discussion it is sharpest to focus on the gauge theories in 1+1 and 2+1 dimensions, which were shown by Itzhaki, Maldacena, Sonnenschein, and Yankielowicz to also give background-independent constructions of quantum gravity. These theories are superrenormalizable — their couplings go to zero as powers at short distance — so they are even better-defined than QCD, and one can calculate to arbitrary accuracy on the lattice. Even the supersymmetry is no problem: the lattice breaks it, but because of the superrenormalizability one can calculate explicitly the counterterms needed to restore the symmetry in the continuum limit, and so all the predictions of AdS/CFT can be checked algorithmically.

This has already been done, not by Monte Carlo but by using discrete light-cone quantization, which has the nice property of preserving SUSY and also not paying an extra numerical penalty for large *N*. The present results of Hiller, Pinsky, Salwen, and Trittmann are notable. The error bars are still large (but again, the issue is whether there are predictions in principle, not what can be done with today’s technology), but it does appear that the gauge theory Hilbert space, truncated to 3 x 10^{12} states, is in fact describing a graviton moving in a curved spacetime. Possibly less algorithmic, but numerically impressive, is the four-loop calculation of Bern, Czakon, Dixon, Kosower, and Smirnov: the Padé extrapolation to strong coupling agrees with the prediction of AdS/CFT to one or two percent.

[3] The gauge theory is a consistent and fully quantum mechanical theory, so if it contains classical gravity then it is by definition a solution to the problem of unifying Einstein’s theory with quantum mechanics. Moreover, the gravitational field must itself be quantized, because the duality relates gauge theory states to correctly quantized graviton states.

It is very difficult to define a ‘weak form’ of the duality which accounts for all the successful tests and is not actually the strong form. I am taking the definition here from page 144, which refers to classical supergravity as the lowest approximation, and talks about the duality being true only at this lowest order.

However, to get more background I have looked at the relevant papers by Arnsdorf and Smolin and by Smolin. The central arguments of these papers are wrong. One argument is that AdS/CFT duality cannot describe the bending of light by a gravitational field because there is a dual description with a fixed causal structure. If true, of course, this would invalidate the duality, but it is not. The gauge theory has a fixed causal structure, but signals do not move on null geodesics: there is *refraction*, so signals slow down and bend, and it is this that is dual to the bending of light by a gravitational field. Indeed, this duality between ordinary refraction and gravitational lensing is one of the fascinating maps between gravitation and nongravitational physics that are implied by the duality.

The second argument is that the tests of AdS/CFT duality are consistent with a weaker notion of ‘conformal induction,’ whereby a boundary theory can be defined from any field theory in AdS space by taking the limit as the correlators approach the boundary. This misses an important point. In general this procedure does not actually define a self-contained field theory on the boundary. Consider a signal in the bulk, which at time *t* is moving toward the boundary so as to reach it at a later time *t’*. According to the definition of conformal induction, the existence of this signal is not encoded in the boundary theory at time *t*, so that theory has no time evolution operator: the state at time *t* does not determine the state at time *t’*. In AdS/CFT the boundary is a true QFT, with a time evolution operator, and the signal is encoded even at time *t*. As a rough model of how this can work, imagine that every one-particle state in the bulk maps to a two-particle state in the boundary, where the separation of the particles plays the role of the radial coordinate: as they come close together the bulk particle moves to the boundary; as they separate it moves away. Something like this happens even in real QCD, in the contexts of color transparency and BFKL diffusion.

[4] I am referring here to the problem of the constraints. Until these are solved, one does not really have background independence: there is an enormous Hilbert space, most of which is unphysical. In AdS/CFT, not only the bulk spacetime but also the bulk diffeomorphism group are *emergent*: the CFT fields are completely invariant under the bulk diffeomorphisms (this is also what happens in the much more common phenomenon of emergent gauge symmetry). In effect the constraints are already solved. One of the lessons of duality is that only the physical information is common to the different descriptions, while the extra gauge structure is not; it is an artifact of language, not physics. (The CFT has its own SU(N) gauge invariance, but here it is straightforward to write down invariant objects.)

[5] I am counting from the mid-20’s, when the commutation relations for the electromagnetic field were first written down, to the mid-70’s when lattice gauge theory gave the first reasonably complete definition of a QFT, and when nonperturbative effects began to be understood systematically.

[6] The ones that came to mind were modifications of the gravitational force law on laboratory scales, strings, black holes, and extra dimensions at particle accelerators, cosmic superstrings, and trans-Planckian corrections to the CMB. One might also count more specific cosmic scenarios like DBI inflation, pre-Big-Bang cosmology, the ekpyrotic universe, and brane gas cosmologies.

[7] I have a question about violation of Lorentz invariance, perhaps this is the place to ask it. In the case of the four-Fermi theory of the weak interaction, one could have solved the UV problem in many ways by violating Lorentz invariance, but preservation of Lorentz invariance led almost uniquely to spontaneously broken Yang-Mills theory. Why weren’t Lorentz-breaking cutoffs tried? Because they would have spoiled the success of Lorentz invariance at low energies, through virtual effects. Now, the Standard Model has of order 25 renormalizable parameters, but it would have roughly as many more if Lorentz invariance were not imposed; most of the new LV parameters are known to be zero to high accuracy. So, if your UV theory of gravity violates Lorentz invariance, this should feed down into these low energy LV parameters through virtual effects. Does there exist a framework to calculate this effect? Has it been done?

Pingback: Masterclass - Asymptotia

The link to Smolin in footnote [3] is busted.

Fixed; thanks.

Pingback: Debate on String Theory « The Art of Equations

I just wanted to pass on the links to e-mail interviews of two Indian string theorists: Ashoke Sen and Sunil Mukhi.

Both the interviews appeared just three days ago in the blog of a freelance journalist.

“A crucial principle, according to Smolin, is background independence.”

Background independence is a fundamental principle, but not the only one. The key lessons from GR are background independence and locality, and the lessons from QFT are locality and QM in the sense of Fock. It is hence natural to assume that QG is based on three pillars:

* Background independence

* Locality

* Quantum theory

All of the major QG contenders fail to satisfy some of these desiderata:

* Perturbative string theory is not background independent.

* Holographic theories, e.g. AdS/CFT, are not local.

* LQG is neither local nor quantum in the sense of Fock quantization.

* ‘t Hooft’s Planck-scale determinism is obviously not quantum. However, it is remarkable that ‘t Hooft seems to be so concerned with locality that he is even willing to consider hidden variables.

A common objection is that local observables do not exist in quantum gravity. This can obviously not be correct, since 25 out of 26 consistent quantum gravities in 2D are local rather than holographic (no-ghost theorem).

Since 1 prediction would be better than 1000 words, let me focus on footnote [6].

It is a list of issues that allowed some contact with phenomenology, and maybe with cosmology. Surely there was a flow of leading phenomenologists and cosmologists towards string theory. It also caused a flow of young string theorists in the opposite direction. But the main effect of this contact was, in my view, that phenomenologists and cosmologists and maybe experimentalists could directly see what string theory can and cannot do, and started considering the possibility that “not even wrong” might turn out to be its epitaph.

In this never-ending and oftentimes heated debate, it is truly refreshing to read a review that actually focuses on the physics.

A very nicely balanced review, particularly at the end where the failure of both books to inspire the reader to study strings is deplored! Two brief points, though.

1. On Woit’s problem with the lack of rigor in the mathematics of string theory: when does abject speculation become physics? According to Heisenberg, ‘learned trash’ becomes ‘discovery’ at the time it is experimentally confirmed, and not before that time. The beautiful Einstein-Hilbert field equation of GR (1915) was widely promoted only in 1919, after being tested and having empirical evidence! In the same way, the beautiful Dirac equation was dismissed viciously in 1929 because it had one unphysical solution (antimatter) as well as predicting the electron! Heisenberg wrote:

“The saddest chapter of modern physics is and remains the Dirac theory. … I regard the Dirac theory … as learned trash which no one can take seriously.”

(M. Kaku, Einstein’s Cosmos, Phoenix, 2005, p 123.)

Yet after experimental confirmation he responded:

“I think that this discovery of anti-matter was perhaps the biggest jump of all the big jumps in our century.”

(Ibid, p 124.)

Popper is wrong about falsifiability because Archimedes didn’t make falsifiable predictions when he came up with a proof of the law of buoyancy (the facts were already known). Falsifiability is an incomplete criterion for science. You can also prove things by rigorous logic, even if you don’t make checkable predictions. The problem for Woit is that the rigour is missing from string theory.

2. Smolin’s point about loop quantum gravity in his actual detailed Perimeter Institute lectures (has Polchinski seen them?) is that loop quantum gravity is a bridge-building exercise between well-established QFT methods (path integrals) and the field equation of general relativity. By Ockham’s razor, if there is a way of getting quantum gravity without introducing a lot of needless, unpredictive complexity (M-theory), then science should choose the simplest theory which fits the empirical facts. The celebration of M-theory is way premature, and is drowning out every alternative with noise, particularly where there are alleged factual predictions (Tony Smith was censored off arXiv for one, and I’m censored for something completely different). I may be wrong over my ideas, but the evidence stands and won’t be investigated or checked until M-theory is defended less vigorously than now!

“[5] I am counting from the mid-20’s, when the commutation relations for the electromagnetic field were first written down, to the mid-70’s when lattice gauge theory gave the first reasonably complete definition of a QFT, and when nonperturbative effects began to be understood systematically.”

The problem with this historical comparison is that QFT had a great number of experimental successes, verifications, whatever you want to call them, along the way.

While there may well be experimental signatures accessible to us in the near future which would require string theory as the underlying explanation, there is no experiment that can rule out string theory. It is the string theorist who might decide to give up the quest on her own; there is nothing Nature can tell her that would convince her to do so. I think this is a situation with no precedent in science.

I think Woit might even withdraw his book if there was even one clear answer to the question – what experiment with such and such results would convince one that string theory does not apply to nature? Of course, the same question needs to be applied to all the other theories out there as well.

Also, my dumb question of the day – in the AdS/CFT correspondence, can the classical limit be taken on each side of the correspondence, while preserving the correspondence?

The question is “inspired” by the thoughts that

a. we undeniably live in a world with a classical limit, with classical gravity.

b. the AdS side, with gravity, to be relevant to anything, should have a limit with classical gravity.

c. what is the classical limit of the CFT side if it is QCD-like?

Arun, the classical limit on the gauge theory side is the limit in which N_c (the number of colors) goes to infinity. This is “classical” in a sense (quantum loops are suppressed), but it’s not the same limit one would ordinarily think of as classical Yang-Mills theory. A quantum theory can have different classical limits.
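For reference, an editor’s sketch of the standard ’t Hooft counting behind this statement: with the ’t Hooft coupling held fixed, connected diagrams of genus g scale as

```latex
\mathcal{A}_{\text{connected}} \;\sim\; N_c^{\,2-2g}\, f_g(\lambda),
\qquad \lambda \equiv g_{\mathrm{YM}}^2 N_c \ \text{held fixed},
```

so each additional “quantum” handle costs a factor of 1/N_c^2, while the full λ-dependence survives at leading order. This is why the N_c → ∞ limit is “classical” (loops suppressed) without being classical Yang-Mills.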


This paragraph clarifies the sociology and confirms the partition of the arXiv (cond-mat, hep-th, math-ph, gr-qc, hep-ph, quant-ph, etc.). hep-th is not about particle theory; if it were, it would be hep-ph. The criticisms from Smolin and Woit come from the belief in a relationship between hep-th and hep-ph, and perhaps even with gr-qc. That relationship could be claimed back in the seventies, even if by then other disciplines (cond-mat, for instance) had already decided to have their own theoretical teams.

Pingback: Live in your town- undergraduate arrogance « In reach

Wow. I “invented” brane gas cosmology for a science-fiction story based on what I had read in Zwiebach’s book (and learned in the class from which the book was born). That would have been early 2005, at the latest. I guess I was a little too late and didn’t read widely enough first.

Now, two challenges remain: work in a few hints about “winding modes” to exaggerate my competence even more, and find a publisher daft enough to put the thing on bookshelves.

Pingback: Lubos Motl's Reference Frame


I would like to contribute another argument (or another way of stating the same argument) against this “AdS/CFT cannot describe bending of light because of fixed causal background in CFT” argument which I trace back to St. Carlip (although in other circumstances): If that were true, you would get a different theory by doing classical GR but with the field redefinition g_mn = eta_mn + dg_mn. Obviously, this is just a change of names (if you keep all orders of dg_mn) but it looks like a theory propagating in the flat background metric eta_mn.

The resolution comes from the fact that you should only require causality for gauge-invariant observables. Those propagate according to the full metric, while dg_mn propagates in the background but is not observable. A similar example is electromagnetism in Coulomb gauge (div A = 0): there, the potential seems to propagate with infinite speed, but this is of course an artefact of the gauge choice.

Regarding background independence of the formulation of a theory, I would like to mention that we do not usually require this (or the analogous thing) for gauge theories: there, purists would require that the theory be expressed only in terms of gauge-invariant observables (without mentioning a background, i.e. a gauge around which A_m is expanded): -1/4 tr(F^2) is no good, as F is not gauge invariant. The “proper” way of doing it is in terms of Wilson lines, but this does not make the theory any prettier. The other advantage of the A’s is that they come from a linear (actually: affine) space, while the gauge-invariant, background-independent observables come from a much more complicated space.

The theory is the same; it is just so much more convenient to use the language of the A’s. So why not allow the analogous thing in the case of gravity?
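In equations (my notation, paraphrasing the point above):

```latex
g_{\mu\nu} = \eta_{\mu\nu} + \delta g_{\mu\nu},
\qquad
\text{causal cone of gauge-invariant observables:}\;\;
g^{\mu\nu} k_{\mu} k_{\nu} = 0,
\;\;\text{not}\;\;
\eta^{\mu\nu} k_{\mu} k_{\nu} = 0 .
```

That is, although $\delta g_{\mu\nu}$ formally propagates on the flat background, the light cone along which physical signals travel is read off from the full metric, just as the instantaneous pieces of the gauge potential carry no signals in electromagnetism.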

There seems to be an unending debate on this topic. The trouble may be that we cannot base gravitational phenomena on mass, because we have no idea of the inherent, essential characteristics of mass that would cause it either to attract other mass or to warp space. Similarly, the Scholastics who endlessly debated among themselves about epicycles etc. had no idea of the inherent, essential properties the earth would need in order to make all the objects in the sky rotate around it in a 24-hour period. So can gravitational phenomena be put on a more sound footing, as was done in the past with a theory built on an unsound premise?

I wanted to answer the question posed in [7].

In short, this is a significant problem for any theory that predicts Lorentz violation. There is no known phenomenon that could take strong Lorentz violations at high energies (i.e. a Lorentz-violating cutoff) and weaken them at low energies enough to be compatible with experimental bounds. Virtual particles with momenta near the cutoff make large contributions to low-scale Lorentz violation, which are suppressed only by powers of the coupling constant and possibly logarithms of ratios of scales.

The most explicit calculation of this that has been published is, I believe, in Collins, et al. Phys. Rev. Lett. 93, 191301 (2004). They take a Lorentz-violating cutoff and show how it affects one low-energy function. No one has published a more general analysis of how this works. (I myself have thought about doing it–taking a theory with no Lorentz violation in the Lagrangian but a Lorentz-violating regulator and seeing how the Lorentz violation shows up in the Lagrangian of the low-energy effective field theory–however, I have not gotten around to it.)
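Schematically (my hedged paraphrase of that mechanism, with $n^\mu$ a unit vector marking the preferred frame of the cutoff and $\xi$ a calculable coefficient), a frame-dependent cutoff feeds into dimension-4 operators such as

```latex
\delta\mathcal{L}_{\rm LV} \;\sim\; \xi\,\frac{g^{2}}{16\pi^{2}}\;
n^{\mu} n^{\nu}\, \bar\psi\, \gamma_{\mu}\, i\partial_{\nu}\, \psi ,
\qquad \xi = O(1),
```

which is suppressed only by the loop factor, with no compensating power of $E/\Lambda$. That is why mere loop suppression falls hopelessly short of the experimental bounds on low-energy Lorentz violation.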

That statement needs some caveats.

Lattice gauge theory is a Lorentz-violating cutoff. However, the residual discrete symmetry group, which is unbroken, is large enough to guarantee that all Lorentz-violating effects are in irrelevant operators that disappear in the continuum limit.

So you are, presumably, talking about Lorentz-violation severe enough that it can creep into relevant or marginal operators.
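A concrete textbook illustration (my example, not the commenter’s): with Wilson fermions, the leading operator allowed by the hypercubic group but not by Lorentz invariance is dimension six,

```latex
\delta S \;\sim\; a^{2} \int d^{4}x \; \sum_{\mu} \bar\psi\, \gamma_{\mu} D_{\mu}^{3}\, \psi ,
```

which is irrelevant and scales away as $a^{2}$ in the continuum limit, while the hypercubic symmetry (which permutes all four Euclidean axes) forbids any relevant or marginal operator that treats one axis differently from the others.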

In any case, LQG doesn’t “predict” Lorentz violation in 4 dimensions. There’s a naive (and totally misguided) hope that something like the 3D results of Freidel and Livine might carry over to 4D. Their result is that gravity coupled to matter in 3D is equivalent (upon integrating out gravity) to a matter theory on a non-commutative spacetime. The “trick” of integrating out gravity in 3D, where the gravitational field has no local degrees of freedom, does not (of course) carry over to 4D, where the gravitational field has massless local degrees of freedom.

Nevertheless, hope springs eternal …

Ah, yes, there is that caveat. Obviously, the low-energy effective theory must contain a renormalizable operator with the same symmetries as the Lorentz violation in the high-energy theory. Otherwise, there’s nowhere for the Lorentz violation to go in the low-energy theory. Roughly speaking, in four dimensions there are no renormalizable Lorentz-violating operators at low energy with more symmetry than a two-index symmetric tensor (roughly speaking, I say…), while I believe that a lattice regulator has the symmetries of a three-index tensor (plus higher-order tensors).

Collins, et al. actually argue that, because no forms of Lorentz violation that correspond to renormalizable operators at low energy are suppressed, we should only be looking experimentally at nonrenormalizable Lorentz-violating operators. Those operators are suppressed at low energies, but this irrelevance has nothing to do with their Lorentz-violating character; it’s just a product of their nonrenormalizability.

Back in 1983, Joe Polchinski (with Wise and Alvarez-Gaumé in Nucl. Phys. B221, 495-523) found, in the context of “Minimal Low-Energy Supergravity”, that

“… The renormalization group equation … tends to attract the top quark mass toward a fixed point of about 125 GeV

and

It also puts an upper bound of 195 GeV on the mass …”.

This was indeed a prediction of a heavy T-quark, and was in fact NOT in line with then-conventional expectations.

Then-conventional expectations were exemplified by the announcement in 1984 by Carlo Rubbia at CERN that CERN had discovered the T-quark and its mass was about 40 GeV (see for example Nature 310 (12 Jul 84) 97).

It was not until 1987 or so that experimental data began to indicate that the T-quark mass might be over 100 GeV, when ARGUS B-Bbar experiments showed an unexpectedly large mixing parameter.

When the T-quark was observed by Fermilab a few years later, it was found to be in the 125 – 195 GeV mass range predicted by Joe Polchinski.
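The infrared fixed-point mechanism behind that 1983 prediction can be illustrated with a toy one-loop integration (a schematic sketch with illustrative numbers, keeping only the top Yukawa and a frozen QCD coupling; it is not the full supergravity analysis of Alvarez-Gaumé, Polchinski and Wise):

```python
import math

# Toy one-loop RG for the top Yukawa coupling y, with t = ln(mu):
#   16 pi^2 dy/dt = y * (9/2 y^2 - 8 g3^2)
# Assumptions (illustrative only): the QCD coupling g3 is frozen at 1.2,
# electroweak couplings are dropped, and we run from ~1e16 GeV down to
# ~173 GeV.
def run_down(y_high, g3=1.2, t_span=math.log(1e16 / 173.0), steps=20000):
    y = y_high
    dt = -t_span / steps  # negative step: integrate from the high scale down
    for _ in range(steps):
        beta = y * (4.5 * y**2 - 8.0 * g3**2) / (16 * math.pi**2)
        y += beta * dt
    return y

# Widely separated high-scale Yukawas are funneled toward the IR fixed
# point y* = (4/3) * g3 = 1.6 (for frozen g3): the "attraction" at work.
low_scale_ys = [run_down(y0) for y0 in (0.5, 1.0, 2.0, 3.0)]
```

Here very different high-scale starting values end up close together at low energies, which is the sense in which the renormalization group “attracts” the top quark mass toward a fixed-point value largely independent of the unknown physics at the high scale.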

What puzzles me is that Joe Polchinski did not embrace his prediction as an indication that supergravity renormalization group models must contain important elements of truth, and then embark on a program of studying and modifying such models,

but

instead, he became a member of the herd that has been (and still is) working on conventional superstring theory, which AFAIK has not produced anything like as dramatic a prediction as his T-quark mass prediction.

Of course, it is possible that Joe Polchinski may have been discouraged by difficulties in showing finiteness of supergravity, but it is interesting that such finiteness is still an open question (for example, a UCLA workshop next week is about “Is N=8 Supergravity Finite?”).

It is also possible to contend that supergravity is just a part of superstring theory if it turns out to be a low-energy limit of something superstring-related like M-theory,

but

that position seems to me to be a disingenuous effort to claim for superstring theory the successes of a possibly competing theory, especially since the successful prediction that Joe Polchinski made back in 1983 was based on supergravity structures that were not then thought to be related to superstring-type structures, so superstring theory played no role even in inspiring the ideas used in the successful prediction.

Tony Smith

http://www.valdostamuseum.org/hamsmith/

I have a few scientific questions for Joe, Sean, or another of the string gurus here….

1. Why are positive energy density states necessarily nonsupersymmetric?

2. If Lorentz invariance is exact, then the fine-scale structure of spacetime cannot be latticelike, so what does happen at the Planck scale in theories that preserve this symmetry all the way down?

3. In the book, Lee argues that Weinberg’s anthropic prediction for lambda was way off, if all parameters (not just lambda) are allowed to vary over the ensemble. Is that a fair objection?

George

Pingback: Not Even Wrong » Blog Archive » Polchinski Review at American Scientist and Cosmic Variance