*Cross-posted to Sarah Kavassalis’s blog, The Language of Bad Physics.*

A few weeks ago there was a bit of media excitement about a somewhat surprising experimental result. Observations of quasar spectra indicated that the fine structure constant, the parameter in physics that describes the strength of electromagnetism, seems to be slightly different on one side of the universe than on the other. The preprint is here.

Remarkable, if true. The fine structure constant, usually denoted α, is one of the most basic parameters in all of physics, and it’s a big deal if it’s not really constant. But how likely is it to be true? This is the right place to trot out the old “extraordinary claims require extraordinary evidence” chestnut. It’s certainly an extraordinary claim, but the evidence doesn’t really live up to that standard. Maybe further observations will reveal truly extraordinary evidence, but there’s no reason to get excited quite yet.

Chad Orzel does a great job of explaining why an experimentalist should be skeptical of this result. It comes down to the figure below: a map of the observed quasars on the sky, where red indicates that the inferred value of α is slightly lower than expected, and blue indicates that it’s slightly higher. As Chad points out, the big red points are mostly circles, while the big blue points are mostly squares. That’s rather significant, because the two shapes represent different telescopes: circles are Keck data, while squares are from the VLT (“Very Large Telescope”). It’s slightly suspicious that most of the difference comes from data collected by different instruments.

But from a completely separate angle, there is also good reason for theorists to be skeptical, which is what I wanted to talk about. Theoretical considerations will always be trumped by rock-solid data, but when the data are less firm, it makes sense to take account of what we already think we know about how physics works.

The crucial idea here is the notion of a scalar field. That’s just fancy physics-speak for a quantity which takes on a unique numerical value at every point in spacetime. In quantum field theory, scalar fields lead to spinless particles; the Higgs field is a standard example. (Other particles, such as electrons and photons, arise from more complicated geometric objects — spinors and vectors, respectively.)

The fine structure constant is a scalar field. We don’t usually think of it that way, since we usually reserve the term “field” for something that actually varies from place to place rather than remaining constant, but strictly speaking it’s absolutely true. So, while it would be an amazing and Nobel-worthy result to show that the fine structure constant were varying, it wouldn’t be hard to fit it into the known structure of quantum field theory; you just take a scalar field that is traditionally thought of as constant and allow it to vary from place to place and time to time.

That’s not the whole story, of course. When a field varies from point to point, those variations carry energy. Think of pulling a spring, or twisting a piece of metal. For a scalar field, there are three important contributions to the energy: kinetic energy from the field varying in time, gradient energy from the field varying in space, and potential energy associated with the value of the field at every point, unrelated to how it is changing.
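Schematically, those three contributions are just the terms in the standard textbook expression for the energy density of a scalar field φ:

```latex
\rho \;=\; \underbrace{\tfrac{1}{2}\,\dot{\phi}^{2}}_{\text{kinetic}}
\;+\; \underbrace{\tfrac{1}{2}\,(\nabla\phi)^{2}}_{\text{gradient}}
\;+\; \underbrace{V(\phi)}_{\text{potential}}
```

If the field sits at a constant value everywhere, the first two terms vanish and only the potential is left over.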

For the fine structure constant, the observations imply that it changes by only a very tiny bit from one end of the universe to the other. So we really wouldn’t expect the gradient energy to be very large, and there’s correspondingly no reason to expect the kinetic energy to matter much.

The potential energy is a different matter. The potential is similar to the familiar example of a ball rolling on a hill; how steep the potential is near its minimum is related to the *mass* of the field. For most scalar fields, like the Higgs field, the potential is extremely steep; this means that if you displace the field from the minimum of its potential by just a bit, it will tend to immediately roll back down. The Higgs is quite massive.
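In equations (a standard expansion, not specific to any particular model): near a minimum φ₀ the potential is approximately quadratic, and its curvature there is the squared mass of the field.

```latex
V(\phi) \;\approx\; V(\phi_{0}) \;+\; \tfrac{1}{2}\,m^{2}\,(\phi - \phi_{0})^{2},
\qquad m^{2} \equiv V''(\phi_{0})
```

A steep potential means a large m², and displacements from the minimum oscillate back quickly; a nearly flat potential means a tiny mass, and the field can wander.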

We don’t know *a priori* what the potential should look like; specifying it is part of defining the theory. But quantum field theory gives us clues. At heart, the world is quantum, not classical; the “value” of the scalar field is actually the expectation value of a quantum operator. And such an operator gets contributions from the intrinsic vibrations of all the other fields that it couples to — in this case, every kind of charged particle in the universe. What we actually observe is not the “bare” form of the potential, but the renormalized value, which takes into account the accumulated effects of various forms of virtual particles popping in and out of the quantum vacuum.

The basic effect of renormalization on a scalar field potential is easy to summarize: it makes the mass large. So, if you didn’t know any better, you would expect the potential to be as steep as it could possibly be — probably up near the Planck scale. The Higgs boson probably has a mass of order a hundred times the mass of a proton, which sounds large — but it’s actually a big mystery why it isn’t enormously larger. That’s the hierarchy problem of particle physics.
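The schematic estimate behind that statement (a standard back-of-the-envelope, not a calculation from the paper): a scalar coupled with strength g to particles at some high scale Λ picks up a quantum correction to its squared mass of order

```latex
\delta m^{2} \;\sim\; \frac{g^{2}}{16\pi^{2}}\,\Lambda^{2}
```

so unless something conspires to cancel it, the renormalized mass gets dragged up toward the highest scale in the problem.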

So what about our friend the fine structure constant? If these observations are correct, the field would have to have an extremely tiny mass — otherwise it wouldn’t vary smoothly over the universe, it would just slosh harmlessly around the bottom of its potential. Plugging in numbers, we find that the mass has to be something like 10^{-42} GeV or less, where 1 GeV is the mass of the proton. In other words: extremely, mind-bogglingly small.
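As a rough back-of-the-envelope check (my own numbers, using standard values of the Hubble constant and ħ, not figures from the paper): a field that varies smoothly across the observable universe must have a mass no larger than roughly ħ times the Hubble rate.

```python
# Sketch: upper bound on the scalar mass, m <~ hbar * H0,
# for a field varying coherently over the Hubble scale.
H0_km_s_Mpc = 70.0             # assumed Hubble constant, km/s/Mpc
Mpc_in_km = 3.0857e19          # kilometers per megaparsec
H0 = H0_km_s_Mpc / Mpc_in_km   # Hubble rate in 1/s
hbar_GeV_s = 6.582e-25         # reduced Planck constant in GeV * s
m_max_GeV = hbar_GeV_s * H0    # comes out around 1.5e-42 GeV
print(m_max_GeV)
```

That is where the “10^{-42} GeV or less” figure comes from: the Hubble rate, expressed as an energy.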

But there’s no known reason for the mass of the scalar field underlying the fine structure constant to be anywhere near that small. This was established in some detail by Banks, Dine, and Douglas. They affirmed our intuition that a tiny change in the fine structure constant should be associated with a huge change in potential energy.

Now, there are loopholes — there are always loopholes. In this case, you could possibly prevent those quantum fluctuations from renormalizing your scalar-field potential simply by shielding the field from interactions with other fields. That is, you can impose a symmetry that forbids the field from coupling to other forms of matter, or only lets it couple in certain very precise ways; then you could at least imagine keeping the mass small. That’s essentially the strategy behind the supersymmetric solution to the hierarchy problem.

Problem is, that route is a complete failure when we turn to the fine structure constant, for a very basic reason: we can’t prevent it from coupling to other fields, because it *is* the parameter that governs the strength of electromagnetism! So like it or not, it will couple to the electromagnetic field and all charged particles in nature. I talked about this in one of my own papers from a few years ago. I was thinking about time-dependent scalars, not spatially-varying ones, but the principles are precisely the same.

That’s why theorists are skeptical of this claimed result. Not that it’s impossible; if the data stand up, it will present a serious challenge to our theoretical prejudices, but that will doubtless goad theorists into being more clever than usual in trying to explain it. Rather, the point is that we have good reasons to suspect that the fine structure constant really is constant; it’s not just a fifty-fifty kind of choice. And given those good reasons, we need really good data to change our minds. That’s not what we have yet — but what we have is certainly more than enough motivation to keep searching.

Very nice explanation that describes the nature of alpha in a way I hadn’t encountered before. What are the quanta of alpha in the current SM, or are they simply ignored (or, rather, trivial) given that in the SM it’s a true constant?

There aren’t any quanta, because there aren’t different energy levels (if it’s strictly constant). If it varies, there will be ultralight spinless bosons, which can mediate long-range forces.
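The connection between mass and range is the standard Yukawa form: a boson of mass m mediates a potential

```latex
V(r) \;\propto\; \frac{e^{-m r}}{r} \qquad (\hbar = c = 1)
```

so the range of the force is roughly 1/m, and an ultralight boson gives an essentially long-range force.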


Yeah, the excitement in the media over this result really isn’t reflected in the physics community. It looks odd that the SPIRES entry has a prominent link to an Economist article, which I’ve never noticed on SPIRES before, and just 4 citations (though admittedly it’s still very new). Not that I’m really criticising the media for getting excited; it is a new result and, as you say, would be a huge event if confirmed.

Sean, that is the best explanation of a very complicated topic to a non-expert I’ve ever read. Bravo! It’s good writing, of course, but it also provides digressions that add material for those who understand them, links (both hyper and conceptual) to important topics, and a remarkably clear explanation. One super-duper part:

“you can impose a symmetry that forbids the field from coupling to other forms of matter … That’s essentially the strategy behind the supersymmetric solution to the hierarchy problem.”

Now I understand supersymmetry! Thanks.

More bad statistics. If you carefully follow the statistics in the paper, the logic runs as follows:

If we assume that alpha depends on distance, then we show that the dependence has a statistically significant value of …

What is completely unmentioned is the simpler test of:

If we have the null hypothesis that alpha does not depend on distance, then we cannot reject the null hypothesis.

This boils down to a very common misinterpretation of regression results: It is popularly assumed that statistically significant regression coefficients indicate a relationship is present in the data. This is not correct. Every regression technique assumes that a correlation exists a priori in the data. The statistical confidence only indicates the values of the coefficients for which you have some degree of confidence, again assuming a priori a correlation exists in the data. This is precisely how the mathematical theory of regression is developed, there is no alternative interpretation.

Any statement involving correlation coefficients should always be rephrased to read: If we assume a correlation exists in our data, then we find the value of the correlation is X with confidence Y.
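To illustrate the null-hypothesis point, here is a toy simulation sketch (my own example, not the paper’s analysis; it assumes numpy and scipy are available): generate data with no dependence at all, fit a line anyway, and count how often the slope comes out “statistically significant.”

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_points = 2000, 50
x = np.linspace(0.0, 1.0, n_points)

false_positives = 0
for _ in range(n_trials):
    y = rng.normal(size=n_points)   # null hypothesis: y has no dependence on x
    fit = stats.linregress(x, y)    # fit a slope anyway
    if fit.pvalue < 0.05:           # "statistically significant" slope
        false_positives += 1

rate = false_positives / n_trials   # close to 0.05 under the null
```

A properly calibrated test flags about 5% of pure-noise datasets at the 0.05 level; the question is whether an analysis ever explicitly compares its fitted model against the no-dependence case.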

P.S. Experimentally this boils down to having the model for the dependence of alpha assumed in the paper not properly encapsulating the null hypothesis of no dependence. In particular, as pointed out earlier, the variance in the null hypothesis may very well be much larger than what is assumed in the model.

Interesting post. I know that alpha is related to the coupling constant e in QED. So how would the Feynman diagrams for QED change with this new particle? Just an extra vertex with two fermions, a photon, and a fine structure particle? Or would there be a vertex with multiple fine structure particles?

I understand this probably doesn’t describe the real world, but I’m curious nonetheless.


Hi TimG,

It would not change QED that much.

Just like we have the masses of the fermions coming from a VEV of the Higgs, we would still write the same Feynman diagrams except with a VEV and coupling terms.

The important thing is — just like the Higgs — if I claim that a fermion gets mass from

y V psibar psi

then the only way to get a V (= the Higgs VEV) into the equation is via Phi = V + h

In practical terms if your mass term is

m psibar psi = y V psibar psi

that tells you that there is a term

y Phi psibar psi = y (V + h) psibar psi = y V psibar psi + y h psibar psi

which is just

“m” psibar psi + y h psibar psi

So the mass with the Higgs is still the same, but you have predicted a Higgs-fermion-fermion interaction with a specific amplitude y ( = “m” / V). So you know you have a whole set of Feynman rules every time you would have used a mass.

With alpha being the VEV of a scalar field, every time we saw a field that coupled to alpha we would also predict an additional Feynman rule with related couplings of all the involved particles to the new scalar field.

http://www.worldsci.org/pdf/ebooks/TransDimensionalUnifiedFieldTheory8.09.pdf

Presumably, however, the new interaction would be non-renormalizable, no?

@James #11:

Probably, if the interaction term is something like phi psibar gamma^mu A_mu psi. But I wouldn’t be surprised if someone clever could find a way to build that term as a low-energy limit of something renormalizable (though I don’t see an obvious way to do that without making the photon a condensate, which would mess up electroweak symmetry breaking).
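A quick dimension count (standard power counting in four spacetime dimensions, ℏ = c = 1) shows why that term is non-renormalizable:

```latex
[\phi] = 1,\quad [\bar{\psi}\psi] = 3,\quad [A_{\mu}] = 1
\;\;\Longrightarrow\;\;
[\phi\,\bar{\psi}\gamma^{\mu}A_{\mu}\psi] = 5 > 4
```

so the coefficient of the operator must carry mass dimension −1, i.e., a 1/M suppression by some heavy scale.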

The initial result on measurement of a putative change in the fine structure constant with time has been around for quite a while (papers by Webb, Murphy, etc., on the Keck/HIRES data).

The $64,000 question has always been, would the exact same analysis of the VLT/UVES data yield an equivalent result? Systematics in the fine details of the wavelength calibration of the two spectrographs would probably be different, so this is a useful test.

The paper answers that question. The two datasets do not agree on the evolution of the fine structure constant. This is the point at which most experimentalists, or observers, will get off this particular bus. The authors put another interpretation on the data by suggesting a spatial variation. The interpretation of this choice is left as an exercise for the reader.

I guess my question really is about getting at another question, less obvious than the one I posed: Where does it end? So the coupling is determined by another field with its own particle, if it varies. I guess then that must have its own coupling. Well, what if that varies? What if there’s no such thing as a constant? I’m guessing there has to be an absolute limit to this regression for theoretical reasons, but maybe I’m wrong, and however poorly-motivated such an idea might be, it can’t be ruled out absolutely.

An important rule in life: any time an observational astronomical result involving incredibly subtle effects is published in PhysRev Letters, it’s probably wrong.

@Bruce That’s both hilarious and very accurate. We should maybe call it Bruce’s Law!

Phys Rev Letters enforces rather high standards for acceptance in more “classical” areas of physics, but from time to time they seem to let junk like this get past their filters.

There are three excellent reasons why alpha is constant:

1. The Cosmological Constant (w= -1), is THE best fit to over a decade’s worth of data.

2. Measurements of delta-alpha/alpha have not agreed, after a decade’s worth of data.

3. The CC is precisely given in terms of the fine structure constant & the mass of the electron as,

lp^2 Lambda = (lp/R)^6 => Lambda = 1.362 × 10^-52 m^-2, or 4.09 GeV/m^3, where

R = alpha × (electron Compton wavelength), and lp is the Planck length.

Thus the mild-mannered electron would seem to gauge the Hubble acceleration of the universe through QED.

I believe there is another very interesting recent development concerning the fine structure constant. I have derived it from a purely deductive line of reasoning using a dimensional method. This is something that physics has been trying to do for many years. You can view this derivation at http://www.vixra.org/abs/1008.0007. It vindicates Eddington’s view that such a derivation might be possible. It means that one must add two more variables to the list of Planck’s system of natural units.

I know I shouldn’t bless vixra with traffic, but sometimes it’s worth it for a glimpse into the mind of the bizarre.

Sean, out of curiosity, suppose one of the loopholes applied and alpha did vary. What would be the implied omega_alpha from its variations?

George


Overview article from physorg.com briefly describes comments by both Sean and Chad Orzel.

http://www.physorg.com/news/2010-10-evidence-varying-fine-structure-constant.html

Very nicely put:

“Theoretical considerations will always be trumped by rock-solid data, but when the data are less firm, it makes sense to take account of what we already think we know about how physics works.”

I’d never thought about this with such clarity. Does the theory overturn the data, or does the data overturn the theory? It depends upon your assessment of which is more trustworthy. There are general heuristics to govern such an assessment. However, I’m not sure the heuristics themselves are sufficiently mature to produce solid guidelines.

As in, “the situation is such-and-such. As such, the best fit theory is more reliable and trumps contradictory data”. An unambiguous, truly scientific decision making process.

I should say that I’m not saying that PRL is anything other than an outstanding physics journal – just that the editors are not astronomers, and that even if they do select reviewers who are (does PRL use reviewers who haven’t published in the journal themselves?) the odds of them knowing exactly the right referee to spot observational / instrumental errors are lower than they would be for an astronomy journal.