Math

The Biggest Ideas in the Universe | 14. Symmetry

Symmetry is kind of a big deal in physics — big enough that the magazine jointly published by the SLAC and Fermilab accelerator laboratories is simply called symmetry. Symmetry appears in a variety of contexts, but before we dive into them, we have to understand what “symmetry” actually means. Which is what we do in this video, where we explain the basic ideas of what mathematicians call “group theory.” By the end you’ll know exactly what is meant, for example, by “SU(3)xSU(2)xU(1).”

The Biggest Ideas in the Universe | 14. Symmetry

And here is the associated Q&A video:

The Biggest Ideas in the Universe | Q&A 14 - Symmetry
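As a tiny concrete taste of the notation: the "SU(2)" factor in SU(3)xSU(2)xU(1) stands for the group of 2x2 special unitary matrices. Here is a minimal Python sketch (my own illustration, not from the video; the angles are arbitrary) verifying that one such matrix satisfies the two defining conditions, unitarity and unit determinant.

```python
import numpy as np

# An illustrative element of SU(2), built from two arbitrary angles.
theta, phi = 0.7, 1.2
U = np.array([[np.cos(theta), -np.sin(theta) * np.exp(-1j * phi)],
              [np.sin(theta) * np.exp(1j * phi), np.cos(theta)]])

# "Unitary": U times its conjugate transpose is the identity.
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True

# "Special": the determinant equals 1 (for U(1) one would only demand |det| = 1).
print(np.isclose(np.linalg.det(U), 1.0))        # True
```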



Atiyah and the Fine-Structure Constant

Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.

(He has also proposed a proof of the Riemann hypothesis; I have zero insight to offer there.)

Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.

In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger, atoms would be smaller, and so on; conversely if it were smaller. It’s the number that tells us the overall strength of QED interactions between electrons and photons, as calculated by the simplest Feynman diagrams.

As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.

But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.

This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s an absolute knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should make us skeptical that he’s on the right track, but it’s conceivable.

More important, I think, is the fact that α isn’t really fundamental at all. The simple Feynman diagrams mentioned above are only the first contribution; for any given process there are also much more complicated ones, with extra virtual particles running in loops.

And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.

Most important, to my mind, is the fact that not only is α not fundamental; QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified Theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e²/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.

But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θW relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sinθW. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
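To make the chain of definitions concrete, here is a quick back-of-the-envelope sketch in Python (my own illustration, using rough textbook values of g and sin²θW near the Z-boson scale, where these numbers are usually quoted). The result comes out near the oft-quoted α(M_Z) ≈ 1/128 rather than the low-energy 1/137, which is itself a reminder that these quantities run with energy.

```python
import math

# Rough illustrative inputs, approximately valid at the Z-boson mass scale.
g = 0.652            # weak isospin coupling (approximate)
sin2_thetaW = 0.231  # sin^2 of the weak mixing angle (approximate)

e = g * math.sqrt(sin2_thetaW)   # elementary charge from e = g sin(theta_W)
alpha = e**2 / (4 * math.pi)     # fine-structure constant from alpha = e^2 / (4 pi)

print(f"e     ≈ {e:.3f}")
print(f"alpha ≈ {alpha:.5f} ≈ 1/{1/alpha:.0f}")  # roughly 1/128 at this scale, not 1/137
```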

Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.

These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula, it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.

Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.



Thanksgiving

This year we give thanks for an area of mathematics that has become completely indispensable to modern theoretical physics: Riemannian Geometry. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, and the Fourier Transform. Ten years of giving thanks!)

Now, the thing everyone has been giving thanks for over the last few days is Albert Einstein’s general theory of relativity, which by some measures was introduced to the world exactly one hundred years ago yesterday. But we don’t want to be everybody, and besides we’re a day late. So it makes sense to honor the epochal advance in mathematics that directly enabled Einstein’s epochal advance in our understanding of spacetime.

Highly popularized accounts of the history of non-Euclidean geometry often give short shrift to Riemann, for reasons I don’t quite understand. You know the basic story: Euclid showed that geometry could be axiomatized on the basis of a few simple postulates, but one of them (the infamous Fifth Postulate) seemed just a bit less natural than the others. That’s the parallel postulate, which has been employed by generations of high-school geometry teachers to torture their students by challenging them to “prove” it. (Mine did, anyway.)

It can’t be proved, and indeed it’s not even necessarily true. In the ordinary flat geometry of a tabletop, initially parallel lines remain parallel forever, and Euclidean geometry is the name of the game. But we can imagine surfaces on which initially parallel lines diverge, such as a saddle, or ones on which they begin to come together, such as a sphere. In those contexts it is appropriate to replace the parallel postulate with something else, and we end up with non-Euclidean geometry.


Historically, this was first carried out by the Hungarian mathematician János Bolyai and the Russian mathematician Nikolai Lobachevsky, both of whom developed the hyperbolic (saddle-shaped) form of the alternative theory. Actually, while Bolyai and Lobachevsky were the first to publish, much of the theory had previously been worked out by the great Carl Friedrich Gauss, who was an incredibly influential mathematician but not very good about getting his results into print.

The new geometry developed by Bolyai and Lobachevsky described what we would now call “spaces of constant negative curvature.” Such a space is curved, but in precisely the same way at every point; there is no difference between what’s happening at one point in the space and what’s happening anywhere else, just as had been the case for Euclid’s tabletop geometry.

Real geometries, as it takes only a moment to visualize, can be a lot more complicated than that. Surfaces or solids can twist and turn in all sorts of ways. Gauss thought about how to deal with this problem, and came up with some techniques that could characterize a two-dimensional curved surface embedded in a three-dimensional Euclidean space. Which is pretty great, but falls far short of the full generality that mathematicians are known to crave.

Fortunately Gauss had a brilliant and accomplished apprentice: his student Bernhard Riemann. (Riemann was supposed to be studying theology, but he became entranced by one of Gauss’s lectures, and never looked back.) In 1853, Riemann was coming up for Habilitation, a German degree that is even higher than the Ph.D. He suggested a number of possible dissertation topics to his advisor Gauss, who (so the story goes) chose the one that Riemann thought was the most boring: the foundations of geometry. The next year, he presented his paper, “On the hypotheses which underlie geometry,” which laid out what we now call Riemannian geometry.

With this one paper on a subject he professed not to be all that interested in, Riemann (who also made incredible contributions to analysis and number theory) provided everything you need to understand the geometry of a space of arbitrary numbers of dimensions, with an arbitrary amount of curvature at any point in the space. It was as if Bolyai and Lobachevsky had invented the abacus, Gauss came up with the pocket calculator, and Riemann had turned around and built a powerful supercomputer.

Like many great works of mathematics, a lot of new superstructure had to be built up along the way. A subtle but brilliant part of Riemann’s work is that he didn’t start with a larger space (like the three-dimensional almost-Euclidean world around us) and imagine smaller spaces embedded within it. Rather, he considered the intrinsic geometry of a space, or how it would look “from the inside,” whether or not there was any larger space at all.

Next, Riemann needed a tool to handle a simple but frustrating fact of life: “curvature” is not a single number, but a way of characterizing many questions one could possibly ask about the geometry of a space. What you need, really, are tensors, which gather a set of numbers together in one elegant mathematical package. Tensor analysis as such didn’t really exist at the time, not being fully developed until 1890, but Riemann was able to use some bits and pieces of the theory that had been developed by Gauss.

Finally and most importantly, Riemann grasped that all the facts about the geometry of a space could be encoded in a simple quantity: the distance along any curve we might want to draw through the space. He showed how that distance could be written in terms of a special tensor, called the metric. You give me a segment along a curve inside the space you’re interested in, and the metric lets me calculate how long it is. This simple object, Riemann showed, could ultimately be used to answer any query you might have about the shape of a space — the length of curves, of course, but also the area of surfaces and volume of regions, the shortest-distance path between two fixed points, where you go if you keep marching “forward” in the space, the sum of the angles inside a triangle, and so on.
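In modern notation (a compact summary, not how Riemann originally wrote it), the metric g_{\mu\nu} defines the infinitesimal distance, and the length of a curve x^\mu(\lambda) follows by integrating it along the curve:

ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu, \qquad L = \int \sqrt{g_{\mu\nu} \frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}}\, d\lambda.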

Unfortunately, the geometric information implied by the metric is only revealed when you follow how the metric changes along a curve or on some surface. What Riemann wanted was a single tensor that would tell you everything you needed to know about the curvature at each point in its own right, without having to consider curves or surfaces. So he showed how that could be done, by taking appropriate derivatives of the metric, giving us what we now call the Riemann curvature tensor. Here is the formula for it:

R^\rho_{\ \sigma\mu\nu} = \partial_\mu \Gamma^\rho_{\nu\sigma} - \partial_\nu \Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}

(The Γ’s, known as Christoffel symbols, are themselves built from derivatives of the metric.)

This isn’t the place to explain the whole thing, but I can recommend some spiffy lecture notes, including a very short version, or the longer and sexier textbook. From this he deduced several interesting features about curvature. For example, the intrinsic curvature of a one-dimensional space (a line or curve) is always precisely zero. Its extrinsic curvature — how it is embedded in some larger space — can be complicated, but to a tiny one-dimensional being, all spaces have the same geometry. For two-dimensional spaces there is a single function that characterizes the curvature at each point; in three dimensions you need six numbers, in four you need twenty, and it goes up from there.
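(For the record, that counting comes from the number of algebraically independent components of the Riemann curvature tensor in n dimensions,

\frac{n^2(n^2-1)}{12},

which gives 1 for n = 2, 6 for n = 3, and 20 for n = 4.)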

There were more developments in store for Riemannian geometry, of course, associated with names that are attached to various tensors and related symbols: Christoffel, Ricci, Levi-Civita, Cartan. But to a remarkable degree, when Albert Einstein needed the right mathematics to describe his new idea of dynamical spacetime, Riemann had bequeathed it to him in a plug-and-play form. Add the word “time” everywhere we’ve said “space,” introduce some annoying minus signs because time and space really aren’t precisely equivalent, and otherwise the geometry that Riemann invented is the same we use today to describe how the universe works.

Riemann died of tuberculosis before he reached the age of forty. He didn’t do bad for such a young guy; you know you’ve made it when you not only have a Wikipedia page for yourself, but a separate (long) Wikipedia page for the list of things named after you. We can all be thankful that Riemann’s genius allowed him to grasp the tricky geometry of curved spaces several decades before Einstein would put it to use in the most beautiful physical theory ever invented.




Thanksgiving

This year we give thanks for a technique that is central to both physics and mathematics: the Fourier transform. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, and Landauer’s Principle.)

Let’s say you want to locate a point in space — for simplicity, on a two-dimensional plane. You could choose a coordinate system (x, y), and then specify the values of those coordinates to pick out your point: (x, y) = (1, 3).


But someone else might want to locate the same point, but they want to use a different coordinate system. That’s fine; points are real, but coordinate systems are just convenient fictions. So your friend uses coordinates (u, v) instead of (x, y). Fortunately, you know the relationship between the two systems: in this case, it’s u = y+x, v = y-x. The new coordinates are rotated (and scaled) with respect to the old ones, and now the point is represented as (u, v) = (4, 2).
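For the concretely minded, here is the same change of coordinates written as a matrix acting on the point, in a little numpy sketch of my own (not part of the original post):

```python
import numpy as np

# The change of coordinates u = y + x, v = y - x, written as a matrix
# acting on the column vector (x, y).
M = np.array([[ 1, 1],    # u =  x + y
              [-1, 1]])   # v = -x + y

point_xy = np.array([1, 3])
point_uv = M @ point_xy
print(point_uv)   # [4 2] -- the same point, described in the new coordinates
```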

Fourier transforms are just a fancy version of changes of coordinates. The difference is that, instead of coordinates on a two-dimensional space, we’re talking about coordinates on an infinite-dimensional space: the space of all functions. (And for technical reasons, Fourier transforms naturally live in the world of complex functions, where the value of the function at any point is a complex number.)

Think of it this way. To specify some function f(x), we give the value of the function f for every value of the variable x. In principle, an infinite number of numbers. But deep down, it’s not that different from giving the location of our point in the plane, which was just two numbers. We can certainly imagine taking the information contained in f(x) and expressing it in a different way, by “rotating the axes.”

That’s what a Fourier transform is. It’s a way of specifying a function that, instead of telling you the value of the function at each point, tells you the amount of variation at each wavelength. Just as we have a formula for switching between (u, v) and (x, y), there are formulas for switching between a function f(x) and its Fourier transform f(ω):

f(\omega) = \frac{1}{\sqrt{2\pi}} \int dx f(x) e^{-i\omega x}
f(x) = \frac{1}{\sqrt{2\pi}} \int d\omega f(\omega) e^{i\omega x}.

Absorbing those formulas isn’t necessary to get the basic idea. If the function itself looks like a sine wave, it has a specific wavelength, and the Fourier transform is just a delta function (infinity at that particular wavelength, zero everywhere else). If the function is periodic but a bit more complicated, it might have just a few Fourier components.
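Here is a tiny numpy illustration of that statement (mine, not from the post): the discrete Fourier transform of a sampled sine wave is sharply peaked at the wave’s frequency, a finite-resolution stand-in for the delta function.

```python
import numpy as np

# Sample a pure sine wave and look at its (discrete) Fourier transform.
# With a single frequency present, essentially all the power lands in one bin.
N = 1024
x = np.linspace(0, 1, N, endpoint=False)
f = np.sin(2 * np.pi * 50 * x)          # 50 cycles across the interval

spectrum = np.abs(np.fft.rfft(f))
print(np.argmax(spectrum))              # 50 -- the peak sits at the input frequency
print(spectrum[50] / spectrum.sum())    # close to 1: that one bin dominates
```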

MIT researchers showing how sine waves can combine to make a square-ish wave.

In general, the Fourier transform f(ω) gives you “the amount of the original function that is periodic with period 2π/ω.” This is sometimes called the “frequency domain,” since there are obvious applications to signal processing, where we might want to take a signal whose intensity varies with time and pick out the relative strength of different frequencies. (Your eyes and ears do this automatically, when they decompose light into colors and sound into pitches. They’re just taking Fourier transforms.) Frequency, of course, is the inverse of wavelength, so it’s equally good to think of the Fourier transform as describing the “length domain.” A cosmologist who studies the large-scale distribution of galaxies will naturally take the Fourier transform of their positions to construct the power spectrum, revealing how much structure there is at different scales.


To my (biased) way of thinking, where Fourier transforms really come into their own is in quantum field theory. QFT tells us that the world is fundamentally made of waves, not particles, and it is extremely convenient to think about those waves by taking their Fourier transforms. (It is literally one of the first things one is told to do in any introduction to QFT.)

But it’s not just convenient, it’s a worldview-changing move. One way of characterizing Ken Wilson’s momentous achievement is to say “physics is organized by length scale.” Phenomena at high masses or energies are associated with short wavelengths, where our low-energy long-wavelength instruments cannot probe. (We need giant machines like the Large Hadron Collider to create high energies, because what we are really curious about are short distances.) But we can construct a perfectly good effective theory of just the wavelengths longer than a certain size — whatever size it is that our theoretical picture can describe. As physics progresses, we bring smaller and smaller length scales under the umbrella of our understanding.

Without Fourier transforms, this entire way of thinking would be inaccessible. We should be very thankful for them — as long as we use them wisely.

Credit: xkcd.

Note that Joseph Fourier, inventor of the transform, is not the same as Charles Fourier, utopian philosopher. Joseph, in addition to his work in math and physics, invented the idea of the greenhouse effect. Sadly that’s not something we should be thankful for right now.



Discovering Tesseracts

I still haven’t seen Interstellar yet, but here’s a great interview with Kip Thorne about the movie-making process and what he thinks of the final product. (For a very different view, see Phil Plait [update: now partly recanted].)

One of the things Kip talks about is that the film refers to the concept of a tesseract, which he thought was fun. A tesseract is a four-dimensional version of a cube; you can’t draw it faithfully in two dimensions, but with a little imagination you can get the idea from the picture on the right. Kip mentions that he first heard of the concept of a tesseract in George Gamow’s classic book One, Two, Three… Infinity. Which made me feel momentarily proud, because I remember reading about it there, too — and only later did I find out that many (presumably less sophisticated) people heard of it in Madeleine L’Engle’s equally classic book, A Wrinkle in Time.

But then I caught myself, because (1) it’s stupid to think that reading about something for the first time in a science book rather than a science fantasy is anything to be proud of, and (2) in reality I suspect I first heard about it in Robert Heinlein’s (classic!) short story, “–And He Built a Crooked House.” Which is just as fantastical as L’Engle’s book.

So — where did you first hear the word “tesseract”? A great excuse for a poll! Feel free to elaborate in the comments.



Poker Is a Game of Skill

Via the Seriously, Science? blog comes what looks like a pretty bad paper:

Is poker a game of skill or chance? A quasi-experimental study
Gerhard Meyer, Marc von Meduna, Tim Brosowski, Tobias Hayer

Due to intensive marketing and the rapid growth of online gambling, poker currently enjoys great popularity among large sections of the population. Although poker is legally a game of chance in most countries, some (particularly operators of private poker web sites) argue that it should be regarded as a game of skill or sport because the outcome of the game primarily depends on individual aptitude and skill. The available findings indicate that skill plays a meaningful role; however, serious methodological weaknesses and the absence of reliable information regarding the relative importance of chance and skill considerably limit the validity of extant research. Adopting a quasi-experimental approach, the present study examined the extent to which the influence of poker playing skill was more important than card distribution. Three average players and three experts sat down at a six-player table and played 60 computer-based hands of the poker variant “Texas Hold’em” for money. In each hand, one of the average players and one expert received (a) better-than-average cards (winner’s box), (b) average cards (neutral box) and (c) worse-than-average cards (loser’s box). The standardized manipulation of the card distribution controlled the factor of chance to determine differences in performance between the average and expert groups. Overall, 150 individuals participated in a “fixed-limit” game variant, and 150 individuals participated in a “no-limit” game variant. ANOVA results showed that experts did not outperform average players in terms of final cash balance…

(It’s a long abstract, I didn’t copy the whole thing.) The question “Is poker a game of skill or chance?” is a very important one, not least for legal reasons, as governments decide how to regulate the activity. However, while it’s an important question, it’s not actually an interesting one, since the answer is completely obvious: while chance is obviously an element, poker is a game of skill.

Note that chance is an element in many acknowledged games of skill, including things like baseball and basketball. (You’ve heard of “batting averages,” right?) But nobody worries about whether baseball is a game of skill, because there are obvious skill-based factors involved, like strength and hand-eye coordination. So let’s confine our attention to “decision games,” where all you do is sit down and make decisions about one thing or another. This includes games without a probabilistic component, like chess or go, but here we’re interested in games in which chance definitely enters, like poker or blackjack or Monopoly. Call these “probabilistic decision games.” (Presumably there is some accepted terminology for all these things, but I’m just making these terms up.)

So, when does a probabilistic decision game qualify as a “game of skill”? I suggest it does when the following criteria are met:

  1. There are different possible strategies a player could choose.
  2. Some strategies do better than others.
  3. The ideal “dominant strategy” is not known.

It seems perfectly obvious to me that any game fitting these criteria necessarily involves an element of skill — what’s the best strategy to use? It’s also obvious that poker certainly qualifies, as would Monopoly. Games like blackjack or craps do not, since the best possible strategy (or “least bad,” since these games are definite losers in the long run) is known. Among players using that strategy, there’s no more room for skill (outside card-counting or other forms of cheating).

Nevertheless, people continue to act like this is an interesting question. In the case of this new study, the methodology is pretty crappy, as dissected here. Most obviously, the sample size is laughably small. Each player played only sixty hands; that’s about two hours at a cardroom table, or maybe fifteen minutes or less at a fast online site. And any poker player knows that the variance in the game is quite large, even for the best players; true skill doesn’t show up until a much longer run than that.

More subtly, but worse, the game that was studied wasn’t really poker. If I’m understanding the paper correctly, the cards weren’t dealt randomly, but with pre-determined better-than-average/average/worse-than-average hands. This makes it easy to compare results from different occurrences of the experiment, but it’s not real poker! Crucially, it seems like the players didn’t know about this fake dealing. But one of the central elements of skill in poker is understanding the possible distribution of starting hands. Another element is getting to know your opponents over time, which this experiment doesn’t seem to have allowed for.

On Black Friday in 2011, government officials swept in and locked the accounts of players (including me) on online sites PokerStars and Full Tilt. Part of the reason was alleged corruption on the part of the owners of the sites, but part was because (under certain interpretations of the law) it’s illegal to play poker online in the US. Hopefully someday we’ll grow up and allow adults to place wagers with other adults in the privacy of their own computers.



Smooth Life

Chances are you’ve seen Conway’s Game of Life, the checkerboard cellular automaton featuring stable structures, replicators, and all sorts of cool designs. (Plenty of implementations available online.) It’s called “life” because the processes of movement and evolution bear some tangential resemblance to regular organic life, although there are plenty of other reasons to be interested — for example, you can construct a universal computer within the game.
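For readers who haven’t played with it, here is a minimal Python sketch of the original Life update rule, implemented by convolving the grid with a neighbor-counting kernel; it’s my own illustration, and the same neighbor-counting step is what SmoothLife, described next, replaces with a smooth filter over a whole region.

```python
import numpy as np
from scipy.signal import convolve2d

def life_step(grid):
    """One update of Conway's Game of Life on a 2D array of 0s and 1s."""
    # Count the eight neighbors of every cell (wrapping around the edges).
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbors = convolve2d(grid, kernel, mode="same", boundary="wrap")
    # A cell is alive next step if it has exactly 3 neighbors,
    # or if it is currently alive and has exactly 2 neighbors.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "glider", one of the classic moving structures.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):            # after 4 steps the glider has shifted one cell diagonally
    grid = life_step(grid)
print(grid)
```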

Now John Baez points us to a version called SmoothLife, in which the evolution looks dramatically more “biological.” (Technical paper by Stefan Rafler here, and a playlist with many videos is here.) Rather than just looking at the nearest neighbor sites on a square grid, SmoothLife integrates over a region in the vicinity of each point, with a specified filter function. As a result, everything looks, well, a lot smoother.

Generalized Conway Game of Life - SmoothLife3

While SmoothLife is undeniably more lifelike in appearance, I think the differences between these kinds of simulations and biological life are as important as the similarities. Conway’s original game supports an impressive variety of structures, but it’s not really robust; if you start with a random configuration, chances are good that it will settle down to something boring before too long. My uninformed suspicion is that this is partly due to the fact that cellular automata typically feature irreversible dynamics; you can’t recover the past state from the present. (There are many initial states that evolve to a completely empty grid, for example.) As a result, evolution is not ergodic, exploring a large section of the possible state space; instead, it settles down to some smaller region and stays there. There are some reversible automata, which are quite interesting. To really model actual biology, you would want an automaton that was fundamentally reversible, but in which the system you cared about was coupled to a “low entropy” environment. Don’t know if anyone has attempted anything like that (or whether it would turn out to be interesting).

While I’m giving away all my wonderful ideas, it might be fun to look at cellular automata on random lattices rather than strict rectilinear grids. If the nodes were connected to a random number of neighbors, you would at least avoid the rigid forty-five-degree-ness of the original Life, but it might be harder to think of interesting evolution rules. While we’re at it, we could imagine automata on random lattices that evolved (randomly!) with time. Then you’d essentially be doing automata in the presence of gravity, since the “spacetime” on which the dynamics occurred would be flexible. (Best of all if you could update the lattice in ways that depended on the states of the cells, so that matter would be affecting the geometry.)

Musings along these lines make me more sympathetic to the idea that we’re all living in a computer simulation.



Dara O Briain School of Hard Sums

This is an actual TV show in the UK (based on a Japanese program), broadcast on a channel called Dave. In it, Dara O Briain and mathematician Marcus du Sautoy, along with special comedy guests, take on math puzzles (and compete against school-aged math whizzes in the process).

Watch at least the first segment, to see Dara come up with a frikkin’ ingenious solution to a geometry problem.

Could there be a show like this broadcast on TV in the US? Of course not. We only have a thousand channels, there’s no room!



How Probability Works

From Barry Greenstein’s insightful poker book, Ace on the River:

Someone shows you a coin with a head and a tail on it. You watch him flip it ten times and all ten times it comes up heads. What is the probability that it will come up heads on the eleventh flip?

A novice gambler would tell you, “Tails is more likely than heads, since things have to even out and tails is due to come up.”

A math student would tell you, “We can’t predict the future from the past. The odds are still even.”

A professional gambler would say, “There must be something wrong with the coin or the way it is being flipped. I wouldn’t bet with the guy flipping it, but I’d bet someone else that heads will come up again.”

Yes I know the math student would really say “individual trials are uncorrelated,” not “we can’t predict the future from the past.” The lesson still holds.
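The professional gambler’s answer is basically Bayesian reasoning. Here is a toy Python calculation (my numbers, purely illustrative): even if you start out giving only a 1% prior probability to the coin or the flipping being rigged to always produce heads, ten straight heads make the rigged hypothesis the heavy favorite, and the predicted probability of an eleventh head ends up well above one half.

```python
# Toy Bayesian version of the professional gambler's reasoning.
# Prior: 1% chance the coin/flip is rigged to always land heads (illustrative number).
p_rigged = 0.01
p_fair = 0.99

# Likelihood of seeing ten heads in a row under each hypothesis.
like_rigged = 1.0
like_fair = 0.5 ** 10

# Posterior probability that the setup is rigged, after ten heads.
post_rigged = (p_rigged * like_rigged) / (p_rigged * like_rigged + p_fair * like_fair)

# Predicted probability that flip #11 comes up heads.
p_heads_next = post_rigged * 1.0 + (1 - post_rigged) * 0.5
print(f"P(rigged | 10 heads) ≈ {post_rigged:.2f}")   # about 0.91
print(f"P(heads on flip 11)  ≈ {p_heads_next:.2f}")  # about 0.96
```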

Happy Labor Day, everyone.

