The State of the Early Universe

Well hello, blog. It’s been too long! Feels good to be back.

The big cosmological excitement this week was the announcement of new cosmic microwave background measurements. These include a big release of new papers from the Planck satellite, as well as a joint polarization analysis combining data from BICEP2, the Keck array, and Planck.

Polarization measurements from Planck superimposed on CMB temperature anisotropies. From the Planckoscope, h/t Bob McNees and Raquel Ribeiro.

The good news is: we understand the current universe pretty darn well! So much so, in fact, that even an amazingly high-precision instrument such as Planck has a hard time discovering truly new and surprising things about cosmology. Hence, the Planck press releases chose to highlight the finding that the earliest stars formed about 0.1 billion years later than had previously been thought. Which is an awesome piece of science, but doesn’t quite rise to the level of excitement that other possible discoveries might have reached.

Power spectrum of CMB temperature fluctuations, from Planck. Now that is some agreement between theory and experiment!

For example, the possibility that we had seen primordial gravitational waves from inflation, as the original announcement of the BICEP2 results suggested back in March. If you’ll remember, the polarization of the CMB can be mathematically decomposed into “E-modes,” which look like gradients and arise naturally from the perturbations in density that we all know and love, and “B-modes,” which look like curls and are not produced (in substantial amounts) from density perturbations. They could be produced by gravitational waves, which in turn could be generated during cosmic inflation — so finding them is a very big deal, indeed.

A big deal that apparently hasn’t happened. As has been suspected for a while now, while BICEP2 did detect B-modes, they seem to have been generated by dust in our galaxy, rather than by gravitational waves during inflation. That is the pretty definitive conclusion from the new Planck/BICEP2/Keck joint analysis.

And therefore, what we had hoped was a detection of primordial gravitational waves now turns into a less-thrilling (but equally scientifically crucial) upper limit. Here’s one way of looking at the situation now. On the horizontal axis we have n_s, the “tilt” in the power spectrum of perturbations, i.e. the variation in the amplitude of those perturbations on different distances across space. And on the vertical axis we have r, the ratio of the gravitational waves to the ordinary density perturbations. The original BICEP2 interpretation was that we had discovered r = 0.2; now we see that r is less than 0.15, probably less than 0.10, depending on which pieces of information you combine to get your constraint. No sign that it’s anything other than zero.

Current constraints on the “tilt” of the primordial perturbations (horizontal axis) and the contribution from gravitational waves (vertical axis).

So what have we learned? Here are some take-away messages.


We Are All Machines That Think

My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”


Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.


Dark Matter, Explained

If you’ve ever wondered about dark matter, or been asked puzzled questions about it by your friends, now you have something to point to: this charming video by 11-year-old Lucas Belz-Koeling. (Hat tip Sir Harry Kroto.)

The title references “Draw My Life style,” which is (the internet informs me) a label given to this kind of fast-motion photography of someone drawing on a white board.

You go, Lucas. I doubt I would have been doing anything quite this good at that age.


A Simple Form of Poker “Essentially” Solved

You know it’s a good day when there are refereed articles in Science about poker. (Enthusiasm slightly dampened by the article being behind a paywall, but some details here.)

Poker, of course, is a game of incomplete information. You don’t know your opponent’s cards, they don’t know yours. Part of your goal should be to keep it that way: you don’t want to give away information that would let your opponent figure out what you have.

As a result, the best way to play poker (against a competent opponent) is to use a mixed strategy: in any given situation, you want to have different probabilities for taking various actions, rather than a deterministic assignment of the best thing to do. If, for example, you always raise with certain starting hands, and always call with others, an attentive player will figure that out, and thereby gain a great deal of information about your hand. It’s much better to sometimes play weak hands as if they are strong (bluffing) and strong hands as if they are weak (slow-playing). The question is: how often should you be doing that?

Now researchers at a University of Alberta group that studies computerized poker have offered an “essentially” perfect strategy for a very simple form of poker: Heads-Up Limit Hold’em. In Hold’em, each player has two “hole” cards face down, and there are five “board” cards face-up in the middle of the table; your hand is the best five-card combination you can form from your hole cards and the board. “Heads-up” means that only two players are playing (much simpler than a multi-player game), and “limit” means that any bet comes in a single pre-specified amount (much simpler than “no-limit,” where you can bet anything from a fixed minimum up to the size of your stack or your opponent’s, whichever is smaller).

A simple game, but not very simple. Bets occur after each player gets their hole cards, and again after three cards (the “flop”) are put on the board, again after a fourth card (the turn), and finally after the last board card (the river) is revealed. If one player bets, the other can raise, and then the initial bettor can re-raise, up to a number of bets (typically four) that “caps” the betting.


So a finite number of things can possibly happen, which makes the game amenable to computer analysis. But it’s still a large number. There are about 3×10¹⁷ “states” that one can reach in the game, where a “state” is defined by a certain number of bets having been made as well as the configuration of cards that have already been dealt. Not easy to analyze! Fortunately (or not), as a player with incomplete information you won’t be able to distinguish between all of those states — i.e. you don’t know your opponent’s hole cards. So it turns out that there are about 3×10¹⁴ distinct “decision points” from which a player might end up having to act.

So all you need to do is: for each of those 300 trillion possibilities, assign the best possible mixed strategy — your probability to bet/check if there hasn’t already been a bet, fold/call/raise if there has — and act accordingly. Hey, nobody ever said being a professional poker player would be easy. (As you might know, human beings are very bad at randomness, so many professionals use the second hand on a wristwatch to generate pseudo-random numbers and guide their actions.)
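To make “act accordingly” concrete, here is a minimal sketch in Python of what playing a mixed strategy at a single decision point looks like. The probabilities are made up for illustration; they are not taken from the actual solution.

import random

# Hypothetical mixed strategy at one decision point, facing a bet.
# The numbers here are illustrative only; the real solution stores
# probabilities like these for roughly 3x10^14 decision points.
strategy = {"fold": 0.15, "call": 0.55, "raise": 0.30}

def choose_action(strategy):
    """Draw an action at random, weighted by the strategy's probabilities."""
    actions = list(strategy)
    weights = [strategy[a] for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

print(choose_action(strategy))  # "call" about 55% of the time, and so on

The sampling is the easy part; the hard part is knowing the right probabilities at every one of those 300 trillion decision points.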

Nobody is going to do that, of course.


Life Is the Flame of a Candle

Last October I was privileged to be awarded the Emperor Has No Clothes award from the Freedom From Religion Foundation. The physical trophy consists of the dashing statuette here on the right, presumably the titular Emperor. It’s made by the same company that makes the Academy Award trophies. (Whenever I run into Meryl Streep, she just won’t shut up about how her Oscars are produced by the same company that does the Emperor’s New Clothes award.)

Part of the award-winning is the presentation of a short speech, and I wasn’t sure what to talk about. There are only so many things I have to say, but it’s boring to talk about the same stuff over and over again. More importantly, I have no real interest in giving religion-bashing talks; I care a lot more about doing the hard and constructive work of exploring the consequences of naturalism.

So I decided on a cheerful topic: Death and Physics. I talked about how modern science gives us very good reasons to believe (not a proof, never a proof) that there is no such thing as an afterlife. Life is a process, not a substance, and it’s a process that begins, proceeds along for a while, and comes to an end. Certainly something I’ve said before, e.g. in my article on Physics and the Immortality of the Soul, and in the recent Afterlife Debate, but I added a bit more here about entropy, complexity, and what we mean by the word “life.”

If you’re in a reflective mood, here it is. I begin at around 3:50. One of the points I tried to make is that the finitude of life has its upside. Every moment is precious, and what we should value is what is around us right now — because that’s all there is. It’s a scary but exhilarating view of the world.


Guest Post: Chip Sebens on the Many-Interacting-Worlds Approach to Quantum Mechanics

I got to know Charles “Chip” Sebens back in 2012, when he emailed to ask if he could spend the summer at Caltech. Chip is a graduate student in the philosophy department at the University of Michigan, and like many philosophers of physics, knows the technical background behind relativity and quantum mechanics very well. Chip had funding from NSF, and I like talking to philosophers, so I said why not?

We had an extremely productive summer, focusing on our different stances toward quantum mechanics. At the time I was a casual adherent of the Everett (many-worlds) formulation, but had never thought about it carefully. Chip was skeptical, in particular because he thought there were good reasons to believe that EQM should predict equal probabilities for being on any branch of the wave function, rather than the amplitude-squared probabilities of the real-world Born Rule. Fortunately, I won, although the reason I won was mostly that Chip figured out what was going on. We ended up writing a paper explaining why the Born Rule naturally emerges from EQM under some simple assumptions. Now I have graduated from being a casual adherent to a slightly more serious one.

But that doesn’t mean Everett is right, and it’s worth looking at other formulations. Chip was good enough to accept my request that he write a guest blog post about another approach that’s been in the news lately: a “Newtonian” or “Many-Interacting-Worlds” formulation of quantum mechanics, which he has helped to pioneer.


In Newtonian physics objects always have definite locations. They are never in two places at once. To determine how an object will move one simply needs to add up the various forces acting on it and from these calculate the object’s acceleration. This framework is generally taken to be inadequate for explaining the quantum behavior of subatomic particles like electrons and protons. We are told that quantum theory requires us to revise this classical picture of the world, but what picture of reality is supposed to take its place is unclear. There is little consensus on many foundational questions: Is quantum randomness fundamental or a result of our ignorance? Do electrons have well-defined properties before measurement? Is the Schrödinger equation always obeyed? Are there parallel universes?

Some of us feel that the theory is understood well enough to be getting on with. Even though we might not know what electrons are up to when no one is looking, we know how to apply the theory to make predictions for the results of experiments. Much progress has been made―observe the wonder of the standard model―without answering these foundational questions. Perhaps one day with insight gained from new physics we can return to these basic questions. I will call those with such a mindset the doers. Richard Feynman was a doer:

“It will be difficult. But the difficulty really is psychological and exists in the perpetual torment that results from your saying to yourself, ‘But how can it be like that?’ which is a reflection of uncontrolled but utterly vain desire to see it in terms of something familiar. I will not describe it in terms of an analogy with something familiar; I will simply describe it. … I think I can safely say that nobody understands quantum mechanics. … Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain’, into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.”

-Feynman, The Character of Physical Law (chapter 6, pg. 129)

In contrast to the doers, there are the dreamers. Dreamers, although they may often use the theory without worrying about its foundations, are unsatisfied with standard presentations of quantum mechanics. They want to know “how it can be like that” and have offered a variety of alternative ways of filling in the details. Doers denigrate the dreamers for being unproductive, getting lost “down the drain.” Dreamers criticize the doers for giving up on one of the central goals of physics, understanding nature, to focus exclusively on another, controlling it. But even by the lights of the doer’s primary mission―being able to make accurate predictions for a wide variety of experiments―there are reasons to dream:

“Suppose you have two theories, A and B, which look completely different psychologically, with different ideas in them and so on, but that all consequences that are computed from each are exactly the same, and both agree with experiment. … how are we going to decide which one is right? There is no way by science, because they both agree with experiment to the same extent. … However, for psychological reasons, in order to guess new theories, these two things may be very far from equivalent, because one gives a man different ideas from the other. By putting the theory in a certain kind of framework you get an idea of what to change. … Therefore psychologically we must keep all the theories in our heads, and every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics.”

-Feynman, The Character of Physical Law (chapter 7, pg. 168)

In the spirit of finding alternative versions of quantum mechanics―whether they agree exactly or only approximately on experimental consequences―let me describe an exciting new option which has recently been proposed by Hall, Deckert, and Wiseman (in Physical Review X) and myself (forthcoming in Philosophy of Science), receiving media attention in: Nature, New Scientist, Cosmos, Huffington Post, Huffington Post Blog, FQXi podcast… Somewhat similar ideas have been put forward by Böstrom, Schiff and Poirier, and Tipler. The new approach seeks to take seriously quantum theory’s hydrodynamic formulation which was developed by Erwin Madelung in the 1920s. Although the proposal is distinct from the many-worlds interpretation, it also involves the postulation of parallel universes. The proposed multiverse picture is not the quantum mechanics of college textbooks, but just because the theory looks so “completely different psychologically” it might aid the development of new physics or new calculational techniques (even if this radical picture of reality ultimately turns out to be incorrect).

Let’s begin with an entirely reasonable question a dreamer might ask about quantum mechanics.

“I understand water waves and sound waves. These waves are made of particles. A sound wave is a compression wave that results from particles of air bunching up in certain regions and vacating others. Waves play a central role in quantum mechanics. Is it possible to understand these waves as being made of some things?”

There are a variety of reasons to think the answer is no, but they can be overcome. In quantum mechanics, the state of a system is described by a wave function Ψ. Consider a single particle in the famous double-slit experiment. In this experiment the one particle initially passes through both slits (in its quantum way) and then at the end is observed hitting somewhere on a screen. The state of the particle is described by a wave function which assigns a complex number to each point in space at each time. The wave function is initially centered on the two slits. Then, as the particle approaches the detection screen, an interference pattern emerges; the particle behaves like a wave.

Figure 1: The evolution of Ψ with the amount of color proportional to the amplitude (a.k.a. magnitude) and the hue indicating the phase of Ψ.

There’s a problem with thinking of the wave as made of something: the wave function assigns strange complex numbers to points in space instead of familiar real numbers. This can be resolved by focusing on |Ψ|², the squared amplitude of the wave function, which is always a positive real number.

Figure 2: The evolution of |Ψ|².

We normally think of |Ψ|² as giving the probability of finding the particle somewhere. But, to entertain the dreamer’s idea about quantum waves, let’s instead think of |Ψ|² as giving a density of particles. Whereas figure 2 is normally interpreted as showing the evolution of the probability distribution for a single particle, instead understand it as showing the distribution of a large number of particles: initially bunched up at the two slits and later spread out in bands at the detector (figure 3). Although I won’t go into the details here, we can actually understand the way that wave changes in time as resulting from interactions between these particles, from the particles pushing each other around. The Schrödinger equation, which is normally used to describe the way the wave function changes, is then viewed as a consequence of this interaction.
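For readers who want a hint of the math, here is a sketch of the hydrodynamic rewriting this picture leans on (Madelung’s formulation, mentioned above; not the full Many-Interacting-Worlds derivation). Write the wave function in terms of a density and a phase, and the Schrödinger equation splits into a continuity equation plus a Hamilton-Jacobi-like equation with one extra term, the “quantum potential,” which is what does the pushing:

\Psi = \sqrt{\rho}\, e^{iS/\hbar}
\frac{\partial \rho}{\partial t} + \nabla \cdot \left(\rho \frac{\nabla S}{m}\right) = 0
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0

In the density-of-worlds reading described below, ρ is reinterpreted as how densely the worlds are packed, and the last term encodes how their particles push on one another.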

Figure 3: The evolution of particles with |Ψ|² as the density. This animation is meant to help visualize the idea, but don’t take the precise motions of the particles too seriously. Although we know how the particles move en masse, we don’t know precisely how individual particles move.

In solving the problem about complex numbers, we’ve created two new problems: How can there really be a large number of particles if we only ever see one show up on the detector at the end? If |Ψ|² is now telling us about densities and not probabilities, what does it have to do with probabilities?

Removing a simplification in the standard story will help. Instead of focusing on the wave function of a single particle, let’s consider all particles at once. To describe the state of a collection of particles it turns out we can’t just give each particle its own wave function. This would miss out on an important feature of quantum mechanics: entanglement. The state of one particle may be inextricably linked to the state of another. Instead of having a wave function for each particle, a single universal wave function describes the collection of particles.

The universal wave function takes as input a position for each particle as well as the time. The position of a single particle is given by a point in familiar three dimensional space. The positions of all particles can be given by a single point in a very high dimensional space, configuration space: the first three dimensions of configuration space give the position of particle 1, the next three give the position of particle 2, etc. The universal wave function Ψ assigns a complex number to each point of configuration space at each time. |Ψ|² then assigns a positive real number to each point of configuration space (at each time). Can we understand this as a density of some things?

A single point in configuration space specifies the locations of all particles, a way all things might be arranged, a way the world might be. If there is only one world, then only one point in configuration space is special: it accurately captures where all the particles are. If there are many worlds, then many points in configuration space are special: each accurately captures where the particles are in some world. We could describe how densely packed these special points are, which regions of configuration space contain many worlds and which regions contain few. We can understand |Ψ|² as giving the density of worlds in configuration space. This might seem radical, but it is the natural extension of the answer to the dreamer’s question depicted in figure 3.
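As a toy illustration of reading |Ψ|² as a density of worlds (my sketch, not code from the papers), we can sample a large number of world-configurations from |Ψ|² and check that their histogram shows the familiar interference bands. For the one-particle double slit, a “world” is specified by a single position, so configuration space is just ordinary space and this amounts to a crude numerical version of figure 3:

import numpy as np

# Transverse coordinate at the detection screen (arbitrary units).
x = np.linspace(-10, 10, 2000)

# Toy wave function at the screen: two overlapping wave packets, one from
# each slit, with opposite transverse momenta so that they interfere.
# All parameters are purely illustrative.
separation, width, k = 3.0, 1.5, 4.0
psi = (np.exp(-(x - separation)**2 / (2 * width**2)) * np.exp(1j * k * x)
     + np.exp(-(x + separation)**2 / (2 * width**2)) * np.exp(-1j * k * x))

density = np.abs(psi)**2
density /= density.sum()  # normalize so it can serve as a distribution

# Each draw is "where the particle sits in one of the worlds."
worlds = np.random.choice(x, size=100_000, p=density)

counts, _ = np.histogram(worlds, bins=100)
print(counts)  # band-like structure: many worlds where |Psi|^2 is large

In a more honest treatment each world would be a point in the full many-particle configuration space, a list of three coordinates per particle, rather than a single number.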

Now that we have moved to a theory with many worlds, the first problem above can be answered: The reason that we only see one particle hit the detector in the double-slit experiment is that only one of the particles in figure 3 is in our world. When the particles hit the detection screen at the end we only see our own. The rest of the particles, though not part of our world, do interact with ours. They are responsible for the swerves in our particle’s trajectory. (Because of this feature, Hall, Deckert, and Wiseman have coined the name “Many Interacting Worlds” for the approach.)

Figure 4: The evolution of particles in figure 3 with the particle that lives in our world highlighted.

No matter how knowledgeable and observant you are, you cannot know precisely where every single particle in the universe is located. Put another way, you don’t know where our world is located in configuration space. Since the regions of configuration space where |Ψ|² is large have more worlds in them and more people like you wondering which world they’re in, you should expect to be in a region of configuration space where |Ψ|² is large. (Aside: this strategy of counting each copy of oneself as equally likely is not so plausible in the old many-worlds interpretation.) Thus the connection between |Ψ|² and probability is not a fundamental postulate of the theory, but a result of proper reasoning given this picture of reality.

There is of course much more to the story than what’s been said here. One particularly intriguing consequence of the new approach is that the three-sentence characterization of Newtonian physics with which this post began is satisfied. In that sense, this theory makes quantum mechanics look like classical physics. For this reason, in my paper I gave the theory the name “Newtonian Quantum Mechanics.”


Slow Life

Watch and savor this remarkable video by Daniel Stoupin. It shows tiny marine animals in motion — motions that are typically so slow that we would never notice, here enormously sped-up so that humans can appreciate them.

Slow Life from Daniel Stoupin on Vimeo.

I found it at this blog post by Peter Godfrey-Smith, a philosopher of biology. He notes that some kinds of basic processes, like breathing, are likely common to creatures that live at all different timescales; but others, like reaching out and grasping things, might not be open to creatures in the slow domain. Which raises the question: what kinds of motion are available to slow life that we fast-movers can’t experience?

Not all timescales are created equal. In the real world, the size of atoms sets a fundamental length, and chemical reactions set fundamental times, out of which everything larger is composed. We will never find a naturally-occurring life form, here on Earth or elsewhere in the universe, whose heart beats once per zeptosecond. But who knows? Maybe there are beings whose “hearts” beat once per millennium.


Where Have We Tested Gravity?

General relativity is a rich theory that makes a wide variety of experimental predictions. It’s been tested many ways, and always seems to pass with flying colors. But there’s always the possibility that a different test in a new regime will reveal some anomalous behavior, which would open the door to a revolution in our understanding of gravity. (I didn’t say it was a likely possibility, but you don’t know until you try.)

Not every experiment tests different things; sometimes one set of observations is done with a novel technique, but is actually just re-examining a physical regime that has already been well-explored. So it’s interesting to have a handle on what regimes we have already tested. For GR, that’s not such an easy question; it’s difficult to compare tests like gravitational redshift, the binary pulsar, and Big Bang nucleosynthesis.

So it’s good to see a new paper that at least takes a stab at putting it all together:

Linking Tests of Gravity On All Scales: from the Strong-Field Regime to Cosmology
Tessa Baker, Dimitrios Psaltis, Constantinos Skordis

The current effort to test General Relativity employs multiple disparate formalisms for different observables, obscuring the relations between laboratory, astrophysical and cosmological constraints. To remedy this situation, we develop a parameter space for comparing tests of gravity on all scales in the universe. In particular, we present new methods for linking cosmological large-scale structure, the Cosmic Microwave Background and gravitational waves with classic PPN tests of gravity. Diagrams of this gravitational parameter space reveal a noticeable untested regime. The untested window, which separates small-scale systems from the troubled cosmological regime, could potentially hide the onset of corrections to General Relativity.

The idea is to find a simple way of characterizing different tests of GR so that they can be directly compared. This will always be something of an art as well as a science — the metric tensor has ten independent parameters (six of which are physical, given four coordinates we can choose), and there are a lot of ways they can combine together, so there’s little hope of a parameterization that is both easy to grasp and covers all bases.

Still, you can make some reasonable assumptions and see whether you make progress. Baker et al. have defined two parameters: the “Potential” ε, which roughly tells you how deep the gravitational well is, and the “Curvature” ξ, which tells you how strongly the field is changing through space. Again — these are reasonable things to look at, but not really comprehensive. Nevertheless, you can make a nice plot that shows where different experimental constraints lie in your new parameter space.

Tests of gravity placed in the “Potential” and “Curvature” parameter space, from Baker et al.

The nice thing is that there’s a lot of parameter space that is unexplored! You can think of this plot as a finding chart for experimenters who want to dream up new ways to test our best understanding of gravity in new regimes.
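To get a rough feel for the axes, here is a back-of-the-envelope sketch in Python (my own dimensional estimates, not the paper’s precise definitions): for a mass M probed at a distance r, the potential goes roughly like GM/(rc²) and the curvature like GM/(r³c²).

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def potential(M, r):
    """Dimensionless depth of the gravitational well, roughly GM / (r c^2)."""
    return G * M / (r * c**2)

def curvature(M, r):
    """Rough curvature scale, roughly GM / (r^3 c^2), in units of 1/m^2."""
    return G * M / (r**3 * c**2)

M_sun = 1.989e30   # kg
AU = 1.496e11      # m
R_ns, M_ns = 1.2e4, 1.4 * M_sun   # a 12-km, 1.4-solar-mass neutron star

print(potential(M_sun, AU), curvature(M_sun, AU))    # Earth's orbit: ~1e-8 and ~4e-31
print(potential(M_ns, R_ns), curvature(M_ns, R_ns))  # neutron-star surface: ~0.2 and ~1e-12

Solar System tests and strong-field systems like pulsars end up in very different corners of the plot, which is exactly why a single unifying parameter space is useful.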

One caveat: it would be extremely surprising indeed if gravity didn’t conform to GR in these regimes. The philosophy of effective field theory gives us a very definite expectation for where our theories should break down: on length scales shorter than where we have tested the theory. It would be weird, although certainly not impossible, for a theory of gravity to work with exquisite precision in our Solar System, but break down on the scales of galaxies or cosmology. It’s not impossible, but that fact should weigh heavily in one’s personal Bayesian priors for finding new physics in this kind of regime. Just another way that Nature makes life challenging for us poor human physicists.


Einstein’s Papers Online

If any scientist in recent memory deserves to have every one of their words captured and distributed widely, it’s Albert Einstein. Surprisingly, many of his writings have been hard to get a hold of, especially in English; he wrote an awful lot, and mostly in German. The Einstein Papers Project has been working heroically to correct that, and today marks a major step forward: the release of the Digital Einstein Papers, an open resource that puts the master’s words just a click away.

As Dennis Overbye reports in the NYT, the Einstein Papers Project has so far released 14 of a projected 30 volumes of thick, leather-bound collections of Einstein’s works, as well as companion English translations in paperback. That’s less than half, but it does cover the years 1903-1917 when Einstein was turning physics on its head. You can read On the Electrodynamics of Moving Bodies, where special relativity was introduced in full, or the very short (3 pages!) follow-up Does the Inertia of a Body Depend on Its Energy Content?, where he derived the relation that we would now write as E = mc². Interestingly, most of Einstein’s earliest papers were on statistical mechanics and the foundations of thermodynamics.

Ten years later he is putting the final touches on general relativity, whose centennial we will be celebrating next year. This masterwork took longer to develop, and Einstein crept up on its final formulation gradually, so you see the development spread out over a number of papers, achieving its ultimate form in The Field Equations of Gravitation in 1915.

What a compelling writer Einstein was! (Not all great scientists are.) Here is the opening of one foundational paper from 1914, The Formal Foundation of the General Theory of Relativity:

In recent years I have worked, in part together with my friend Grossman, on a generalization of the theory of relativity. During these investigations, a kaleidoscopic mixture of postulates from physics and mathematics has been introduced and used as heuristical tools; as a consequence it is not easy to see through and characterize the theory from a formal mathematical point of view, that is, only based on these papers. The primary objective of the present paper is to close this gap. In particular, it has been possible to obtain the equations of the gravitational field in a purely covariance-theoretical manner (section D). I also tried to give simple derivations of the basic laws of absolute differential calculus — in part, they are probably new ones (section B) — in order to allow the reader to get a complete grasp of the theory without having to read other, purely mathematical tracts. As an illustration of the mathematical methods, I derived the (Eulerian) equations of hydrodynamics and the field equations of the electrodynamics of moving bodies (section C). Section E shows that Newton’s theory of gravitation follows from the general theory as an approximation. The most elementary features of the present theory are also derived inasfar as they are characteristic of a Newtonian (static) gravitational field (curvature of light rays, shift of spectral lines).

While Einstein certainly did have help from Grossman and others, to a large extent the theory of general relativity was all his own. It stands in stark contrast to quantum mechanics or almost all modern theories, which have grown up through the collaborative effort of many smart people. We may never again in physics see a paragraph of such sweep and majesty — “Here is my revolutionary theory of the dynamics of space and time, along with a helpful introduction to its mathematical underpinnings, as well as derivations of all the previous laws of physics within this powerful new framework.”

Thanks to everyone at the Einstein Papers project for undertaking this enormous task.


Thanksgiving

This year we give thanks for a technique that is central to both physics and mathematics: the Fourier transform. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, and Landauer’s Principle.)

Let’s say you want to locate a point in space — for simplicity, on a two-dimensional plane. You could choose a coordinate system (x, y), and then specify the values of those coordinates to pick out your point: (x, y) = (1, 3).


But someone else might want to locate the same point, but they want to use a different coordinate system. That’s fine; points are real, but coordinate systems are just convenient fictions. So your friend uses coordinates (u, v) instead of (x, y). Fortunately, you know the relationship between the two systems: in this case, it’s u = y+x, v = y-x. The new coordinates are rotated (and scaled) with respect to the old ones, and now the point is represented as (u, v) = (4, 2).

Fourier transforms are just a fancy version of changes of coordinates. The difference is that, instead of coordinates on a two-dimensional space, we’re talking about coordinates on an infinite-dimensional space: the space of all functions. (And for technical reasons, Fourier transforms naturally live in the world of complex functions, where the value of the function at any point is a complex number.)

Think of it this way. To specify some function f(x), we give the value of the function f for every value of the variable x. In principle, an infinite number of numbers. But deep down, it’s not that different from giving the location of our point in the plane, which was just two numbers. We can certainly imagine taking the information contained in f(x) and expressing it in a different way, by “rotating the axes.”

That’s what a Fourier transform is. It’s a way of specifying a function that, instead of telling you the value of the function at each point, tells you the amount of variation at each wavelength. Just as we have a formula for switching between (u, v) and (x, y), there are formulas for switching between a function f(x) and its Fourier transform f(ω):

f(\omega) = \frac{1}{\sqrt{2\pi}} \int dx f(x) e^{-i\omega x}
f(x) = \frac{1}{\sqrt{2\pi}} \int d\omega f(\omega) e^{i\omega x}.

Absorbing those formulas isn’t necessary to get the basic idea. If the function itself looks like a sine wave, it has a specific wavelength, and the Fourier transform is just a delta function (infinity at that particular wavelength, zero everywhere else). If the function is periodic but a bit more complicated, it might have just a few Fourier components.
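If you’d like to see this concretely, here is a minimal sketch using numpy, whose FFT routines compute a discrete version of the integrals above: a signal built from two sine waves comes out of the transform as two sharp spikes, one at each frequency.

import numpy as np

# One second of a signal sampled 1000 times: a 5 Hz sine plus a weaker 12 Hz sine.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# Discrete Fourier transform, and the frequency corresponding to each component.
F = np.fft.rfft(f)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])

# The spectrum is essentially zero except at the two input frequencies.
print(freqs[np.abs(F) > 0.1 * np.abs(F).max()])  # [ 5. 12.]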

MIT researchers showing how sine waves can combine to make a square-ish wave.

In general, the Fourier transform f(ω) gives you “the amount of the original function that is periodic with period 2π/ω.” This is sometimes called the “frequency domain,” since there are obvious applications to signal processing, where we might want to take a signal that has an intensity that varies with time and pick out the relative strength of different frequencies. (Your eyes and ears do this automatically, when they decompose light into colors and sound into pitches. They’re just taking Fourier transforms.) Frequency, of course, is the inverse of wavelength, so it’s equally good to think of the Fourier transform as describing the “length domain.” A cosmologist who studies the large-scale distribution of galaxies will naturally take the Fourier transform of their positions to construct the power spectrum, revealing how much structure there is at different scales.


To my (biased) way of thinking, where Fourier transforms really come into their own is in quantum field theory. QFT tells us that the world is fundamentally made of waves, not particles, and it is extremely convenient to think about those waves by taking their Fourier transforms. (It is literally one of the first things one is told to do in any introduction to QFT.)

But it’s not just convenient, it’s a worldview-changing move. One way of characterizing Ken Wilson’s momentous achievement is to say “physics is organized by length scale.” Phenomena at high masses or energies are associated with short wavelengths, where our low-energy long-wavelength instruments cannot probe. (We need giant machines like the Large Hadron Collider to create high energies, because what we are really curious about are short distances.) But we can construct a perfectly good effective theory of just the wavelengths longer than a certain size — whatever size it is that our theoretical picture can describe. As physics progresses, we bring smaller and smaller length scales under the umbrella of our understanding.
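Here is a toy version of that move, again using the Fourier transform (an illustrative sketch of mine, not actual field theory): throw away every mode shorter than some cutoff wavelength and transform back, leaving a perfectly good “effective” description of the long-wavelength behavior.

import numpy as np

x = np.linspace(0.0, 1.0, 1000, endpoint=False)
# Slow, long-wavelength structure plus fast, short-wavelength "microphysics".
signal = np.sin(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * 80 * x)

modes = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])

cutoff = 20.0                 # keep only modes with frequency below the cutoff
modes[freqs > cutoff] = 0.0

effective = np.fft.irfft(modes, n=len(x))
# The result follows the slow 3-cycle oscillation with the fast wiggles
# removed: an effective description of the long wavelengths only.
print(np.allclose(effective, np.sin(2 * np.pi * 3 * x), atol=1e-8))  # True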

Without Fourier transforms, this entire way of thinking would be inaccessible. We should be very thankful for them — as long as we use them wisely.

Credit: xkcd.

Note that Joseph Fourier, inventor of the transform, is not the same as Charles Fourier, utopian philosopher. Joseph, in addition to his work in math and physics, invented the idea of the greenhouse effect. Sadly that’s not something we should be thankful for right now.
