Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, *L’homme machine* (*Man a Machine*), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: *Hey, those are my friends you’re talking about.* We are all machines that think, and the distinction between different types of machines is eroding.

We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.

The title references “Draw My Life style,” which is (the internet informs me) a label given to this kind of fast-motion photography of someone drawing on a white board.

You go, Lucas. I doubt I would have been doing anything quite this good at that age.

Poker, of course, is a game of incomplete information. You don’t know your opponent’s cards, they don’t know yours. Part of your goal should be to keep it that way: you don’t want to give away information that would let your opponent figure out what you have.

As a result, the best way to play poker (against a competent opponent) is to use a mixed strategy: in any given situation, you want to have different probabilities for taking various actions, rather than a deterministic assignment of the best thing to do. If, for example, you always raise with certain starting hands, and always call with others, an attentive player will figure that out, and thereby gain a great deal of information about your hand. It’s much better to sometimes play weak hands as if they are strong (bluffing) and strong hands as if they are weak (slow-playing). The question is: how often should you be doing that?

Now researchers at a University of Alberta group that studies computerized poker have offered an “essentially” perfect strategy for a very simple form of poker: Heads-Up Limit Hold’em. In Hold’em, each player has two “hole” cards face down, and there are five “board” cards face-up in the middle of the table; your hand is the best five-card combination you can form from your hole cards and the board. “Heads-up” means that only two players are playing (much simpler than a multi-player game), and “limit” means that any bet comes in a single pre-specified amount (much simpler than “no-limit,” where you can bet anything from a fixed minimum up to the size of your stack or your opponent’s, whichever is smaller).

A simple game, but not *very* simple. Bets occur after each player gets their hole cards, and again after three cards (the “flop”) are put on the board, again after a fourth card (the “turn”), and finally after the last board card (the “river”) is revealed. If one player bets, the other can raise, and then the initial bettor can re-raise, up to a number of bets (typically four) that “caps” the betting.

So a finite number of things can possibly happen, which makes the game amenable to computer analysis. But it’s still a large number. There are about 3×10^{17} “states” that one can reach in the game, where a “state” is defined by a certain number of bets having been made as well as the configuration of cards that have already been dealt. Not easy to analyze! Fortunately (or not), as a player with incomplete information you won’t be able to distinguish between all of those states — i.e. you don’t know your opponent’s hole cards. So it turns out that there are about 3×10^{14} distinct “decision points” from which a player might end up having to act.

So all you need to do is: for each of those 300 trillion possibilities, assign the best possible mixed strategy — your probability to bet/check if there hasn’t already been a bet, fold/call/raise if there has — and act accordingly. Hey, nobody ever said being a professional poker player would be easy. (As you might know, human beings are very bad at randomness, so many professionals use the second hand on a wristwatch to generate pseudo-random numbers and guide their actions.)
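Since humans are so bad at randomness, “acting according to a mixed strategy” in practice just means sampling an action from a probability table at each decision point. Here’s a minimal Python sketch; the particular decision point and its probabilities are invented for illustration, not taken from the solved strategy:

```python
import random

def sample_action(strategy):
    """Draw one action from a mixed strategy given as {action: probability}."""
    r = random.random()
    cumulative = 0.0
    for action, prob in strategy.items():
        cumulative += prob
        if r < cumulative:
            return action
    return action  # guard against floating-point round-off at the top end

# A made-up mixed strategy for one decision point: raise 70%, call 25%, fold 5%.
strategy = {"raise": 0.70, "call": 0.25, "fold": 0.05}

counts = {a: 0 for a in strategy}
for _ in range(100_000):
    counts[sample_action(strategy)] += 1
print(counts)  # frequencies come out roughly proportional to the probabilities
```

The wristwatch trick is just a low-tech substitute for that `random.random()` call.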

Nobody is going to do that, of course. So there might be a worry that advances like this will render human beings superfluous at playing poker. It’s not a realistic worry quite yet; despite its apparent complexity, Heads-Up Limit is a sufficiently simple game that it’s almost never played by real people. Heads-Up No-Limit, and multiplayer versions of both limit and no-limit, are played quite frequently; but they’re also much harder games to solve.

Which shouldn’t be taken as disparagement of the amazing things the Alberta group has achieved. To put it in context, they provide a plot of the sizes of imperfect-information games that have previously been essentially solved. Before this work, the most complicated solved game had less than 10^{11} decision points, whereas Heads-Up Limit Hold’em has over 10^{13} (after accounting for some symmetries in game configurations, which I presume means cards of equal value but different suits, etc.).

An impressive leap forward, relying on a technique known as “counterfactual regret minimization.” Essentially, the computer acts randomly at first, but keeps track of how much it regrets making a decision when things go badly, and adjusts its mixed strategy appropriately. Eventually an equilibrium is reached in which regrets are minimized. (If only we were allowed to live our lives this way…)
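To get a feel for how regret minimization works, here’s a toy Python sketch of plain regret matching, the core update rule inside CFR, applied to rock-paper-scissors rather than poker. (Full counterfactual regret minimization adds bookkeeping over every decision point of the game tree; none of this is the Alberta group’s actual code.)

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[a][b]: a's result vs b

def regret_matching(regrets):
    """Mixed strategy proportional to the positive accumulated regrets."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

def train(iterations):
    # start one player with an asymmetric regret so the dynamics actually move
    regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [regret_matching(r) for r in regrets]
        for p in range(2):
            opp = strats[1 - p]
            # expected value of each pure action against the opponent's mix
            ev = [sum(opp[b] * PAYOFF[a][b] for b in range(ACTIONS)) for a in range(ACTIONS)]
            ev_mix = sum(strats[p][a] * ev[a] for a in range(ACTIONS))
            for a in range(ACTIONS):
                regrets[p][a] += ev[a] - ev_mix  # regret for not having played a
                strategy_sum[p][a] += strats[p][a]
    # the *average* strategy over training is what converges to equilibrium
    return [[s / iterations for s in strategy_sum[p]] for p in range(2)]

avg = train(100_000)
print([round(p, 3) for p in avg[0]])  # approaches [1/3, 1/3, 1/3]
```

The equilibrium of rock-paper-scissors is the uniform mix, and the averaged strategy duly homes in on it; in poker the same machinery lands on the bluffing and slow-playing frequencies discussed above.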

And, despite the simplicity of the game, you can certainly learn something from the strategy that the computer has worked out. Here is a plot with suggested actions for a couple of early decisions: on the left, what the dealer (who acts first) should do on their opening move, depending on what their hole cards are; on the right, what the other player should do if the dealer chose to raise. Red is fold, blue is call, and green is raise. (The dealer can fold as a first move, since the other player is forced to make a “blind” bet to start the action.)

Note that there is no blue in the left diagram. That means that the dealer should never simply call the blind bet without raising — a move known in the trade as “limping.” That fits well with received poker wisdom, which generally looks down on limping as an overly tentative and ultimately counter-productive strategy. Of course, no-limit is a different game, and one might still imagine a place for limping with a strong hand, since you can hope to get raised and then come over the top with a huge bet. But it’s food for thought for poker experts. (Also, note the lack of red in the right diagram — folding to a single bet is a bad idea with all but the worst hole cards. Hmm…)

As I said, this doesn’t affect real players in casinos very much, or at least not yet. Online poker is a different table of fish, of course. In the US, online poker is essentially dead, having been squelched by the federal government on what’s known in the community as Black Friday. But before that happened, there was already serious worry about online “bots” that could play many hands and beat all but the best human players. And the curve showing the growth in complexity of solved games is pretty impressive; it’s not impossible to imagine that No-Limit will be “essentially solved” in the foreseeable future.

Fortunately, as with chess, knowing that there are computers better than you at poker doesn’t take away the pleasure of playing with ordinary human opponents. And if they tend to limp a lot, your pleasure might be even higher.

Part of winning the award is the presentation of a short speech, and I wasn’t sure what to talk about. There are only so many things I have to say, but it’s boring to talk about the same stuff over and over again. More importantly, I have no real interest in giving religion-bashing talks; I care a lot more about doing the hard and constructive work of exploring the consequences of naturalism.

So I decided on a cheerful topic: Death and Physics. I talked about how modern science gives us very good reasons to believe (not a proof, never a proof) that there is no such thing as an afterlife. Life is a process, not a substance, and it’s a process that begins, proceeds along for a while, and comes to an end. Certainly something I’ve said before, e.g. in my article on Physics and the Immortality of the Soul, and in the recent Afterlife Debate, but I added a bit more here about entropy, complexity, and what we mean by the word “life.”

If you’re in a reflective mood, here it is. I begin at around 3:50. One of the points I tried to make is that the finitude of life has its upside. Every moment is precious, and what we should value is what is around us right now — because that’s all there is. It’s a scary but exhilarating view of the world.

We had an extremely productive summer, focusing on our different stances toward quantum mechanics. At the time I was a casual adherent of the Everett (many-worlds) formulation, but had never thought about it carefully. Chip was skeptical, in particular because he thought there were good reasons to believe that EQM should predict equal probabilities for being on any branch of the wave function, rather than the amplitude-squared probabilities of the real-world Born Rule. Fortunately, I won, although the reason I won was mostly because Chip figured out what was going on. We ended up writing a paper explaining why the Born Rule naturally emerges from EQM under some simple assumptions. Now I have graduated from being a casual adherent to a slightly more serious one.

But that doesn’t mean Everett is right, and it’s worth looking at other formulations. Chip was good enough to accept my request that he write a guest blog post about another approach that’s been in the news lately: a “Newtonian” or “Many-Interacting-Worlds” formulation of quantum mechanics, which he has helped to pioneer.

In Newtonian physics objects always have definite locations. They are never in two places at once. To determine how an object will move one simply needs to add up the various forces acting on it and from these calculate the object’s acceleration. This framework is generally taken to be inadequate for explaining the quantum behavior of subatomic particles like electrons and protons. We are told that quantum theory requires us to revise this classical picture of the world, but what picture of reality is supposed to take its place is unclear. There is little consensus on many foundational questions: Is quantum randomness fundamental or a result of our ignorance? Do electrons have well-defined properties before measurement? Is the Schrödinger equation always obeyed? Are there parallel universes?

Some of us feel that the theory is understood well enough to be getting on with. Even though we might not know what electrons are up to when no one is looking, we know how to apply the theory to make predictions for the results of experiments. Much progress has been made―observe the wonder of the standard model―without answering these foundational questions. Perhaps one day with insight gained from new physics we can return to these basic questions. I will call those with such a mindset the **doers**. Richard Feynman was a doer:

“It will be difficult. But the difficulty really is psychological and exists in the perpetual torment that results from your saying to yourself, ‘But how can it be like that?’ which is a reflection of uncontrolled but utterly vain desire to see it in terms of something familiar. I will not describe it in terms of an analogy with something familiar; I will simply describe it. … I think I can safely say that nobody understands quantum mechanics. … Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain’, into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.”

-Feynman, *The Character of Physical Law* (chapter 6, pg. 129)

In contrast to the doers, there are the **dreamers**. Dreamers, although they may often use the theory without worrying about its foundations, are unsatisfied with standard presentations of quantum mechanics. They want to know “how it can be like that” and have offered a variety of alternative ways of filling in the details. Doers denigrate the dreamers for being unproductive, getting lost “down the drain.” Dreamers criticize the doers for giving up on one of the central goals of physics, understanding nature, to focus exclusively on another, controlling it. But even by the lights of the doer’s primary mission―being able to make accurate predictions for a wide variety of experiments―there are reasons to dream:

“Suppose you have two theories, A and B, which look completely different psychologically, with different ideas in them and so on, but that all consequences that are computed from each are exactly the same, and both agree with experiment. … how are we going to decide which one is right? There is no way by science, because they both agree with experiment to the same extent. … However, for psychological reasons, in order to guess new theories, these two things may be very far from equivalent, because one gives a man different ideas from the other. By putting the theory in a certain kind of framework you get an idea of what to change. … Therefore psychologically we must keep all the theories in our heads, and every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics.”

-Feynman, *The Character of Physical Law* (chapter 7, pg. 168)

In the spirit of finding alternative versions of quantum mechanics―whether they agree exactly or only approximately on experimental consequences―let me describe an exciting new option which has recently been proposed by Hall, Deckert, and Wiseman (in *Physical Review X*) and myself (forthcoming in *Philosophy of Science*), receiving media attention in: Nature, New Scientist, Cosmos, Huffington Post, Huffington Post Blog, FQXi podcast… Somewhat similar ideas have been put forward by Böstrom, Schiff and Poirier, and Tipler. The new approach seeks to take seriously quantum theory’s hydrodynamic formulation which was developed by Erwin Madelung in the 1920s. Although the proposal is distinct from the many-worlds interpretation, it also involves the postulation of parallel universes. The proposed multiverse picture is not the quantum mechanics of college textbooks, but precisely because the theory looks so “completely different psychologically,” it might aid the development of new physics or new calculational techniques (even if this radical picture of reality ultimately turns out to be incorrect).

Let’s begin with an entirely reasonable question a dreamer might ask about quantum mechanics.

“I understand water waves and sound waves. These waves are *made of* particles. A sound wave is a compression wave that results from particles of air bunching up in certain regions and vacating others. Waves play a central role in quantum mechanics. Is it possible to understand these waves as being *made of* some things?”

There are a variety of reasons to think the answer is no, but they can be overcome. In quantum mechanics, the state of a system is described by a **wave function** Ψ. Consider a *single particle* in the famous double-slit experiment. In this experiment the one particle initially passes through both slits (in its quantum way) and then at the end is observed hitting somewhere on a screen. The state of the particle is described by a wave function which assigns a complex number to each point in space at each time. The wave function is initially centered on the two slits. Then, as the particle approaches the detection screen, an interference pattern emerges; the particle behaves like a wave.

There’s a problem with thinking of the wave as made of something: the wave function assigns strange complex numbers to points in space instead of familiar real numbers. This can be resolved by focusing on |Ψ|^{2}, the squared amplitude of the wave function, which is always a non-negative real number.

We normally think of |Ψ|^{2} as giving the probability of finding the particle somewhere. But, to entertain the dreamer’s idea about quantum waves, let’s instead think of |Ψ|^{2} as giving a *density* of particles. Whereas figure 2 is normally interpreted as showing the evolution of the probability distribution for a *single* particle, instead understand it as showing the distribution of a *large number* of particles: initially bunched up at the two slits and later spread out in bands at the detector (figure 3). Although I won’t go into the details here, we can actually understand the way that the wave changes in time as resulting from *interactions* between these particles, from the particles pushing each other around. The Schrödinger equation, which is normally used to describe the way the wave function changes, is then viewed as a consequence of this interaction.
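To make the density picture concrete, here is a toy numerical sketch: a one-dimensional caricature of the double slit, with made-up units, in which |Ψ|^{2} starts as two lumps (one behind each slit) and later develops interference bands. The superposed spreading Gaussian packets stand in for the real experiment.

```python
import numpy as np

x = np.linspace(-30, 30, 2000)
d = 5.0       # half the slit separation (arbitrary units)
sigma0 = 1.0  # initial width of each packet

def packet(x, x0, t):
    """A freely spreading Gaussian wave packet (units with hbar = m = 1)."""
    s = sigma0 * (1 + 1j * t / sigma0**2)   # complex width grows with time
    return np.exp(-(x - x0) ** 2 / (2 * sigma0 * s)) / np.sqrt(s)

def peak_count(t):
    """Number of local maxima in |Psi|^2 for the two-slit superposition."""
    psi = packet(x, -d, t) + packet(x, +d, t)
    density = np.abs(psi) ** 2
    interior = density[1:-1]
    return int(np.sum((interior > density[:-2]) & (interior > density[2:])))

print(peak_count(0.0))   # 2: one lump behind each slit
print(peak_count(20.0))  # several: interference bands have developed
```

Reading the final `density` as the number of particles (worlds) per unit length, rather than a probability, is exactly the reinterpretation described above.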

In solving the problem about complex numbers, we’ve created two new problems: How can there really be a large number of particles if we only ever see one show up on the detector at the end? If |Ψ|^{2} is now telling us about densities and not probabilities, what does it have to do with probabilities?

Removing a simplification in the standard story will help. Instead of focusing on the wave function of a single particle, let’s consider all particles at once. To describe the state of a collection of particles it turns out we can’t just give each particle its own wave function. This would miss out on an important feature of quantum mechanics: entanglement. The state of one particle may be inextricably linked to the state of another. Instead of having a wave function for each particle, a single universal wave function describes the collection of particles.

The universal wave function takes as input a position for each particle as well as the time. The position of a *single particle* is given by a point in familiar three dimensional space. The positions of *all particles* can be given by a single point in a very high dimensional space, *configuration space*: the first three dimensions of configuration space give the position of particle 1, the next three give the position of particle 2, etc. The universal wave function Ψ assigns a complex number to each point of *configuration space* at each time. |Ψ|^{2} then assigns a positive real number to each point of configuration space (at each time). Can we understand this as a density of some things?
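As an aside on the bookkeeping: in code, a point in configuration space is just the flattened list of all the particle positions. A quick NumPy sketch (the particular numbers are arbitrary):

```python
import numpy as np

# One "world" specifies the positions of all N particles at once; flattening
# those N points of 3-d space gives a single point in 3N-dimensional
# configuration space, and reshaping recovers the per-particle positions.
positions = np.array([[0.0, 1.0, 2.0],   # particle 1: (x, y, z)
                      [3.0, 4.0, 5.0],   # particle 2: (x, y, z)
                      [6.0, 7.0, 8.0]])  # particle 3: (x, y, z)

config_point = positions.reshape(-1)     # one point in R^9
print(config_point.shape)                # (9,)

recovered = config_point.reshape(-1, 3)  # back to three particles in 3-d space
assert (recovered == positions).all()
```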

A single point in configuration space specifies the locations of all particles, a way all things might be arranged, a way the world might be. If there is only one world, then only one point in configuration space is special: it accurately captures where all the particles are. If there are many worlds, then many points in configuration space are special: each accurately captures where the particles are in some world. We could describe how densely packed these special points are, which regions of configuration space contain many worlds and which regions contain few. We can understand |Ψ|^{2} as giving the *density of worlds* in configuration space. This might seem radical, but it is the natural extension of the answer to the dreamer’s question depicted in figure 3.

Now that we have moved to a theory with many worlds, the first problem above can be answered: The reason that we only see one particle hit the detector in the double-slit experiment is that only one of the particles in figure 3 is in our world. When the particles hit the detection screen at the end we only see our own. The rest of the particles, though not part of our world, do interact with ours. They are responsible for the swerves in our particle’s trajectory. (Because of this feature, Hall, Deckert, and Wiseman have coined the name “Many Interacting Worlds” for the approach.)

No matter how knowledgeable and observant you are, you cannot know precisely where every single particle in the universe is located. Put another way, you don’t know where our world is located in configuration space. Since the regions of configuration space where |Ψ|^{2} is large have more worlds in them and more people like you wondering which world they’re in, you should expect to be in a region of configuration space where |Ψ|^{2} is large. (Aside: this strategy of counting each copy of oneself as equally likely is not so plausible in the old many-worlds interpretation.) Thus the connection between |Ψ|^{2} and probability is not a fundamental postulate of the theory, but a result of proper reasoning given this picture of reality.

There is of course much more to the story than what’s been said here. One particularly intriguing consequence of the new approach is that it satisfies the three-sentence characterization of Newtonian physics with which this post began. In that sense, this theory makes quantum mechanics look like classical physics. For this reason, in my paper I gave the theory the name “Newtonian Quantum Mechanics.”


Slow Life from Daniel Stoupin on Vimeo.

I found it at this blog post by Peter Godfrey-Smith, a philosopher of biology. He notes that some kinds of basic processes, like breathing, are likely common to creatures that live at all different timescales; but others, like reaching out and grasping things, might not be open to creatures in the slow domain. Which raises the question: what kinds of motion are available to slow life that we fast-movers can’t experience?

Not all timescales are created equal. In the real world, the size of atoms sets a fundamental length, and chemical reactions set fundamental times, out of which everything larger is composed. We will never find a naturally-occurring life form, here on Earth or elsewhere in the universe, whose heart beats once per zeptosecond. But who knows? Maybe there are beings whose “hearts” beat once per millennium.

Not every experiment tests different things; sometimes one set of observations is done with a novel technique, but is actually just re-examining a physical regime that has already been well-explored. So it’s interesting to have a handle on what regimes we have already tested. For GR, that’s not such an easy question; it’s difficult to compare tests like gravitational redshift, the binary pulsar, and Big Bang nucleosynthesis.

So it’s good to see a new paper that at least takes a stab at putting it all together:

Linking Tests of Gravity On All Scales: from the Strong-Field Regime to Cosmology

Tessa Baker, Dimitrios Psaltis, Constantinos Skordis

The current effort to test General Relativity employs multiple disparate formalisms for different observables, obscuring the relations between laboratory, astrophysical and cosmological constraints. To remedy this situation, we develop a parameter space for comparing tests of gravity on all scales in the universe. In particular, we present new methods for linking cosmological large-scale structure, the Cosmic Microwave Background and gravitational waves with classic PPN tests of gravity. Diagrams of this gravitational parameter space reveal a noticeable untested regime. The untested window, which separates small-scale systems from the troubled cosmological regime, could potentially hide the onset of corrections to General Relativity.

The idea is to find a simple way of characterizing different tests of GR so that they can be directly compared. This will always be something of an art as well as a science — the metric tensor has ten independent parameters (six of which are physical, given four coordinates we can choose), and there are a lot of ways they can combine together, so there’s little hope of a parameterization that is both easy to grasp and covers all bases.

Still, you can make some reasonable assumptions and see whether you make progress. Baker *et al.* have defined two parameters: the “Potential” ε, which roughly tells you how deep the gravitational well is, and the “Curvature” ξ, which tells you how strongly the field is changing through space. Again — these are reasonable things to look at, but not really comprehensive. Nevertheless, you can make a nice plot that shows where different experimental constraints lie in your new parameter space.
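For rough intuition about where familiar systems land on such a plot, here are back-of-the-envelope estimates using the scalings ε ~ GM/rc² and ξ ~ GM/r³c². (The actual definitions in Baker *et al.* are more careful than this, so treat these strictly as orders of magnitude.)

```python
# Crude estimates of the "Potential" and "Curvature" parameters for a few
# systems, using epsilon ~ GM/(r c^2) and xi ~ GM/(r^3 c^2).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

systems = {
    # name: (mass in kg, characteristic radius in m)
    "Earth surface":        (5.97e24, 6.37e6),
    "Sun surface":          (1.99e30, 6.96e8),
    "Neutron star surface": (2.8e30,  1.2e4),
}

for name, (M, r) in systems.items():
    eps = G * M / (r * c**2)     # dimensionless depth of the potential well
    xi = G * M / (r**3 * c**2)   # rough spacetime-curvature scale, m^-2
    print(f"{name:22s} epsilon ~ {eps:.1e}  xi ~ {xi:.1e} m^-2")
```

The neutron-star number comes out around 0.1–0.2, which is why compact objects count as strong-field tests, while everything in the Solar System sits many orders of magnitude lower.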

The nice thing is that there’s a lot of parameter space that is unexplored! You can think of this plot as a finding chart for experimenters who want to dream up new ways to test our best understanding of gravity in new regimes.

One caveat: it would be extremely surprising indeed if gravity didn’t conform to GR in these regimes. The philosophy of effective field theory gives us a very definite expectation for where our theories should break down: on length scales *shorter* than where we have tested the theory. It would be weird, although certainly not impossible, for a theory of gravity to work with exquisite precision in our Solar System, but break down on the scales of galaxies or cosmology. That possibility should weigh heavily in one’s personal Bayesian priors for finding new physics in this kind of regime. Just another way that Nature makes life challenging for us poor human physicists.

As Dennis Overbye reports in the NYT, the Einstein Papers Project has so far released 14 of a projected 30 volumes of thick, leather-bound collections of Einstein’s works, as well as companion English translations in paperback. That’s less than half, but it does cover the years 1903-1917 when Einstein was turning physics on its head. You can read On the Electrodynamics of Moving Bodies, where special relativity was introduced in full, or the very short (3 pages!) follow-up Does the Inertia of a Body Depend on Its Energy Content?, where he derived the relation that we would now write as E = mc^{2}. Interestingly, most of Einstein’s earliest papers were on statistical mechanics and the foundations of thermodynamics.

Ten years later he is putting the final touches on general relativity, whose centennial we will be celebrating next year. This masterwork took longer to develop, and Einstein crept up on its final formulation gradually, so you see the development spread out over a number of papers, achieving its ultimate form in The Field Equations of Gravitation in 1915.

What a compelling writer Einstein was! (Not all great scientists are.) Here is the opening of one foundational paper from 1914, The Formal Foundation of the General Theory of Relativity:

In recent years I have worked, in part together with my friend Grossman, on a generalization of the theory of relativity. During these investigations, a kaleidoscopic mixture of postulates from physics and mathematics has been introduced and used as heuristical tools; as a consequence it is not easy to see through and characterize the theory from a formal mathematical point of view, that is, only based on these papers. The primary objective of the present paper is to close this gap. In particular, it has been possible to obtain the equations of the gravitational field in a purely covariance-theoretical manner (section D). I also tried to give simple derivations of the basic laws of absolute differential calculus — in part, they are probably new ones (section B) — in order to allow the reader to get a complete grasp of the theory without having to read other, purely mathematical tracts. As an illustration of the mathematical methods, I derived the (Eulerian) equations of hydrodynamics and the field equations of the electrodynamics of moving bodies (section C). Section E shows that Newton’s theory of gravitation follows from the general theory as an approximation. The most elementary features of the present theory are also derived inasfar as they are characteristic of a Newtonian (static) gravitational field (curvature of light rays, shift of spectral lines).

While Einstein certainly did have help from Grossman and others, to a large extent the theory of general relativity was all his own. It stands in stark contrast to quantum mechanics or almost all modern theories, which have grown up through the collaborative effort of many smart people. We may never again in physics see a paragraph of such sweep and majesty — “Here is my revolutionary theory of the dynamics of space and time, along with a helpful introduction to its mathematical underpinnings, as well as derivations of all the previous laws of physics within this powerful new framework.”

Thanks to everyone at the Einstein Papers project for undertaking this enormous task.

Let’s say you want to locate a point in space — for simplicity, on a two-dimensional plane. You could choose a coordinate system (*x, y*), and then specify the values of those coordinates to pick out your point: (*x, y*) = (1, 3).

But someone else might want to locate the same point using a different coordinate system. That’s fine; points are real, but coordinate systems are just convenient fictions. So your friend uses coordinates (*u, v*) instead of (*x, y*). Fortunately, you know the relationship between the two systems: in this case, it’s *u = y+x*, *v = y-x*. The new coordinates are rotated (and scaled) with respect to the old ones, and now the point is represented as (*u, v*) = (4, 2).
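This change of coordinates is easy to play with in code. Here is a minimal sketch in Python (the helper names `to_uv` and `to_xy` are invented for illustration):

```python
# A point is real; its coordinates depend on the chosen system.

def to_uv(x, y):
    """Change coordinates from (x, y) to (u, v) = (y + x, y - x)."""
    return (y + x, y - x)

def to_xy(u, v):
    """Invert the change: x = (u - v) / 2, y = (u + v) / 2."""
    return ((u - v) / 2, (u + v) / 2)

print(to_uv(1, 3))   # prints (4, 2): the same point, new labels
print(to_xy(4, 2))   # prints (1.0, 3.0): and back again
```

Because the transformation is invertible, no information about the point is lost, only relabeled — the same property the Fourier transform will have below.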

Fourier transforms are just a fancy version of changes of coordinates. The difference is that, instead of coordinates on a two-dimensional space, we’re talking about coordinates on an *infinite*-dimensional space: the space of all functions. (And for technical reasons, Fourier transforms naturally live in the world of complex functions, where the value of the function at any point is a complex number.)

Think of it this way. To specify some function *f*(*x*), we give the value of the function *f* for every value of the variable *x*. In principle, an infinite number of numbers. But deep down, it’s not that different from giving the location of our point in the plane, which was just two numbers. We can certainly imagine taking the information contained in *f*(*x*) and expressing it in a different way, by “rotating the axes.”
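In the discrete case this “rotating the axes” picture is literal: sampling a function at *N* points turns it into a vector, and the discrete Fourier transform is multiplication by a unitary matrix — a rotation in complex *N*-dimensional space. A quick numerical check, assuming NumPy (the construction is a standard one, not specific to any particular source):

```python
import numpy as np

# Build the N-by-N unitary DFT matrix: F[j, k] = exp(-2*pi*i*j*k/N) / sqrt(N).
N = 8
jj, kk = np.meshgrid(np.arange(N), np.arange(N))
F = np.exp(-2j * np.pi * jj * kk / N) / np.sqrt(N)

# Unitarity: F times its conjugate transpose is the identity,
# so lengths are preserved -- the transform really is a "rotation."
print(np.allclose(F @ F.conj().T, np.eye(N)))  # True
```

Preserved lengths mean preserved information: the transformed vector describes exactly the same function, just expressed along different axes.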

That’s what a Fourier transform is. It’s a way of specifying a function that, instead of telling you the value of the function at each point, tells you the *amount of variation at each wavelength*. Just as we have a formula for switching between (*u, v*) and (*x, y*), there are formulas for switching between a function *f*(*x*) and its Fourier transform *f*(*ω*):

$$f(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx\,, \qquad f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(\omega)\, e^{i\omega x}\, d\omega\,.$$

Absorbing those formulas isn’t necessary to get the basic idea. If the function itself looks like a sine wave, it has a specific wavelength, and the Fourier transform is just a delta function (infinity at that particular wavelength, zero everywhere else). If the function is periodic but a bit more complicated, it might have just a few Fourier components.
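That last point can be checked numerically, assuming NumPy (the particular frequencies 5 and 12 are arbitrary choices for illustration):

```python
import numpy as np

# Sample a periodic function made of two sine waves on N grid points.
N = 1024
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(5 * x) + 0.5 * np.sin(12 * x)

# Its discrete Fourier transform has just two sizable components,
# at exactly the frequencies we put in.
amps = np.abs(np.fft.rfft(f)) / (N / 2)
peaks = np.nonzero(amps > 0.1)[0]
print(peaks)               # the two frequencies: 5 and 12
print(amps[5], amps[12])   # their relative strengths: about 1.0 and 0.5
```

An infinite-looking list of function values collapses to two numbers — which is why the frequency description is often the more economical one.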

In general, the Fourier transform *f*(*ω*) gives you “the amount of the original function that is periodic with period 2π/ω.” This is sometimes called the “frequency domain,” since there are obvious applications to signal processing, where we might want to take a signal whose intensity varies with time and pick out the relative strengths of its different frequencies. (Your eyes and ears do this automatically, when they decompose light into colors and sound into pitches. They’re just taking Fourier transforms.) Frequency, of course, is the inverse of wavelength, so it’s equally good to think of the Fourier transform as describing the “length domain.” A cosmologist who studies the large-scale distribution of galaxies will naturally take the Fourier transform of their positions to construct the power spectrum, revealing how much structure there is at different scales.

To my (biased) way of thinking, where Fourier transforms really come into their own is in quantum field theory. QFT tells us that the world is fundamentally made of waves, not particles, and it is extremely convenient to think about those waves by taking their Fourier transforms. (It is literally one of the first things one is told to do in any introduction to QFT.)

But it’s not just convenient, it’s a worldview-changing move. One way of characterizing Ken Wilson’s momentous achievement is to say “physics is organized by length scale.” Phenomena at high masses or energies are associated with short wavelengths, where our low-energy long-wavelength instruments cannot probe. (We need giant machines like the Large Hadron Collider to create high energies, because what we are really curious about are short distances.) But we can construct a perfectly good effective theory of just the wavelengths longer than a certain size — whatever size it is that our theoretical picture can describe. As physics progresses, we bring smaller and smaller length scales under the umbrella of our understanding.
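The “keep only the wavelengths longer than a certain size” move has a direct numerical analogue: transform, discard every Fourier mode above a cutoff, and transform back. A sketch in Python, assuming NumPy (the signal and the cutoff frequency are invented for illustration):

```python
import numpy as np

# A signal with slow structure plus fast, short-wavelength "UV" wiggles.
N = 512
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
signal = np.sin(3 * x) + 0.2 * np.sin(60 * x)

# "Integrate out" the short wavelengths: zero every Fourier mode
# above a cutoff frequency, then transform back.
cutoff = 10
modes = np.fft.rfft(signal)
modes[cutoff:] = 0
effective = np.fft.irfft(modes, n=N)

# The long-wavelength structure survives; the fast wiggles are gone.
print(np.max(np.abs(effective - np.sin(3 * x))))  # tiny (floating-point error)
```

The filtered signal is a perfectly good description of everything happening at long wavelengths, with no reference at all to what was discarded — the spirit of an effective theory.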

Without Fourier transforms, this entire way of thinking would be inaccessible. We should be very thankful for them — as long as we use them wisely.

Note that Joseph Fourier, inventor of the transform, is not the same as Charles Fourier, the utopian philosopher. Joseph, in addition to his work in math and physics, was also the first to propose the idea of the greenhouse effect. Sadly that’s not something we should be thankful for right now.

Omid Kokabee was arrested at the airport of Teheran in January 2011, just before taking a flight back to the University of Texas at Austin, after spending the winter break with his family. He was accused of communicating with a hostile government and after a trial, in which he was denied contact with a lawyer, he was sentenced to 10 years in Teheran’s Evin prison.

According to a letter written by Omid Kokabee, he was asked to work on classified research, and his arrest and detention were a consequence of his refusal. Since his detention, Kokabee has continued to assert his innocence, claiming that several human rights violations affected his interrogation and trial.

Since 2011, we, the Committee on International Freedom of Scientists (CIFS) of the American Physical Society, have protested the imprisonment of Omid Kokabee. Although this case has received continuous support from several scientific and international human rights organizations, the government of Iran has refused to release Kokabee.

Omid Kokabee has received two prestigious awards:

- The American Physical Society awarded him the Andrei Sakharov Prize “For his courage in refusing to use his physics knowledge to work on projects that he deemed harmful to humanity, in the face of extreme physical and psychological pressure.”
- The American Association for the Advancement of Science awarded Kokabee the Scientific Freedom and Responsibility Prize.

Amnesty International (AI) considers Kokabee a prisoner of conscience and has requested his immediate release.

Recently, the Committee of Concerned Scientists (CCS), AI, and CIFS prepared a letter addressed to the Iranian Supreme Leader Ali Khamenei asking that Omid Kokabee be released immediately. The letter was signed by 31 Nobel laureates. (An additional 13 Nobel laureates have signed this letter since the *Nature* blog post. See also this update from APS.)

Unfortunately, last month Kokabee’s health deteriorated, and he has been denied proper medical care. In response, the President of APS, Malcolm Beasley, has written a letter to Iranian President Rouhani calling for a medical furlough for Omid Kokabee so that he can receive proper medical treatment. AI has also taken further steps and has requested urgent medical care for Kokabee.

Very recently, Iran’s supreme court nullified Omid Kokabee’s original conviction and agreed to reconsider the case. Although this is positive news, it is not clear when the new trial will begin. Given Kokabee’s health, it is very important that he be granted a medical furlough as soon as possible.

More public engagement and awareness are needed to resolve this unacceptable violation of human rights and of the freedom of scientific research. You can help by tweeting and blogging about it, and by responding to the Urgent Action that AI has issued. Please note that the date on the Urgent Action is there to create an avalanche effect; it is not a deadline, nor is it the end of the action.

Alessandra Buonanno for the American Physical Society’s Committee on International Freedom of Scientists (CIFS).
