Category: Science

  • Einstein and Pi

    Each year, the 14th of March is celebrated by scientifically minded folks for two good reasons. First, it’s Einstein’s birthday (happy 135th, Albert!). Second, it’s Pi Day, because 3/14 is the closest calendrical approximation we have to the decimal expansion of pi, π = 3.1415927….

    Both of these features — Einstein and pi — are loosely related by playing important roles in science and mathematics. But is there any closer connection?

    Of course there is. We need look no further than Einstein’s equation. I mean Einstein’s real equation — not E=mc², which is perfectly fine as far as it goes, but a pretty straightforward consequence of special relativity rather than a world-foundational relationship in its own right. Einstein’s real equation is what you would find if you looked up “Einstein’s equation” in the index of any good GR textbook: the field equation relating the curvature of spacetime to energy sources, which serves as the bedrock principle of general relativity. It looks like this:

    R_{μν} − (1/2) R g_{μν} = 8πG T_{μν}

    It can look intimidating if the notation is unfamiliar, but conceptually it’s quite simple; if you don’t know all the symbols, think of it as a little poem in a foreign language. In words it is saying this:

    (gravity) = 8 π G × (energy and momentum).

    Not so scary, is it? The amount of gravity is proportional to the amount of energy and momentum, with the constant of proportionality given by 8πG, where G is a numerical constant.

    Hey, what is π doing there? It seems a bit gratuitous, actually. Einstein could easily have defined a new constant H simply by setting H=8πG. Then he wouldn’t have needed that superfluous 8π cluttering up his equation. Did he just have a special love for π, perhaps based on his birthday?

    The real story is less whimsical, but more interesting. Einstein didn’t feel like inventing a new constant because G was already in existence: it’s Newton’s constant of gravitation, which makes perfect sense. General relativity (GR) is the theory that replaces Newton’s version of gravitation, but at the end of the day it’s still gravity, and it has the same strength that it always did.

    So the real question is, why does π make an appearance when we make the transition from Newtonian gravity to general relativity?

    Well, here’s Newton’s equation for gravity, the famous inverse square law:

    F = G m1 m2 / r²

    It’s actually similar in structure to Einstein’s equation: the left hand side is the force of gravity between two objects, and on the right we find the masses m1 and m2 of the objects in question, as well as the constant of proportionality G. (For Newton, mass was the source of gravity; Einstein figured out that mass is just one form of energy, and upgraded the source of gravity to all forms of energy and momentum.) And of course we divide by the square of the distance r between the two objects. No π’s anywhere to be found.

    It’s a great equation, as physics equations go; one of the most influential in the history of science. But it’s also a bit puzzling, at least philosophically. It tells a story of action at a distance — two objects exert a gravitational force on each other from far away, without any intervening substance. Newton himself considered this to be an unacceptable state of affairs, although he didn’t really have a good answer:

    That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.

    But there is an answer to this conundrum. It’s to shift one’s focus from the force of gravity, F, to the gravitational potential field, Φ (Greek letter “phi”), from which the force can be derived. The field Φ fills all of space, taking some specific value at every point. In the vicinity of a single body of mass M, the gravitational potential field is given by this equation:

    Φ = −G M / r

    This equation bears a close resemblance to Newton’s original one. It depends inversely on the distance, rather than the distance squared, because it’s not the gravitational force directly; the force is given by the derivative (slope) of the field, which turns 1/r into 1/r².
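    As a quick sanity check, here is a minimal sketch (using sympy; the symbols are just stand-ins for the quantities in the text) showing that differentiating the 1/r potential does produce an inverse-square force:

```python
import sympy as sp

G, M, r = sp.symbols("G M r", positive=True)

# Gravitational potential field of a single body of mass M
Phi = -G * M / r

# The force (per unit test mass) is minus the derivative of the potential
F = -sp.diff(Phi, r)

print(F)  # -G*M/r**2 — the 1/r potential gives back the 1/r² force law
```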

    That’s nice, since we’ve replaced the spookiness of action at a distance with the pleasantly mechanical notion of a field filling all of space. Still no π’s, though.

    But our equation only tells us what happens when we have a single body with mass M. What if we have many objects, each creating its own gravitational field, or for that matter a gas or fluid spread throughout some region? Then we need to talk about the mass density, or the amount of mass per each little volume of space, conventionally denoted ρ (Greek letter “rho”). And indeed there is an equation that relates the gravitational potential field to an arbitrary mass density spread throughout space, known as Poisson’s equation:

    ∇²Φ = 4πG ρ

    The upside-down triangle is the gradient operator (here squared to make the Laplacian); it’s a fancy three-dimensional way of saying how the field is changing through space (its vectorial derivative). But even more exciting, π has now appeared on the right-hand side! Why is that?

    There is a technical mathematical explanation, of course, but here is the rough physical explanation. Whereas we were originally concerned (in Newton’s equation or the first equation for Φ) with the gravitational effect of a single body at a distance r, we’re now adding up all the accumulated effects of everything in the universe. That “adding up” (integrating) can be broken into two steps: (1) add up all the effects at some fixed distance r, and (2) add up the effects from all distances. In that first step, all the points at some distance r from any fixed location define a sphere centered on that location. So we’re really adding up effects spread over the area of a sphere. And the formula for the area of a sphere, of course, is:

    A = 4πr²

    Seems almost too trivial, but that’s really the answer. The reason π comes into Poisson’s equation and not Newton’s is that Newton cared about the force between two specific objects, while Poisson tells us how to calculate the potential as a function of a matter density spread all over the place, and in three dimensions “all over the place” means “all over the area of a sphere” and then “adding up each sphere.” (We add up spheres, rather than cubes or whatever, because spheres describe fixed distances from the point of interest, and gravity depends on distance.) And the area of a sphere, just like the circumference of a circle, is proportional to π.
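    You can see the 4π emerge numerically. In this sketch (illustrative units with G = M = 1, not anything from the post), the field strength GM/r² is constant over a sphere of radius R, so the total outward flux is just that strength times the sphere’s area — and integrating the area element R² sinθ dθ dφ over the sphere recovers the 4π that shows up in Poisson’s equation:

```python
import numpy as np

G = M = 1.0      # illustrative units
R = 2.0          # radius of the test sphere
n = 400          # grid resolution in each angle

# Integrate the area element dA = R^2 sin(theta) dtheta dphi over the sphere.
theta = np.linspace(0, np.pi, n)
phi = np.linspace(0, 2 * np.pi, n)
dA = np.outer(np.sin(theta), np.ones(n)) * R**2
area = np.trapz(np.trapz(dA, phi, axis=1), theta)

# Flux of the field through the sphere: constant strength G*M/R^2 times area.
flux = (G * M / R**2) * area

print(area / (4 * np.pi * R**2))   # ~1: the sphere's area is 4*pi*R^2
print(flux / (4 * np.pi * G * M))  # ~1: flux is 4*pi*G*M, independent of R
```

    The R-independence of the flux is the point: the R² in the sphere’s area cancels the 1/R² in the field, leaving exactly the factor 4πGM that sits on the right-hand side of Poisson’s equation.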


    So then what about Einstein? Back in Newtonian gravity, it was often convenient to use the gravitational potential field, but it wasn’t really necessary; you could always in principle calculate the gravitational force directly. But when Einstein formulated general relativity, the field concept became absolutely central. The thing one calculates is not the force due to gravity (indeed, there’s a sense in which gravity isn’t really a “force” in general relativity), but rather the geometry of spacetime. That is fixed by the metric tensor field, a complicated beast that includes as a subset what we call the gravitational potential field. Einstein’s equation is directly analogous to Poisson’s equation, not to Newton’s.

    So that’s the Einstein-Pi connection. Einstein figured out that gravity is best described by a field theory rather than as a direct interaction between individual bodies, and connecting fields to localized bodies involves integrating over the surface of a sphere, and the area of a sphere is proportional to π. The whole birthday thing is just a happy accident.

  • A Bit of Physics History: Ed Witten Introduces M-Theory

    The Second Superstring Revolution was, like most revolutions, a somewhat messy affair, with a number of pivotal steps along the way: understanding the role of membranes in 11-dimensional supergravity, the discovery of dualities in supersymmetric gauge theories, Polchinski’s appreciation of D-branes as dynamical extended objects in string theory, and of course Maldacena’s formulation of the AdS/CFT correspondence. But perhaps the high point was Ed Witten’s formulation of M-Theory in 1995. And I just noticed that Witten sharing it with the world was captured on video.

    Here is Witten’s paper:

    String Theory Dynamics In Various Dimensions
    Edward Witten

    The strong coupling dynamics of string theories in dimension d≥4 are studied. It is argued, among other things, that eleven-dimensional supergravity arises as a low energy limit of the ten-dimensional Type IIA superstring, and that a recently conjectured duality between the heterotic string and Type IIA superstrings controls the strong coupling dynamics of the heterotic string in five, six, and seven dimensions and implies S duality for both heterotic and Type II strings.

    Before this result, we knew about five different kinds of string theory, each living in ten dimensions: Type I, two different Type II’s, and two different “heterotic” theories. Then there was the most symmetric form of supergravity, living in 11 dimensions, which some people thought was interesting but others thought was a curiosity that had been superseded by string theory. To everyone’s amazement, Witten showed that all of these theories are simply different limiting cases of a single underlying structure. Nobody knows what that underlying theory really is (although there are a few different formulations that work in some contexts), but we know what to call it: M-theory.

    [Figure: the five superstring theories and 11-dimensional supergravity as limiting cases of M-theory]

    Now Amanda Gefter, author of the new book Trespassing on Einstein’s Lawn (and a recent guest-blogger at Cocktail Party Physics), takes to Twitter to point out something I wasn’t aware of: a video record of Witten’s famous 1995 talk at USC. (I’m pretty sure this is the celebrated talk, but my confidence isn’t 100%.) [Update: folks who should know are actually saying it might be a seminar soon thereafter at Stony Brook. Witten himself admits that he’s not sure.] It’s clearly a recording by someone in the audience, but I don’t know who.

    Most physics seminars are, shall we say, not all that historically exciting. But this one was recognized right away as something special. I was a postdoc at MIT at the time, and not in the audience myself, but I remember distinctly how the people who were there were buzzing about it when they returned home.

    Nature giveth, and Nature taketh away. The 1995 discovery of M-theory made string theory seem more promising than ever: there was now just a single theory, rather than five or six. Then the 1998 discovery that the universe is accelerating made people take more seriously the idea that there might be more than one way to compactify those extra dimensions down to the four we observe — and once you have more than one, you sadly end up with a preposterously high number (the string theory landscape). So even if there is only one unifying theory of everything, there seem to be a bajillion phases it can be in, which creates an enormous difficulty in trying to relate M-theory to reality. But we won’t know unless we try, will we?

  • Guest Post: Katherine Freese on Dark Matter Developments

    Katherine Freese The hunt for dark matter has been heating up once again, driven (as usual) by tantalizing experimental hints. This time the hints are coming mainly from outer space rather than underground laboratories, which makes them harder to check independently, but there’s a chance something real is going on. We need more data to be sure, as scientists have been saying since the time Eratosthenes measured the circumference of the Earth.

    As I mentioned briefly last week, Katherine Freese of the University of Michigan has a new book coming out, The Cosmic Cocktail, that deals precisely with the mysteries of dark matter. Katie was also recently at the UCLA Dark Matter Meeting, and has agreed to share some of her impressions with us. (She also insisted on using the photo on the right, as a way of reminding us that this is supposed to be fun.)


    Dark Matter Everywhere (at the biannual UCLA Dark Matter Meeting)

    The UCLA Dark Matter Meeting is my favorite meeting, period. It takes place every other year, usually at the Marriott Marina del Rey right near Venice Beach, but this year on the UCLA campus. Last week almost two hundred people congregated, both theorists and experimentalists, to discuss our latest attempts to solve the dark matter problem. Most of the mass in galaxies, including our Milky Way, is composed not of ordinary atomic material but of as-yet-unidentified dark matter. The goal of dark matter hunters is to resolve this puzzle. Experimentalist Dave Cline of the UCLA Physics Department runs the dark matter meeting, with talks often running from dawn till midnight. Every session goes way over, but somehow the disorganization leads everybody to have lots of discussion, interaction between theorists and experimentalists, and even more cocktails. It is, quite simply, the best meeting. I am usually on the organizing committee, and cannot resist sending in lots of names of people who will give great talks and add to the fun.

    Last week at the meeting we were treated to multiple hints of potential dark matter signals. To me the most interesting were the talks by Dan Hooper and Tim Linden on the observations of excess high-energy photons — gamma-rays — coming from the Central Milky Way, possibly produced by annihilating WIMP dark matter particles. (See this arxiv paper.) Weakly Interacting Massive Particles (WIMPs) are to my mind the best dark matter candidates. Since they are their own antiparticles, they annihilate among themselves whenever they encounter one another. The Center of the Milky Way has a large concentration of dark matter, so that a lot of this annihilation could be going on. The end products of the annihilation would include exactly the gamma-rays found by Hooper and his collaborators. They searched the data from the FERMI satellite, the premier gamma-ray mission (funded by NASA and DoE as well as various European agencies), for hints of excess gamma-rays. They found a clear excess extending to about 10 angular degrees from the Galactic Center. This excess could be caused by WIMPs weighing about 30 GeV, or 30 proton masses. Their paper called these results “a compelling case for annihilating dark matter.” After the talk, Dave Cline decided to put out a press release from the meeting, and asked the opinion of us organizers. Most significantly, Elliott Bloom, a leader of the FERMI satellite that obtained the data, had no objection, though the FERMI team itself has as yet issued no statement.

    Many putative dark matter signals have come and gone, and we will have to see if this one holds up. Two years ago the 130 GeV line was all the rage — gamma-rays of 130 GeV energy that were tentatively observed in the FERMI data towards the Galactic Center. (Slides from Andrea Albert’s talk.) This line, originally proposed by Stockholm’s Lars Bergstrom, would have been the expectation if two WIMPs annihilated directly to photons. People puzzled over some anomalies of the data, but with improved statistics there isn’t much evidence left for the line. The question is, will the 30 GeV WIMP suffer the same fate? As further data come in from the FERMI satellite we will find out.

    What about direct detection of WIMPs? Laboratory experiments deep underground, in abandoned mines or underneath mountains, have been searching for direct signals of astrophysical WIMPs striking nuclei in the detectors. At the meeting the SuperCDMS experiment hammered on light WIMP dark matter with negative results. The possibility of light dark matter, so popular until recently, remains puzzling. 10 GeV dark matter seemed to be detected in many underground laboratory experiments: DAMA, CoGeNT, CRESST, and in April 2013 even CDMS in their silicon detectors. Yet other experiments, XENON and LUX, saw no events, in drastic tension with the positive signals. (I told Rick Gaitskell, a leader of the LUX experiment, that I was very unhappy with him for these results, but as he pointed out, we can’t argue with nature.) Last week at the conference, SuperCDMS, the most recent incarnation of the CDMS experiment, looked to much lower energies and again saw nothing. (Slides from Lauren Hsu’s talk.) The question remains: are we comparing apples and oranges? These detectors are made of a wide variety of types of nuclei and we don’t know how to relate the results. Wick Haxton’s talk surprised me with its discussion of nuclear physics uncertainties I hadn’t been aware of, which in principle could reconcile all the disagreements between experiments, even DAMA and LUX. Most people think that the experimental claims of 10 GeV dark matter are wrong, but I am taking a wait-and-see attitude.

    We also heard about the hints of detection of a completely different dark matter candidate: sterile neutrinos. (Slides from George Fuller’s talk.) In addition to the three known neutrinos of the Standard Model of Particle Physics, there could be another one that doesn’t interact with the standard model. Yet its decay could lead to x-ray lines. Two separate groups found indications of lines in data from the Chandra and XMM-Newton space satellites that would be consistent with a 7 keV neutrino (7 millionths of a proton mass). Could it be that there is more than one type of dark matter particle? Sure, why not?

    On the last evening of the meeting, a number of us went to the Baja Cantina, our favorite spot for margaritas. Rick Gaitskell was smart: he talked us into the $60.00 pitchers, high enough quality that the 6AM alarm clocks the next day (that got many of us out of bed and headed to flights leaving from LAX) didn’t kill us completely. We have such a fun community of dark matter enthusiasts. May we find the stuff soon!

  • Effective Field Theory and Large-Scale Structure

    Been falling behind on my favorite thing to do on the blog: post summaries of my own research papers. Back in October I submitted a paper with two Caltech colleagues, postdoc Stefan Leichenauer and grad student Jason Pollack, on the intriguing intersection of effective field theory (EFT) and cosmological large-scale structure (LSS). Now’s a good time to bring it up, as there’s a great popular-level discussion of the idea by Natalie Wolchover in Quanta.

    So what is the connection between EFT and LSS? An effective field theory, as loyal readers know, is a way to describe what happens at low energies (or, equivalently, long wavelengths) without having a complete picture of what’s going on at higher energies. In particle physics, we can calculate processes in the Standard Model perfectly well without having a complete picture of grand unification or quantum gravity. It’s not that higher energies are unimportant, it’s just that all of their effects on low-energy physics can be summed up in their contributions to just a handful of measurable parameters.

    In cosmology, we consider the evolution of LSS from tiny perturbations at early times to the splendor of galaxies and clusters that we see today. It’s really a story of particles — photons, atoms, dark matter particles — more than a field theory (although of course there’s an even deeper description in which everything is a field theory, but that’s far removed from cosmology). So the right tool is the Boltzmann equation — not the entropy formula that appears on his tombstone, but the equation that tells us how a distribution of particles evolves in phase space. However, the number of particles in the universe is very large indeed, so it’s the most obvious thing in the world to make an approximation by “smoothing” the particle distribution into an effective fluid. That fluid has a density and a velocity, but also has parameters like an effective speed of sound and viscosity. As Leonardo Senatore, one of the pioneers of this approach, says in Quanta, the viscosity of the universe is approximately equal to that of chocolate syrup.
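    The basic smoothing move can be illustrated with a toy sketch (everything here — the 1-d box, the particle count, the smoothing scale — is illustrative, not from the actual EFT-of-LSS papers): bin discrete “particles” into a density field, then damp the short-wavelength modes with a Gaussian filter in Fourier space, leaving an effective long-wavelength fluid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d periodic universe: N "particles" binned into a density field.
N, nbins = 10_000, 256
positions = rng.random(N)
rho, _ = np.histogram(positions, bins=nbins, range=(0, 1))
rho = rho.astype(float)

# Smooth the particle distribution into an effective fluid by suppressing
# short-wavelength (high-k) modes with a Gaussian window in Fourier space.
k = np.fft.fftfreq(nbins, d=1 / nbins)
R_smooth = 0.05  # smoothing scale, as a fraction of the box
window = np.exp(-0.5 * (2 * np.pi * k * R_smooth) ** 2)
rho_smooth = np.fft.ifft(np.fft.fft(rho) * window).real

# The mean density (the k=0 mode) is untouched; only small-scale churn goes.
print(np.isclose(rho.mean(), rho_smooth.mean()))  # True
print(rho.std() > rho_smooth.std())               # True
```

    The job of the EFT is then to account for what that discarded small-scale churn does to the long-wavelength field, by packaging its effects into parameters like the effective sound speed and viscosity.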

    So the goal of the EFT of LSS program (which is still in its infancy, although there is an important prehistory) is to derive the correct theory of the effective cosmological fluid. That is, to determine how all of the complicated churning dynamics at the scales of galaxies and clusters feeds back onto what happens at larger distances where things are relatively smooth and well-behaved. It turns out that this is more than a fun thing for theorists to spend their time with; getting the EFT right lets us describe what happens even at some length scales that are formally “nonlinear,” and therefore would conventionally be thought of as inaccessible to anything but numerical simulations. I really think it’s the way forward for comparing theoretical predictions to the wave of precision data we are blessed with in cosmology.

    Here is the abstract for the paper I wrote with Stefan and Jason:

    A Consistent Effective Theory of Long-Wavelength Cosmological Perturbations
    Sean M. Carroll, Stefan Leichenauer, Jason Pollack

    Effective field theory provides a perturbative framework to study the evolution of cosmological large-scale structure. We investigate the underpinnings of this approach, and suggest new ways to compute correlation functions of cosmological observables. We find that, in contrast with quantum field theory, the appropriate effective theory of classical cosmological perturbations involves interactions that are nonlocal in time. We describe an alternative to the usual approach of smoothing the perturbations, based on a path-integral formulation of the renormalization group equations. This technique allows for improved handling of short-distance modes that are perturbatively generated by long-distance interactions.

    As useful as the EFT of LSS approach is, our own contribution is mostly on the formalism side of things. (You will search in vain for any nice plots comparing predictions to data in our paper — but do check out the references.) We try to be especially careful in establishing the foundations of the approach, and along the way we show that it’s not really a “field” theory in the conventional sense, as there are interactions that are nonlocal in time (a result also found by Carrasco, Foreman, Green, and Senatore). This is a formal worry, but doesn’t necessarily mean that the theory is badly behaved; one just has to work a bit to understand the time-dependence of the effective coupling constants.

    Here is a video from a physics colloquium I gave at NYU on our paper. A colloquium is intermediate in level between a public talk and a technical seminar, so there are some heavy equations at the end but the beginning is pretty motivational. Enjoy!

    Colloquium October 24th, 2013 — Effective Field Theory and Cosmological Large-Scale Structure

  • Dept. of Energy Support for Particle Theory: A “Calamity”

    One of the nice things that governments do is support basic scientific research — work that might help us better understand how the world works, but doesn’t have any direct technological or economic application. Particle physics and cosmology are great examples. In the U.S., much of the funding for these fields comes from the Office of High Energy Physics within the Office of Science at the Department of Energy (DOE).

    Now that support is crumbling — drastically. In the last couple of years, the DOE has radically changed how it carries out reviews of different university theory groups, to decide how much grant support each will get. All for ostensibly good reasons — leveling the playing field and all that. But, without much fanfare, the actual result has been a significant drop in funding for almost every major theory group in the country.

    Laurence Yaffe of the University of Washington, a respected particle and nuclear theorist, just released an analysis he informally carried out after serving a temporary assignment at the DOE. Here is his abstract (emphasis mine):

    Impacts of Recent Comparative Review Cycles on DOE-funded High Energy Theory
    L.G. Yaffe, University of Washington
    February 19, 2014

    A summary is presented of data obtained from a grass-roots effort to understand the effects of the FY13 and FY14 comparative review cycles on the DOE-funded portion of the US high energy theory community and, in particular, on graduate students and postdoctoral researchers who are beginning their careers. For a sample comprised of nearly all of the larger groups undergoing comparative review, total funding declined by an average of 23%, with numerous major groups receiving reductions in the 30–55% range. Funding available for postdoc or graduate student support declined over 30%, with many reductions in the 40–65% range. The total number of postdoc positions in this large sample of theory groups is declining by over 40%. The impacts on young researchers raise grave concerns regarding continued U.S. leadership in high energy theory.

    A 20% cut in funding in one year is kind of a big deal. A picture is worth a thousand words, so here are two of them; overall funding changes for all the different groups:

    [Chart: overall funding changes by theory group]

    and changes specifically in support for graduate students and postdocs:

    [Chart: changes in support for graduate students and postdocs, by group]

    Obviously this is unsustainable, unless as a society we make the decision that particle physics just isn’t worth doing. But hopefully things can be rectified at least a bit, to restore some of that money. Everyone I know is bemoaning the cuts, complaining that they have been turning away prospective grad students and postdocs more than ever before. I’m not necessarily against decreasing the number of postdocs (as opposed to grad students); the pipeline has to narrow somewhere, and there’s a sensible argument to be made to do it at that point. But we should do it deliberately and after thinking and talking about it, not as the haphazard result of some new bureaucratic procedures. It would be a shame to destroy our future prospects in this centrally important area of science.

  • The Many Worlds of Quantum Mechanics

    Greetings from Sihanoukville, Cambodia, or at least the waters immediately off. I’m here as part of Bright Horizons 19, a two-week cruise on the Holland America ship Volendam, in collaboration with Scientific American. We started in Hong Kong and have been working our way south, stopping a few times in Vietnam, and after this we’ll briefly visit Thailand before finishing in Singapore. A fascinating, once-in-a-lifetime experience, even if two weeks is an amount of time I can’t honestly afford to be taking off. Been getting a touch of work done here and there, but not as much as I would have liked, in between dashes ashore to sample the local cuisine. Although the local cuisine has been pretty spectacular, I have to admit.

    My job here is to give a few talks about physics and cosmology to the folks who signed up for the package — a public audience, but the kind of people whose idea of a good time while sailing the South China Sea is hearing talks about molecular biology or world history. Mostly my talks are variations of themes I’ve spoken on frequently before — the Higgs boson, the arrow of time, dark matter and dark energy. But to spice things up I decided to throw in something new, so I wrote up a talk on The Many Worlds of Quantum Mechanics.

    And here it is — the slides, at least. The content is roughly based on my explanation in From Eternity to Here, with a few improvements thrown in.

    Two basic goals here. One is to introduce QM to people who don’t know much more about it than a vague notion of “uncertainty” or “fluctuations.” And in particular, to focus on the conceptual foundations, rather than any of the other perfectly legitimate angles one could take: the historical development, the calculational basics, the experimental evidence, the role in modern technology, and so on. Hey, it’s my talk, I might as well concentrate on the parts I’m most fascinated by. So there’s a discussion of entanglement and decoherence that is a bit more specific and detailed than one would often get in a talk of this type, even if it is enlivened by silly pictures of cats and dogs.

    The second goal was to give a subtle sales pitch for the Many-Worlds interpretation. Really more damage control than full-on hard sell; the very idea of many worlds is so crazy-sounding and counterintuitive that my job is more to let people know that it’s actually quite a natural implication of the formalism, rather than a bit of ad hoc nonsense tacked on by theorists who have become unmoored from reality. I’m happy to bring up the outstanding issues with the approach, but I do want people to know it should be taken seriously.

    Comments welcome, especially since I’ve never tried this approach in a talk before. Of course by only seeing the slides you miss all the witty asides, but the basic substance should come through.

  • Reality, Pushed From Behind

    “Teleology” is a naughty word in certain circles — largely the circles that I often move in myself, namely physicists or other scientists who know what the word “teleology” means. To wit, it’s the concept of “being directed toward a goal.” In the good old days of Aristotle, our best understanding of the world was teleological from start to finish: acorns existed in order to grow into mighty oak trees; heavy objects wanted to fall and light objects to rise; human beings strove to fulfill their capacity as rational beings. Not everyone agreed, including my buddy Lucretius, but at the time it was a perfectly sensible view of the world.

    These days we know better, though the knowledge has been hard-won. The early glimmerings of the notion of conservation of momentum supported the idea that things just kept happening, rather than being directed toward a goal, and this view seemed to find its ultimate embodiment in the clockwork universe of Newtonian mechanics. (In technical terms, time evolution is described by differential equations fixed by initial data, not by future goals.) Darwin showed how the splendid variety of biological life could arise without being in any sense goal-directed or guided — although this obviously remains a bone of contention among religious people, even respectable philosophers. But the dominant paradigm among scientists and philosophers is dysteleological physicalism.

    However. Aristotle was a smart cookie, and dismissing him as an outdated relic is always a bad idea. Sure, maybe the underlying laws of nature are dysteleological, but surely there’s some useful sense in which macroscopic real-world systems can be usefully described using teleological language, even if it’s only approximate or limited in scope. (Here’s where I like to paraphrase Scott Derrickson: The universe has purposes. I know this because I am part of the universe, and I have purposes.) It’s okay, I think, to say things like “predators tend to have sharp teeth because it helps them kill and eat prey,” even if we understand that those causes are merely local and contingent, not transcendent. Stephen Asma defends this kind of view in an interesting recent article, although I would like to see more acknowledgement made of the effort required to connect the purposeless, mechanical underpinnings of the world to the purposeful, macroscopic biosphere. Such a connection can be made, but it requires some effort.

    Of course loyal readers all know where such a connection comes from: it’s the arrow of time. The underlying laws of physics don’t work in terms of any particular “pull” toward future goals, but the specific trajectory of our actual universe looks very different in the past than in the future. In particular, the past had a low entropy: we can reconcile the directedness of macroscopic time evolution with the indifference of microscopic dynamics by positing some sort of Past Hypothesis (see also). All of the ways in which physical objects behave differently toward the future than toward the past can ultimately be traced to the thermodynamic arrow of time.

    Which raises an interesting point that I don’t think is sufficiently appreciated: we now know enough about the real behavior of the physical world to understand that what looks to us like teleological behavior is actually, deep down, not determined by any goals in the future, but fixed by a boundary condition in the past. So while “teleological” might be acceptable as a rough macroscopic descriptor, a more precise characterization would say that we are being pushed from behind, not pulled from ahead.

    The question is, what do we call such a way of thinking? Apparently “teleology” is a word never actually used by Aristotle, but invented in the eighteenth century based on the Greek télos, meaning “end.” So perhaps what we want is an equivalent term, with “end” replaced by “beginning.” I know exactly zero ancient Greek, but from what I can glean from the internet there is an obvious choice: arche is the Greek word for beginning or origin. Sadly, “archeology” is already taken to mean something completely different, so we can’t use it.

    I therefore tentatively propose the word aphormeology to mean “originating from a condition in the past,” in contrast with teleology, “driven toward a goal in the future.” (Amazingly, a Google search for this word on 3 February 2014 returns precisely zero hits.) Remember — no knowledge of ancient Greek, but apparently aphorme means “a base of operations, a place from which a campaign is launched.” Which is not a terribly bad way of describing the cosmological Past Hypothesis when you think about it. (Better suggestions would be welcome, especially from anyone who actually knows Greek.)

    We live in a world where the dynamical laws are fundamentally dysteleological, but our cosmic history is aphormeological, which through the magic of statistical mechanics gives rise to the appearance of teleology in our macroscopic environment. A shame Aristotle and Lucretius aren’t around to appreciate the progress we’ve made.

  • Searching for the Science of Self

    Book release day! Not by me — I’ve gone on quasi-hiatus from book-writing, and for that matter from blogging, while I am happily getting some actual science done. But the brilliant and talented Jennifer Ouellette has come out with her best book yet — Me, Myself, and Why: Searching for the Science of Self.

    Jennifer’s last book was The Calculus Diaries: How Math Can Help You Lose Weight, Win in Vegas, and Survive a Zombie Apocalypse. That one stemmed from her conviction that, even though she had been an English major who did badly in math, it was important to learn the basics of calculus in order to appreciate the way it alters how we experience the world. But in doing the research for that book, she discovered something surprising: according to her high-school transcripts, she hadn’t done badly in math at all. In fact she got all A’s. Yet she left school convinced that she was bad at math.

    Where did that conviction come from? Was it society’s fault, man? Or did it come from her parents? And since she was adopted, which set of parents should be blamed? Clearly there was a science question here: what were the crucial influences that made her the person she eventually became?

    Thus, the new book. Here Jennifer traces the various scientific strands that weave together to make us the people we are, starting with some of the obvious strategies — genome sequencing, brain scans — and working up to some more (literally) brain-bending ideas about sexual identity, addiction, virtual reality, and the origins of consciousness.

    Me, Myself and Why (book trailer)

    Jennifer even convinced her innocent, straight-arrow husband to experiment briefly with hallucinogenic substances, in order to better understand how a temporary alteration in brain chemistry affects the self/other boundary. See Chapter Seven for the scandalous details.

  • What Scientific Ideas Are Ready for Retirement?

    Every year we look forward to the Edge Annual Question, and as usual it’s a provocative one: “What scientific idea is ready for retirement?” Part of me agrees with Ian McEwan’s answer, which is to unask the question, and argue that nothing should be retired. Unasking is almost always the right response to questions that beg other questions, but there’s also an argument to be made in favor of playing along, so that’s what I did.

    My answer was “Falsifiability.” More of a philosophical idea than a scientific one, but an idea that is bandied about by lazy scientists far more than it is invoked by careful philosophers. Thinking sensibly about the demarcation problem between science and non-science, especially these days, requires a bit more nuance than that.

    Modern physics stretches into realms far removed from everyday experience, and sometimes the connection to experiment becomes tenuous at best. String theory and other approaches to quantum gravity involve phenomena that are likely to manifest themselves only at energies enormously higher than anything we have access to here on Earth. The cosmological multiverse and the many-worlds interpretation of quantum mechanics posit other realms that are impossible for us to access directly. Some scientists, leaning on Popper, have suggested that these theories are non-scientific because they are not falsifiable.

    The truth is the opposite. Whether or not we can observe them directly, the entities involved in these theories are either real or they are not. Refusing to contemplate their possible existence on the grounds of some a priori principle, even though they might play a crucial role in how the world works, is as non-scientific as it gets.

    I’m also partial to Alan Guth’s answer: “The universe began in a low-entropy state.” Of course we all know that our observable universe had a relatively low entropy at the Big Bang; Alan is making the point that the observable universe might not be the whole thing, and the Big Bang might not have been the beginning, so it’s completely possible that the universe as a whole was never in what one might call a “low-entropy” state. Instead, starting from a generic state, entropy could increase in both directions, leading to a two-sided arrow of time. This has been one of my favorite ideas for a while now, and Alan and I are writing a paper with Chien-Yao Tseng that examines toy models with such behavior.
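    The two-sided arrow can also be illustrated with a toy model (again my own sketch, not anything from Alan’s answer): prepare a spatially clustered state with uncorrelated random velocities, then run the same time-symmetric dynamics both forward and backward from it. Coarse-grained entropy goes up in both directions, so the prepared state sits at an entropy minimum in the middle of the history, and observers on either side would each see their own thermodynamic arrow pointing away from it.

```python
import math
import random

random.seed(1)
N, GRID, DT, STEPS = 2000, 4, 0.01, 150

# A clustered "middle" state with uncorrelated velocities, in a unit box.
pos0 = [[random.uniform(0.3, 0.55), random.uniform(0.3, 0.55)] for _ in range(N)]
vel0 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def coarse_entropy(pos):
    """Shannon entropy of coarse-grained cell occupation probabilities."""
    counts = [0] * (GRID * GRID)
    for x, y in pos:
        i = min(int(x * GRID), GRID - 1)
        j = min(int(y * GRID), GRID - 1)
        counts[i * GRID + j] += 1
    return -sum((c / N) * math.log(c / N) for c in counts if c)

def evolve(sign):
    """Free particles with reflecting walls; sign=-1 runs time backward."""
    pos = [p[:] for p in pos0]
    vel = [[sign * v[0], sign * v[1]] for v in vel0]
    for _ in range(STEPS):
        for p, v in zip(pos, vel):
            for k in (0, 1):
                p[k] += v[k] * DT
                if p[k] < 0:
                    p[k], v[k] = -p[k], -v[k]
                if p[k] > 1:
                    p[k], v[k] = 2 - p[k], -v[k]
    return pos

s_mid = coarse_entropy(pos0)
s_future = coarse_entropy(evolve(+1))
s_past = coarse_entropy(evolve(-1))
print(s_past, s_mid, s_future)  # entropy is lowest at the middle state
```

The trick is that the clustered state was chosen directly, not produced by evolving anything, so its velocities carry no memory of a lower-entropy past; with nothing conditioning either direction, both time directions look like "the future" thermodynamically.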

    Here are some other interesting/provocative answers, picked unsystematically out of over 100,000 words overall. Remember that the titles are what the person wants to retire, not something they’re in favor of.

  • Buchalter Cosmology Prize

    Ari Buchalter is one of the many people who have successfully made the transition from graduate student and researcher in physics (Columbia PhD, Caltech postdoc) to the business world, where he is currently the CEO of MediaMath. But he never lost his interest in theoretical cosmology, which is completely appropriate — how our universe works is something everyone should be interested in, no matter what their day job might be.

    In order to promote innovative thinking in cosmology (experimental as well as theoretical), Ari has founded the Buchalter Cosmology Prize, which was just announced at the meeting of the American Astronomical Society. It will be an annual award, given to the best cosmology papers to have appeared on the arXiv, as decided by a panel of esteemed judges. (I’m one of the esteemed judges, which is a mixed blessing — should be a lot of fun, but it means I can’t win.) Any PhD or current graduate student in physics or astronomy is eligible to submit papers for consideration; this year’s deadline is 30 September. The winner will walk away with $10,000, and even third place will bag you $2,500.

    Currently, cosmology is in a situation where the dominant theoretical framework (Big Bang, Hubble expansion, dark energy and dark matter, possibly primordial inflation) is pretty darn good at fitting the data, but nevertheless has some worrisome conceptual issues. (Was there really inflation? Is there a multiverse? Is the dark energy a cosmological constant, and is the dark matter a WIMP? Why is the vacuum energy so small? Etc.) Hopefully a prize like this will help spur people to be just a tiny bit more bold and imaginative in tackling these issues than they would otherwise be.