Core Theory T-Shirts

Way back when, for purposes of giving a talk, I made a figure that displayed the world of everyday experience in one equation. The label reflects the fact that the laws of physics underlying everyday life are completely understood.

So now there are T-shirts. (See below to purchase your own.)

Core Theory T-shirt

It’s a good equation, representing the Feynman path-integral formulation of an amplitude for going from one field configuration to another one, in the effective field theory consisting of Einstein’s general theory of relativity plus the Standard Model of particle physics. It even made it onto an extremely cool guitar.

I’m not quite up to doing a comprehensive post explaining every term in detail, but here’s the general idea. Our everyday world is well-described by an effective field theory. So the fundamental stuff of the world is a set of quantum fields that interact with each other. Feynman figured out that you could calculate the transition between two configurations of such fields by integrating over every possible trajectory between them — that’s what this equation represents. The thing being integrated is the exponential of the action for this theory — as mentioned, general relativity plus the Standard Model. The GR part integrates over the metric, which characterizes the geometry of spacetime; the matter fields are a bunch of fermions, the quarks and leptons; the non-gravitational forces are gauge fields (photon, gluons, W and Z bosons); and of course the Higgs field breaks symmetry and gives mass to those fermions that deserve it. If none of that makes sense — maybe I’ll do it more carefully some other time.
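Schematically, the equation on the shirt has the form

\langle \text{out} | \text{in} \rangle = \int_{k < \Lambda} [\mathcal{D}g][\mathcal{D}A][\mathcal{D}\psi][\mathcal{D}\Phi]\, e^{iS[g, A, \psi, \Phi]}, \qquad S = S_{\rm GR} + S_{\rm SM},

where g is the metric, A stands for the gauge fields, ψ for the fermions, and Φ for the Higgs. (That’s just the structure, not a faithful transcription of every term on the shirt.)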

Gravity is usually thought to be the odd force out when it comes to quantum mechanics, but that’s only if you really want a description of gravity that is valid everywhere, even at (for example) the Big Bang. But if you only want a theory that makes sense when gravity is weak, like here on Earth, there’s no problem at all. The little notation k < Λ at the bottom of the integral indicates that we only integrate over low-frequency (long-wavelength, low-energy) vibrations in the relevant fields. (That’s what gives away that this is an “effective” theory.) In that case there’s no trouble including gravity. The fact that gravity is readily included in the EFT of everyday life has long been emphasized by Frank Wilczek. As discussed in his latest book, A Beautiful Question, he therefore advocates lumping GR together with the Standard Model and calling it The Core Theory.

I couldn’t agree more, so I adopted the same nomenclature for my own upcoming book, The Big Picture. There’s a whole chapter (more, really) in there about the Core Theory. After finishing those chapters, I rewarded myself by doing something I’ve been meaning to do for a long time — put the equation on a T-shirt, which you see above.

I’ve had T-shirts made before, with pretty grim results as far as quality is concerned. I knew this one would be especially tricky, what with all those tiny symbols. But I tried out Design-A-Shirt, and the result seems pretty impressively good.

So I’m happy to let anyone who might be interested go ahead and purchase shirts for themselves and their loved ones. Here are the links for light/dark and men’s/women’s versions. I don’t actually make any money off of this — you’re just buying a T-shirt from Design-A-Shirt. They’re a little pricey, but that’s what you get for the quality. I believe you can even edit colors and all that — feel free to give it a whirl and report back with your experiences.

Posted in Miscellany | 23 Comments

The Big Picture

Once again I have not really been the world’s most conscientious blogger, have I? Sometimes other responsibilities have to take precedence — such as looming book deadlines. And I’m working on a new book, and that deadline is definitely looming!

Sean Carroll: The Big Picture

And here it is. The title is The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. It’s scheduled to be published on May 17, 2016; you can pre-order it at Amazon and elsewhere right now.

An alternative subtitle was What Is, and What Matters. It’s a cheerfully grandiose (I’m supposed to say “ambitious”) attempt to connect our everyday lives to the underlying laws of nature. That’s a lot of ground to cover: I need to explain (what I take to be) the right way to think about the fundamental nature of reality, what the laws of physics actually are, sketch some cosmology and connect to the arrow of time, explore why there is something rather than nothing, show how interesting complex structures can arise in an undirected universe, talk about the meaning of consciousness and how it can be purely physical, and finally try to understand meaning and morality in a universe devoid of transcendent purpose. I’m getting tired just thinking about it.

From another perspective, the book is an explication of, and argument for, naturalism — and in particular, a flavor I label Poetic Naturalism. The “Poetic” simply means that there are many ways of talking about the world, and any one that is both (1) useful, and (2) compatible with the underlying fundamental reality, deserves a place at the table. Some of those ways of talking will simply be emergent descriptions of physics at higher levels, but some will also be matters of judgment and meaning.

As of right now the book is organized into seven parts, each with several short chapters. All that is subject to change, of course. But this will give you the general idea.

* Part One: Being and Stories

How we think about the fundamental nature of reality. Poetic Naturalism: there is only one world, but there are many ways of talking about it. Suggestions of naturalism: the world moves by itself, time progresses by moments rather than toward a goal. What really exists.

* Part Two: Knowledge and Belief

Telling different stories about the same underlying truth. Acquiring and updating reliable beliefs. Knowledge of our actual world is never perfect. Constructing consistent planets of belief, guarding against our biases.

* Part Three: Time and Cosmos

The structure and development of our universe. Time’s arrow and cosmic history. The emergence of memories, causes, and reasons. Why is there a universe at all, and is it best explained by something outside itself?

* Part Four: Essence and Possibility

Drawing the boundary between known and unknown. The quantum nature of deep reality: observation, entanglement, uncertainty. Vibrating fields and the Core Theory underlying everyday life. What we can say with confidence about life and the soul.

* Part Five: Complexity and Evolution

Why complex structures naturally arise as the universe moves from order to disorder. Self-organization and incremental progress. The origin of life, and its physical purpose. The anthropic principle, environmental selection, and our role in the universe.

* Part Six: Thinking and Feeling

The mind, the brain, and the body. What consciousness is, and how it might have come to be. Contemplating other times and possible worlds. The emergence of inner experiences from non-conscious matter. How free will is compatible with physics.

* Part Seven: Caring and Mattering

Why we can’t derive ought from is, even if “is” is all there is. And why we nevertheless care about ourselves and others, and why that matters. Constructing meaning and morality in our universe. Confronting the finitude of life, deciding what stories we want to tell along the way.

Hope that whets the appetite a bit. Now back to work with me.

Posted in Personal, Philosophy, Religion, Science, Words | 45 Comments

The Bayesian Second Law of Thermodynamics

Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics, one of my favorite notions in all of physics.

At least, entropy usually increases. If we define entropy by first defining “macrostates” — collections of individual states of the system that are macroscopically indistinguishable from each other — and then taking the logarithm of the number of microstates per macrostate, as portrayed in this blog’s header image, then we don’t expect entropy to always increase. According to Boltzmann, the increase of entropy is just really, really probable, since higher-entropy macrostates are much, much bigger than lower-entropy ones. But if we wait long enough — really long, much longer than the age of the universe — a macroscopic system will spontaneously fluctuate into a lower-entropy state. Cream and coffee will unmix, eggs will unbreak, maybe whole universes will come into being. But because the timescales are so long, this is just a matter of intellectual curiosity, not experimental science.

That’s what I was taught, anyway. But since I left grad school, physicists (and chemists, and biologists) have become increasingly interested in ultra-tiny systems, with only a few moving parts. Nanomachines, or the molecular components inside living cells. In systems like that, the occasional downward fluctuation in entropy is not only possible, it’s going to happen relatively frequently — with crucial consequences for how the real world works.

Accordingly, the last fifteen years or so has seen something of a revolution in non-equilibrium statistical mechanics — the study of statistical systems far from their happy resting states. Two of the most important results are the Crooks Fluctuation Theorem (by Gavin Crooks), which relates the probability of a process forward in time to the probability of its time-reverse, and the Jarzynski Equality (by Christopher Jarzynski), which relates the change in free energy between two states to the average amount of work done on a journey between them. (Professional statistical mechanics are so used to dealing with inequalities that when they finally do have an honest equation, they call it an “equality.”) There is a sense in which these relations underlie the good old Second Law; the Jarzynski equality can be derived from the Crooks Fluctuation Theorem, and the Second Law can be derived from the Jarzynski Equality. (Though the three relations were discovered in reverse chronological order from how they are used to derive each other.)
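For reference, in standard notation (W is the work done on the system, ΔF the free-energy difference between the two states, β the inverse temperature, and P_F and P_R the probabilities of the forward and reversed processes), the two results read

\frac{P_F(W)}{P_R(-W)} = e^{\beta(W - \Delta F)} \qquad \text{(Crooks)},

\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F} \qquad \text{(Jarzynski)}.

Applying Jensen’s inequality, ⟨e^x⟩ ≥ e^{⟨x⟩}, to the Jarzynski Equality immediately gives ⟨W⟩ ≥ ΔF, which is one standard statement of the Second Law.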

Still, there is a mystery lurking in how we think about entropy and the Second Law — a puzzle that, like many such puzzles, I never really thought about until we came up with a solution. Boltzmann’s definition of entropy (logarithm of number of microstates in a macrostate) is very conceptually clear, and good enough to be engraved on his tombstone. But it’s not the only definition of entropy, and it’s not even the one that people use most often.
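(For the record, the tombstone version is

S = k \log W,

where W is the number of microstates in the macrostate and k is Boltzmann’s constant.)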

Rather than referring to macrostates, we can think of entropy as characterizing something more subjective: our knowledge of the state of the system. That is, we might not know the exact position x and momentum p of every atom that makes up a fluid, but we might have some probability distribution ρ(x,p) that tells us the likelihood the system is in any particular state (to the best of our knowledge). Then the entropy associated with that distribution is given by a different, though equally famous, formula:

S = - \int \rho \log \rho.

That is, we take the probability distribution ρ, multiply it by its own logarithm, and integrate the result over all the possible states of the system, to get (minus) the entropy. A formula like this was introduced by Boltzmann himself, but these days is often associated with Josiah Willard Gibbs, unless you are into information theory, where it’s credited to Claude Shannon. Don’t worry if the symbols are totally opaque; the point is that low entropy means we know a lot about the specific state a system is in, and high entropy means we don’t know much at all.

In appropriate circumstances, the Boltzmann and Gibbs formulations of entropy and the Second Law are closely related to each other. But there’s a crucial difference: in a perfectly isolated system, the Boltzmann entropy tends to increase, but the Gibbs entropy stays exactly constant. In an open system — allowed to interact with the environment — the Gibbs entropy will go up, but it will only go up. It will never fluctuate down. (Entropy can decrease through heat loss, if you put your system in a refrigerator or something, but you know what I mean.) The Gibbs entropy is about our knowledge of the system, and as the system is randomly buffeted by its environment we know less and less about its specific state. So what, from the Gibbs point of view, can we possibly mean by “entropy rarely, but occasionally, will fluctuate downward”?

I won’t hold you in suspense. Since the Gibbs/Shannon entropy is a feature of our knowledge of the system, the way it can fluctuate downward is for us to look at the system and notice that it is in a relatively unlikely state — thereby gaining knowledge.
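Here is a toy numerical illustration of that move (my own Python sketch with made-up numbers, not the formalism of the paper): start with a broad distribution over coarse-grained states, update it via Bayes’s theorem on a hypothetical measurement outcome, and watch the Gibbs/Shannon entropy drop.

import numpy as np

def shannon_entropy(p):
    # Gibbs/Shannon entropy S = -sum(p log p) of a discrete distribution.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Prior: complete ignorance over 100 coarse-grained states.
prior = np.full(100, 1 / 100)

# Hypothetical measurement outcome: likely (90%) if the system is in
# one of the first 10 states, unlikely (1%) otherwise.
likelihood = np.where(np.arange(100) < 10, 0.90, 0.01)

# Bayes's theorem: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

print(shannon_entropy(prior))      # log(100), about 4.61
print(shannon_entropy(posterior))  # about 2.81: looking lowered the entropy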

But this operation of “looking at the system” doesn’t have a ready implementation in how we usually formulate statistical mechanics. Until now! My collaborators Tony Bartolotta, Stefan Leichenauer, Jason Pollack, and I have written a paper formulating statistical mechanics with explicit knowledge updating via measurement outcomes. (Some extra figures, animations, and codes are available at this web page.)

The Bayesian Second Law of Thermodynamics
Anthony Bartolotta, Sean M. Carroll, Stefan Leichenauer, and Jason Pollack

We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter’s knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically-evolving system degrades over time. The Bayesian Second Law can be written as ΔH(ρ_m, ρ) + ⟨Q⟩_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨Q⟩_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality. We demonstrate the formalism using simple analytical and numerical examples.

The crucial word “Bayesian” here refers to Bayes’s Theorem, a central result in probability theory.

Posted in arxiv, Science, Time | 49 Comments

Hypnotized by Quantum Mechanics

It remains embarrassing that physicists haven’t settled on the best way of formulating quantum mechanics (or some improved successor to it). I’m partial to Many-Worlds, but there are other smart people out there who go in for alternative formulations: hidden variables, dynamical collapse, epistemic interpretations, or something else. And let no one say that I won’t let alternative voices be heard! (Unless you want to talk about propellantless space drives, which are just crap.)

So let me point you to this guest post by Anton Garrett that Peter Coles just posted at his blog:

Hidden Variables: Just a Little Shy?

It’s quite a nice explanation of how the state of play looks to someone who is sympathetic to a hidden-variables view. (Fans of Bell’s Theorem should remember that what Bell did was to show that such variables must be nonlocal, not that they are totally ruled out.)

As a dialogue, it shares a feature that has been common to that format since the days of Plato: there are two characters, and the character who sympathizes with the author is the one who gets all the good lines. In this case the interlocutors are a modern physicist, Neo, and a smart, recently-resurrected nineteenth-century physicist, Nino. Trained in the miraculous successes of the Newtonian paradigm, Nino is very disappointed that physicists of the present era are so willing to simply accept a theory that can’t do better than predicting probabilistic outcomes for experiments. More in sorrow than in anger, he urges us to do better!

My own takeaway from this is that it’s not a good idea to take advice from nineteenth-century physicists. Of course we should try to do better, since we should always try that. But we should also feel free to abandon features of our best previous theories when new data and ideas come along.

A nice feature of the dialogue between Nino and Neo is the way it illuminates how much of one’s attitude toward formulations of quantum mechanics is driven by which basic assumptions about the world we are most happy to abandon, and which we prefer to cling to at any cost. That’s true for any of us — such is the case when there is legitimate ambiguity about the best way to move forward in science. It’s a feature, not a bug. The hope is that eventually we will be driven, by better data and theories, toward a common conclusion.

What I like about Many-Worlds is that it is perfectly realistic, deterministic, and ontologically minimal, and of course it fits the data perfectly. Equally importantly, it is a robust and flexible framework: you give me your favorite Hamiltonian, and we instantly know what the many-worlds formulation of the theory looks like. You don’t have to think anew and invent new variables for each physical situation, whether it’s a harmonic oscillator or quantum gravity.
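Concretely: the complete dynamical content of Many-Worlds, whatever Hamiltonian \hat{H} you choose, is just the Schrödinger equation,

i\hbar \frac{\partial}{\partial t} |\Psi\rangle = \hat{H}\, |\Psi\rangle,

applied to the wave function of the universe, with nothing else added.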

Of course, one gives something up: in Many-Worlds, while the underlying theory is deterministic, the experiences of individual observers are not predictable. (In that sense, I would say, it’s a nice compromise between our preferences and our experience.) It’s neither manifestly local nor Lorentz-invariant; those properties should emerge in appropriate situations, as often happens in physics. Of course there are all those worlds, but that doesn’t bother me in the slightest. For Many-Worlds, it’s the technical problems that bother me, not the philosophical ones — deriving classicality, recovering the Born Rule, and so on. One tends to think that technical problems can be solved by hard work, while metaphysical ones might prove intractable, which is why I come down the way I do on this particular question.

But the hidden-variables possibility is still definitely alive and well. And the general program of “trying to invent a better theory than quantum mechanics which would make all these distasteful philosophical implications go away” is certainly a worthwhile one. If anyone wants to suggest their favorite defenses of epistemic or dynamical-collapse approaches, feel free to leave them in comments.

Posted in Philosophy, Science | 102 Comments

Spacetime, Storified

I had some spare minutes the other day, and had been thinking about the fate of spacetime in a quantum universe, so I took to the internet to let my feelings be heard. Only a few minutes, though, so I took advantage of Twitter rather than do a proper blog post. But through the magic of Storify, I can turn the former into the latter!

Obviously the infamous 140-character limit of Twitter doesn’t allow the level of precision and subtlety one would always like to achieve when talking about difficult topics. But restrictions lead to creativity, and the results can actually be a bit more accessible than unfettered prose might have been.

Anyway, spacetime isn’t fundamental, it’s just a useful approximation in certain regimes. Someday we hope to know what it’s an approximation to.

Posted in Science | 16 Comments

Guest Post: Aidan Chatwin-Davies on Recovering One Qubit from a Black Hole

The question of how information escapes from evaporating black holes has puzzled physicists for almost forty years now, and while we’ve learned a lot we still don’t seem close to an answer. Increasingly, people who care about such things have been taking more seriously the intricacies of quantum information theory, and learning how to apply that general formalism to the specific issues of black hole information.

Now two students and I have offered a small contribution to this effort. Aidan Chatwin-Davies is a grad student here at Caltech, while Adam Jermyn was an undergraduate who has now gone on to do graduate work at Cambridge. Aidan came up with a simple method for getting out one “quantum bit” (qubit) of information from a black hole, using a strategy similar to “quantum teleportation.” Here’s our paper that just appeared on arxiv:

How to Recover a Qubit That Has Fallen Into a Black Hole
Aidan Chatwin-Davies, Adam S. Jermyn, Sean M. Carroll

We demonstrate an algorithm for the retrieval of a qubit, encoded in spin angular momentum, that has been dropped into a no-firewall unitary black hole. Retrieval is achieved analogously to quantum teleportation by collecting Hawking radiation and performing measurements on the black hole. Importantly, these methods only require the ability to perform measurements from outside the event horizon and to collect the Hawking radiation emitted after the state of interest is dropped into the black hole.

It’s a very specific — i.e. not very general — method: you have to have done measurements on the black hole ahead of time, and then drop in one qubit, and we show how to get it back out. Sadly it doesn’t work for two qubits (or more), so there’s no obvious way to generalize the procedure. But maybe the imagination of some clever person will be inspired by this particular thought experiment to come up with a way to get out two qubits, and we’ll be off.

I’m happy to host this guest post by Aidan, explaining the general method behind our madness.

If you were to ask someone on the bus which of Stephen Hawking’s contributions to physics he or she thought was most notable, the answer that you would almost certainly get is his prediction that a black hole should glow as if it were an object with some temperature. This glow is made up of thermal radiation which, unsurprisingly, we call Hawking radiation. As the black hole radiates, its mass slowly decreases and the black hole decreases in size. So, if you waited long enough and were careful not to enlarge the black hole by throwing stuff back in, then eventually it would completely evaporate away, leaving behind nothing but a bunch of Hawking radiation.

At first glance, this phenomenon of black hole evaporation challenges a central notion in quantum theory, which is that it should not be possible to destroy information. Suppose, for example, that you were to toss a book, or a handful of atoms in a particular quantum state, into the black hole. As the black hole evaporates into a collection of thermal Hawking particles, what happens to the information that was contained in that book or in the state of (what were formerly) your atoms? One possibility is that the information actually is destroyed, but then we would have to contend with some pretty ugly foundational consequences for quantum theory. Instead, it could be that the information is preserved in the state of the leftover Hawking radiation, albeit highly scrambled and difficult to distinguish from a thermal state. Besides being very pleasing on philosophical grounds, the latter possibility is supported by evidence from the AdS/CFT correspondence. Moreover, if the process of converting a black hole to Hawking radiation conserves information, then a stunning result of Hayden and Preskill says that for sufficiently old black holes, any information that you toss in comes back out almost as fast as possible!

Even so, exactly how information leaks out of a black hole and how one would go about converting a bunch of Hawking radiation to a useful state is quite mysterious. On that note, what we did in a recent piece of work was to propose a protocol whereby, under very modest and special circumstances, you can toss one qubit (a single unit of quantum information) into a black hole and then recover its state, and hence the information that it carried.

More precisely, the protocol describes how to recover a single qubit that is encoded in the spin angular momentum of a particle, i.e., a spin qubit. Spin is a property that any given particle possesses, just like mass or electric charge. For particles that have spin equal to 1/2 (like those that we consider in our protocol), at least classically, you can think of spin as a little arrow which points up or down and says whether the particle is spinning clockwise or counterclockwise about a line drawn through the arrow. In this classical picture, whether the arrow points up or down constitutes one classical bit of information. According to quantum mechanics, however, spin can actually exist in a superposition of being part up and part down; these proportions constitute one qubit of quantum information.
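In standard notation (ordinary textbook quantum mechanics, nothing specific to our protocol), the state of a spin qubit is

|\psi\rangle = \alpha\, |{\uparrow}\rangle + \beta\, |{\downarrow}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

where the complex amplitudes α and β encode the “proportions” just mentioned.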


So, how does one throw a spin qubit into a black hole and get it back out again? Suppose that Alice is sitting outside of a black hole, the properties of which she is monitoring. From the outside, a black hole is characterized by only three properties: its total mass, total charge, and total spin. This latter property is essentially just a much bigger version of the spin of an individual particle and will be important for the protocol.

Next, suppose that Alice accidentally drops a spin qubit into the black hole. First, she doesn’t panic. Instead, she patiently waits and collects one particle of Hawking radiation from the black hole. Crucially, when a Hawking particle is produced by the black hole, a bizarro version of the same particle is also produced, but just behind the black hole’s horizon (boundary) so that it falls into the black hole. This bizarro ingoing particle is the same as the outgoing Hawking particle, but with opposite properties. In particular, its spin state will always be flipped relative to the outgoing Hawking particle. (The outgoing Hawking particle and the ingoing particle are entangled, for those in the know.)
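Schematically, the joint spin state of the outgoing and ingoing particles is the maximally entangled pair

\frac{1}{\sqrt{2}} \left( |{\uparrow}\rangle_{\rm out}\, |{\downarrow}\rangle_{\rm in} + |{\downarrow}\rangle_{\rm out}\, |{\uparrow}\rangle_{\rm in} \right)

(up to conventions), so learning one spin tells you the other.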


The picture so far is that Alice, who is outside of the black hole, collects a single particle of Hawking radiation whilst the spin qubit that she dropped and the ingoing bizarro Hawking particle fall into the black hole. When the dropped particle and the bizarro particle fall into the black hole, their spins combine with the spin of the black hole—but remember! The bizarro particle’s spin was highly correlated with the spin of the outgoing Hawking particle. As such, the new combined total spin of the black hole becomes highly correlated with the spin of the outgoing Hawking particle, which Alice now holds. So, Alice measures the black hole’s new total spin state. Then, essentially, she can exploit the correlations between her held Hawking particle and the black hole to transfer the old spin state of the particle that she dropped into the hole to the Hawking particle that she now holds. Alice’s lost qubit is thus restored. Furthermore, Alice didn’t even need to know the precise state that her initial particle was in to begin with; the qubit is recovered regardless!

That’s the protocol in a nutshell. If the words “quantum teleportation” mean anything to you, then you can think of the protocol as a variation on the quantum teleportation protocol where the transmitting party is the black hole and measurement is performed in the total angular momentum basis instead of the Bell basis. Of course, this is far from a resolution of the information problem for black holes. However, it is certainly a neat trick which shows, in a special set of circumstances, how to “bounce” a qubit of quantum information off of a black hole.

Posted in arxiv, Guest Post, Science | 23 Comments

Why is the Universe So Damn Big?

I love reading io9; it’s such a fun mixture of science fiction, entertainment, and pure science. So I was happy to respond when their writer George Dvorsky emailed to ask an innocent-sounding question: “Why is the scale of the universe so freakishly large?”

You can find the fruits of George’s labors at this io9 post. But my own answer went on at sufficient length that I might as well put it up here as well. Of course, as with any “Why?” question, we need to keep in mind that the answer might simply be “Because that’s the way it is.”

Whenever we seem surprised or confused about some aspect of the universe, it’s because we have some pre-existing expectation for what it “should” be like, or what a “natural” universe might be. But the universe doesn’t have a purpose, and there’s nothing more natural than Nature itself — so what we’re really trying to do is figure out what our expectations should be.

The universe is big on human scales, but that doesn’t mean very much. It’s not surprising that humans are small compared to the universe, but big compared to atoms. That feature does have an obvious anthropic explanation — complex structures can only form on in-between scales, not at the very largest or very smallest sizes. Given that living organisms are going to be complex, it’s no surprise that we find ourselves at an in-between size compared to the universe and compared to elementary particles.

What is arguably more interesting is that the universe is so big compared to particle-physics scales. The Planck length, from quantum gravity, is 10^-33 centimeters, and the size of an atom is roughly 10^-8 centimeters. The difference between these two numbers is already puzzling — that’s related to the “hierarchy problem” of particle physics. (The size of atoms is fixed by the length scale set by electroweak interactions, while the Planck length is set by Newton’s constant; the two distances are extremely different, and we’re not sure why.) But the scale of the universe is roughly 10^29 centimeters across, which is enormous by any scale of microphysics. It’s perfectly reasonable to ask why.
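Putting numbers to that:

10^{29}\ \text{cm} \,/\, 10^{-8}\ \text{cm} \sim 10^{37}, \qquad 10^{29}\ \text{cm} \,/\, 10^{-33}\ \text{cm} \sim 10^{62}.

The universe is some 37 orders of magnitude bigger than an atom, and 62 orders of magnitude bigger than the Planck length.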

Part of the answer is that “typical” configurations of stuff, given the laws of physics as we know them, tend to be very close to empty space. (“Typical” means “high entropy” in this context.) That’s a feature of general relativity, which says that space is dynamical, and can expand and contract. So you give me any particular configuration of matter in space, and I can find a lot more configurations where the same collection of matter is spread out over a much larger volume of space. So if we were to “pick a random collection of stuff” obeying the laws of physics, it would be mostly empty space. Which our universe is, kind of.

Two big problems with that. The first is that even empty space has a natural length scale, which is set by the cosmological constant (energy of the vacuum). In 1998 we discovered that the cosmological constant is not quite zero, although it’s very small. The length scale that it sets (roughly, the distance over which the curvature of space due to the cosmological constant becomes appreciable) is indeed the size of the universe today — about 10^26 centimeters. (Note that the cosmological constant goes down as this length scale goes up, roughly as one over its square — so the question “Why is the cosmological-constant length scale so large?” is the same as “Why is the cosmological constant so small?”)
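In formulas (standard general-relativity conventions, with c = 1), the curvature length associated with a cosmological constant Λ is roughly

L_\Lambda \sim 1/\sqrt{\Lambda},

so the observed L_Λ of about 10^26 centimeters corresponds to an extremely tiny Λ.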

This raises two big questions. The first is the “coincidence problem”: the universe is expanding, but the length scale associated with the cosmological constant is a constant, so why are they approximately equal today? The second is simply the “cosmological constant problem”: why is the cosmological constant scale so enormously larger than the Planck scale, or even than the atomic scale? It’s safe to say that right now there are no widely-accepted answers to either of these questions.

So roughly: the answer to “Why is the universe so big?” is “Because the cosmological constant is so small.” And the answer to “Why is the cosmological constant so small?” is “Nobody knows.”

The second problem: typical configurations of stuff tend to look like empty space, but our universe, while relatively empty, isn’t *that* empty. It has over a hundred billion galaxies, with a hundred billion stars each, and over 10^50 atoms per star. Worse, there are maybe 10^88 particles (mostly photons and neutrinos) within the observable universe. That’s a lot of particles! A much more natural state of the universe would be enormously emptier than that. Indeed, as space expands the density of particles dilutes away — we’re headed toward a much more natural state, which will be much emptier than the universe we see today.
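Multiplying out those round numbers:

10^{11}\ \text{galaxies} \times 10^{11}\ \text{stars} \times 10^{50}\ \text{atoms} \sim 10^{72}

atoms locked up in stars, with the quoted total of 10^88 particles dominated by photons and neutrinos.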

So, given what we know about physics, the real question is “Why are there so many particles in the observable universe?” That’s one angle on the question “Why is the entropy of the observable universe so small?” And of course the density of particles was much higher, and the entropy much lower, at early times. These questions are also ones to which we have no good answers at the moment.

Posted in Science | 68 Comments

Yoichiro Nambu

It was very sad to hear yesterday that Yoichiro Nambu has died. He was 94, so it was after a very long and full life.

Nambu was one of the greatest theoretical physicists of the 20th century, although not one with a high public profile. Among his contributions:

  • Being the first to really understand spontaneous symmetry breaking in quantum field theory, work for which he won a (very belated) Nobel Prize in 2008. We now understand the pion as a (pseudo-) “Nambu-Goldstone boson.”
  • Suggesting that quarks might come in three colors, and those colors might be charges for an SU(3) gauge symmetry, giving rise to force-carrying particles called gluons.
  • Proposing the first relativistic string theory, based on what is now called the Nambu-Goto action.

So — not too shabby.

But despite his outsized accomplishments, Nambu was quiet, proper, it’s even fair to say “shy.” He was one of those physicists who talked very little, and was often difficult to understand when he did talk, but if you put in the effort to follow him you would invariably be rewarded. One of his colleagues at the University of Chicago, Bruce Winstein, was charmed by the fact that Nambu was an experimentalist at heart; at home, apparently, he kept a little lab, where he would tinker with electronics to take a break from solving equations.

Any young person in science might want to read this profile of Nambu by his former student Madhusree Mukerjee. In it, Nambu tells of when he first came to the US from Japan, to be a postdoctoral researcher at the Institute for Advanced Study in Princeton. “Everyone seemed smarter than I,” Nambu recalls. “I could not accomplish what I wanted to and had a nervous breakdown.”

If Yoichiro Nambu can have a nervous breakdown because he didn’t feel smart enough, what hope is there for the rest of us?

Here are a few paragraphs I wrote about Nambu and spontaneous symmetry breaking in The Particle at the End of the Universe.

Posted in Science | 11 Comments

Infinite Monkey Cage

The Infinite Monkey Cage is a British science/entertainment show put on by the dynamic duo of physicist Brian Cox and comedian Robin Ince. It exists as a radio program, a podcast, and an occasional live show. There are laughs, a bit of education, and some guests for the hosts to spar with. The popular-science ecosystem is a lot different in the UK than it is here in the US; scientists and science communicators can generally have a much higher profile, and a show like this can really take off.

So it was a great honor for me to appear as one of the guests when the show breezed through LA back in March. It was a terrific event, as you might guess from the other guests: comedian Joe Rogan, TV writer David X. Cohen, and one Eric Idle, who used to play in the Rutles. And now selected bits of the program can be listened to at home, courtesy of this handy podcast link, or directly on iTunes.


Be sure to check out the other stops on the IMC tour of the US, which included visits to NYC, Chicago, and San Francisco, featuring many friends-of-the-blog along the way.

These guys, of course, are heavy hitters, so you never know who is going to show up at one of these things. Their relationship with Eric Idle goes back quite a ways, and he actually composed and performed a theme song for the show (below). Naturally, since he was on stage in LA, they asked him to do a live version, which was a big hit. And there in the band, performing on ukulele for just that one song, was Jeff Lynne, of the Electric Light Orchestra. Maybe a bit under-utilized in this context, but why not get the best when you can?

Posted in Entertainment, Science and the Media | 12 Comments

Why Is There Dark Matter?

Years ago I read an article by Martin Rees, in which he surveyed the options for what the dark matter of the universe might be. I forget the exact wording, but near the end he said something like “There are so many candidates, it would be quite surprising to find ourselves living in a universe without dark matter.”

I was reminded of this when I saw a Quantum Diaries post by Alex Millar, entitled “Why Dark Matter Exists.” Why do we live in a universe with five times as much dark matter as ordinary matter, anyway? As it turns out, the post was more about explaining all of the wonderful evidence we have that there is so much dark matter. That’s a very respectable question, one that I’ve covered now and again. The less-respectable (but still interesting to me) question is, Why is the universe like that? Is the existence of dark matter indeed unsurprising, or is it an unusual feature that we should take as an important clue as to the nature of our world?


Generally, physicists love asking these kinds of questions (“why does the universe look this way, rather than that way?”), and yet are terribly sloppy at answering them. Questions about surprise and probability require a measure: a way of assigning, to each set of possibilities, some kind of probability number. Your answer wholly depends on how you assign that measure. If you have a coin, and your probability measure is “it will be heads half the time and tails half the time,” then getting twenty heads in a row is very surprising. If you have reason to think the coin is loaded, and your measure is “it comes up heads almost every time,” then twenty heads in a row isn’t surprising at all. Yet physicists love to bat around these questions in reference to the universe itself, without really bothering to justify one measure rather than another.
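The coin makes the point quantitatively; here is the two-line check (the 99% chance of heads for the loaded coin is just an illustrative assumption):

p_fair = 0.5 ** 20     # fair measure: twenty heads has probability ~9.5e-7
p_loaded = 0.99 ** 20  # loaded measure, assuming P(heads) = 0.99: ~0.82
print(p_fair, p_loaded)  # same data, wildly different degrees of surprise

The data are identical; only the measure, and hence the surprise, differs.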

With respect to dark matter, we’re contemplating a measure over all the various ways the universe could be, including both the laws of physics (which tell us what particles there can be) and the initial conditions (which set the stage for the later evolution). Clearly finding the “right” such measure is pretty much hopeless! But we can try to set up some reasonable considerations, and see where that leads us.

Posted in Science | 53 Comments