Should Scientific Progress Affect Religious Beliefs?

Sure it should. Here’s a new video from Closer to Truth, in which I’m chatting briefly with Robert Lawrence Kuhn about the question. “New” in the sense that it was just put on YouTube, although we taped it back in 2011. (Now my formulations would be considerably more sophisticated, given the wisdom that comes with age.)

It’s interesting that the “religious beliefs are completely independent of evidence and empirical investigation” meme has enjoyed such success in certain quarters that people express surprise to learn of the existence of theologians and believers who still think we can find evidence for the existence of God in our experience of the world. In reality, there are committed believers (“sophisticated” and otherwise) who feel strongly that we have evidence for God in the same sense that we have evidence for gluons or dark matter — because it’s the best way to make sense of the data — just as there are others who think that our knowledge of God is of a completely different kind, and therefore escapes scientific critique. It’s part of the problem that theism is not well defined.

One can go further than I did in the brief clip above, to argue that any notion of God that can’t be judged on the basis of empirical evidence isn’t much of a notion at all. If God exists but has no effect on the world whatsoever — the actual world we experience could be precisely the same even without God — then there is no reason to believe in it, and indeed one can draw no conclusions whatsoever (about right and wrong, the meaning of life, etc.) from positing it. Many people recognize this, and fall back on the idea that God is in some sense necessary; there is no possible world in which he doesn’t exist. To which the answer is: “No he’s not.” Defenses of God’s status as necessary ultimately come down to some other assertion of a purportedly-inviolable metaphysical principle, which can always simply be denied. (The theist could win such an argument by demonstrating that the naturalist’s beliefs are incoherent in the absence of such principles, but that never actually happens.)

I have more sympathy for theists who do try to ground their belief in evidence, rather than those who insist that evidence is irrelevant. At least they are playing the game in the right way, even if I disagree with their conclusions. Despite what Robert suggests in the clip above, the existence of disagreement among smart people does not imply that there is not a uniquely right answer!

Posted in Philosophy, Religion | 45 Comments

Effective Field Theory MOOC from MIT

Faithful readers are well aware of the importance of effective field theory in modern physics. EFT provides, in a nutshell, the best way we have to think about the fundamental dynamics of nature, from the physics underlying everyday life to the formation of structure in the universe.

And now you can learn about the real thing! MIT is one of the many colleges and universities that are doing a great job putting top-quality lecture courses online, such as the introduction to quantum mechanics I recently mentioned. (See the comments of that post for other goodies.) Now they’ve announced a course at a decidedly non-introductory level: a graduate course in effective field theory, taught by Caltech alum Iain Stewart. This is the real enchilada, the same stuff a second-year grad student in particle theory at MIT would be struggling with. If you want to learn how to really think about naturalness, or how to organize what we learn from experiments at the LHC, this would be a great place to start. (Assuming you already know the basics of quantum field theory.)


Classes start Sept. 16. I would love to take it myself, but I have other things on my plate at the moment — anyone who does take it, chime in and let us know how it goes.

Posted in Academia, Science | 15 Comments

Single Superfield Inflation: The Trailer

This is amazing. (Via Bob McNees and Michael Nielsen on Twitter.)

Backstory for the puzzled: here is a nice paper that came out last month, on inflation in supergravity.

Inflation in Supergravity with a Single Chiral Superfield
Sergei V. Ketov, Takahiro Terada

We propose new supergravity models describing chaotic Linde- and Starobinsky-like inflation in terms of a single chiral superfield. The key ideas to obtain a positive vacuum energy during large field inflation are (i) stabilization of the real or imaginary partner of the inflaton by modifying a Kahler potential, and (ii) use of the crossing terms in the scalar potential originating from a polynomial superpotential. Our inflationary models are constructed by starting from the minimal Kahler potential with a shift symmetry, and are extended to the no-scale case. Our methods can be applied to more general inflationary models in supergravity with only one chiral superfield.

Supergravity is simply the supersymmetric version of Einstein’s general theory of relativity, but unlike GR (where you can consider just about any old collection of fields to be the “source” of gravity), the constraints of supersymmetry place quite specific requirements on what counts as the “stuff” that creates the gravity. In particular, the allowed stuff comes in the form of “superfields,” which are combinations of boson and fermion fields. So if you want to have inflation within supergravity (which is a very natural thing to want), you have to do a bit of exploring around within the allowed set of superfields to get everything to work. Renata Kallosh and Andrei Linde, for example, have been examining this problem for quite some time.
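
For orientation (this is the standard N=1 supergravity formula, not anything special to the new paper): the scalar potential is built out of the Kähler potential K and the superpotential W as, in reduced Planck units,

V = e^{K}\left( K^{i\bar{j}}\, D_i W \, \overline{D_j W} - 3\,|W|^2 \right), \qquad D_i W = \partial_i W + (\partial_i K)\, W,

so choosing K and W fixes the inflaton’s potential. The “crossing terms” in the abstract are, as I read it, the pieces of this expression that mix different monomials of a polynomial W.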

What Ketov and Terada have managed to do is boil the necessary ingredients down to a minimal amount: just a single superfield. Very nice, and worth celebrating. So why not make a movie-like trailer to help generate a bit of buzz?

Which is just what Takahiro Terada, a PhD student at the University of Tokyo, has done. The link to the YouTube video appeared in an unobtrusive comment on the arXiv page for the revised version of their paper. iMovie provides a template for making such trailers, so it can’t be all that hard to do — but (1) nobody else does it, so, genius, and (2) it’s a pretty awesome job, with just the right touch of humor.

I wouldn’t have paid nearly as much attention to the paper without the trailer, so: mission accomplished. Let’s see if we can’t make this a trend.

Posted in arxiv, Humor, Science | 28 Comments

Quantum Foundations of a Classical Universe

Greetings from sunny (for the moment) Yorktown Heights, NY, home of IBM’s Watson Research Center. I’m behind on respectable blogging (although it’s been nice to see some substantive conversation on the last couple of comment threads), and I’m at a conference all week here, so that situation is unlikely to change dramatically in the next few days.

But the conference should be great — a small workshop, Quantum Foundations of a Classical Universe. We’re going to be arguing about how we’re supposed to connect wave functions and quantum observables to the everyday world of space and stuff. I will mostly be paying attention to the proceedings, but I might occasionally interject a tweet if something interesting/amusing happens. I’m told that some sort of proceedings will eventually be put online.

Update: Trying something new here. I’ve been tweeting about the workshop under the hashtag #quantumfoundations. So here I am using Storify to collect those tweets, making a quasi-live-blog on the cheap. Let’s see if it works. Continue reading

Posted in Science | 26 Comments

Quantum Sleeping Beauty and the Multiverse

Hidden in my papers with Chip Sebens on Everettian quantum mechanics is a simple solution to a fun philosophical problem with potential implications for cosmology: the quantum version of the Sleeping Beauty Problem. It’s a classic example of self-locating uncertainty: knowing everything there is to know about the universe except where you are in it. (Skeptic’s Play beat me to the punch here, but here’s my own take.)

The setup for the traditional (non-quantum) problem is the following. Some experimental philosophers enlist the help of a subject, Sleeping Beauty. She will be put to sleep, and a coin will be flipped. If it comes up heads, Beauty will be awakened on Monday and interviewed; then she will (voluntarily) have all her memories of being awakened wiped out, and be put to sleep again. Then she will be awakened again on Tuesday, and interviewed once again. If the coin came up tails, on the other hand, Beauty will only be awakened on Monday. Beauty herself is fully aware ahead of time of what the experimental protocol will be.

So in one possible world (heads) Beauty is awakened twice, in identical circumstances; in the other possible world (tails) she is only awakened once. Each time she is asked a question: “What is the probability you would assign that the coin came up tails?”

Modified from a figure by Stuart Armstrong.


(Some other discussions switch the roles of heads and tails from my example.)

The Sleeping Beauty puzzle is still quite controversial. There are two answers one could imagine reasonably defending.

  • “Halfer” — Before going to sleep, Beauty would have said that the probability of the coin coming up heads or tails would be one-half each. Beauty learns nothing upon waking up. She should assign a probability one-half to it having been tails.
  • “Thirder” — If Beauty were told upon waking that the coin had come up heads, she would assign equal credence to it being Monday or Tuesday. But if she were told it was Monday, she would assign equal credence to the coin being heads or tails. The only consistent apportionment of credences is to assign 1/3 to each possibility, treating each possible waking-up event on an equal footing. (A quick simulation after this list makes the counting explicit.)
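
For the counting-inclined, here is a toy simulation in Python (mine, purely illustrative). Conditioned on an experiment, tails comes up half the time; conditioned on an awakening, tails precedes only a third of them. Which conditioning Beauty should use for her credence is exactly what the puzzle is about.

import random

# Toy Sleeping Beauty protocol: heads -> awakened Monday and Tuesday,
# tails -> awakened Monday only. (Purely illustrative.)
trials = 100_000
tails_experiments = 0
awakenings = 0
tails_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    if heads:
        awakenings += 2                   # Monday and Tuesday awakenings
    else:
        tails_experiments += 1
        awakenings += 1                   # Monday awakening only
        tails_awakenings += 1

print(tails_experiments / trials)         # ~ 1/2, the "halfer" number
print(tails_awakenings / awakenings)      # ~ 1/3, the "thirder" number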

The Sleeping Beauty puzzle has generated considerable interest. It’s exactly the kind of wacky thought experiment that philosophers just eat up. But it has also attracted attention from cosmologists of late, because of the measure problem in cosmology. In a multiverse, there are many classical spacetimes (analogous to the coin toss) and many observers in each spacetime (analogous to being awakened on multiple occasions). Really the SB puzzle is a test-bed for cases of “mixed” uncertainties from different sources.

Chip and I argue that if we adopt Everettian quantum mechanics (EQM) and our Epistemic Separability Principle (ESP), everything becomes crystal clear. A rare case where the quantum-mechanical version of a problem is actually easier than the classical version. Continue reading

Posted in Philosophy, Science | 245 Comments

Why Probability in Quantum Mechanics is Given by the Wave Function Squared

One of the most profound and mysterious principles in all of physics is the Born Rule, named after Max Born. In quantum mechanics, particles don’t have classical properties like “position” or “momentum”; rather, there is a wave function that assigns a (complex) number, called the “amplitude,” to each possible measurement outcome. The Born Rule is then very simple: it says that the probability of obtaining any possible measurement outcome is equal to the square of the corresponding amplitude. (The wave function is just the set of all the amplitudes.)

Born Rule:     \mathrm{Probability}(x) = |\mathrm{amplitude}(x)|^2.

The Born Rule is certainly correct, as far as all of our experimental efforts have been able to discern. But why? Born himself kind of stumbled onto his Rule. Here is an excerpt from his 1926 paper:

[Image: excerpt from Born’s 1926 paper.]

That’s right. Born’s paper was rejected at first, and when it was later accepted by another journal, he didn’t even get the Born Rule right. At first he said the probability was equal to the amplitude, and only in an added footnote did he correct it to being the amplitude squared. And a good thing, too, since amplitudes can be negative or even imaginary!
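
To see why the magnitude-squared matters, take a made-up example (not one from Born’s paper): a spin in the superposition

\Psi = \tfrac{1}{\sqrt{2}}\,\psi_{\mathrm{up}} - \tfrac{i}{\sqrt{2}}\,\psi_{\mathrm{down}}

has amplitudes 1/\sqrt{2} and -i/\sqrt{2}. Neither of those is a sensible probability on its own, but their squared magnitudes, |1/\sqrt{2}|^2 = 1/2 and |-i/\sqrt{2}|^2 = 1/2, are non-negative and add up to one, as probabilities should.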

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.
  3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
  4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
  5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

It’s an ungainly mess, we all agree. You see that the Born Rule is simply postulated right there, as #4. Perhaps we can do better.
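
To make postulates 3 through 5 concrete, here is a minimal numerical sketch in Python (entirely illustrative, not part of any textbook treatment): we measure the x-spin of a particle prepared with its spin definitely up along the z-axis.

import numpy as np

# Observable: spin along x, represented by the Pauli-x matrix.
Sx = np.array([[0, 1],
               [1, 0]], dtype=complex)

# State: spin definitely "up" along z.
psi = np.array([1, 0], dtype=complex)

# Postulate 3: the possible measurement outcomes are the eigenvalues.
eigenvalues, eigenvectors = np.linalg.eigh(Sx)      # -1 and +1

# Postulate 4 (the Born Rule): probability = |amplitude|^2, where the
# amplitude is the overlap of the state with each eigenvector.
amplitudes = eigenvectors.conj().T @ psi
probabilities = np.abs(amplitudes) ** 2             # [0.5, 0.5]

# Postulate 5: upon observing (say) the +1 outcome, the wave function
# "collapses" onto the corresponding eigenvector.
collapsed = eigenvectors[:, np.argmax(eigenvalues)]

print(eigenvalues, probabilities, collapsed)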

Of course we can do better, since “textbook quantum mechanics” is an embarrassment. There are other formulations, and you know that my own favorite is Everettian (“Many-Worlds”) quantum mechanics. (I’m sorry I was too busy to contribute to the active comment thread on that post. On the other hand, a vanishingly small percentage of the 200+ comments actually addressed the point of the article, which was that the potential for many worlds is automatically there in the wave function no matter what formulation you favor. Everett simply takes them seriously, while alternatives need to go to extra efforts to erase them. As Ted Bunn argues, Everett is just “quantum mechanics,” while collapse formulations should be called “disappearing-worlds interpretations.”)

Like the textbook formulation, Everettian quantum mechanics also comes with a list of postulates. Here it is:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.

That’s it! Quite a bit simpler — and the two postulates are exactly the same as the first two of the textbook approach. Everett, in other words, is claiming that all the weird stuff about “measurement” and “wave function collapse” in the conventional way of thinking about quantum mechanics isn’t something we need to add on; it comes out automatically from the formalism.

The trickiest thing to extract from the formalism is the Born Rule. That’s what Charles (“Chip”) Sebens and I tackled in our recent paper: Continue reading

Posted in arxiv, Philosophy, Science | 96 Comments

Galaxies That Are Too Big To Fail, But Fail Anyway

Dark matter exists, but there is still a lot we don’t know about it. Presumably it’s some kind of particle, but we don’t know how massive it is, what forces it interacts with, or how it was produced. On the other hand, there’s actually a lot we do know about the dark matter. We know how much of it there is; we know roughly where it is; we know that it’s “cold,” meaning that the average particle’s velocity is much less than the speed of light; and we know that dark matter particles don’t interact very strongly with each other. Which is quite a bit of knowledge, when you think about it.

Fortunately, astronomers are pushing forward to study how dark matter behaves as it is spread through the universe, and the results are interesting. We start with a very basic idea: that dark matter is cold and completely non-interacting, or at least that its interactions (the rate at which dark matter particles scatter off each other) are too feeble to make any noticeable difference. This is a well-defined and predictive model: ΛCDM, which includes the cosmological constant (Λ) as well as the cold dark matter (CDM). We can compare astronomical observations to ΛCDM predictions to see if we’re on the right track.

At first blush, we are very much on the right track. Over and over again, new observations come in that match the predictions of ΛCDM. But there are still a few anomalies that bug us, especially on relatively small (galaxy-sized) scales.

One such anomaly is the “too big to fail” problem. The idea here is that we can use ΛCDM to make quantitative predictions concerning how many galaxies there should be with different masses. For example, the Milky Way is quite a big galaxy, and it has smaller satellites like the Magellanic Clouds. In ΛCDM we can predict how many such satellites there should be, and how massive they should be. For a long time we’ve known that the actual number of satellites we observe is quite a bit smaller than the number predicted — that’s the “missing satellites” problem. But this has a possible solution: we only observe satellite galaxies by seeing stars and gas in them, and maybe the halos of dark matter that would ordinarily support such galaxies get stripped of their stars and gas by interacting with the host galaxy. The too big to fail problem tries to sharpen the issue, by pointing out that some of the predicted galaxies are just so massive that there’s no way they could not have visible stars. Or, put another way: the Milky Way does have some satellites, as do other galaxies; but when we examine these smaller galaxies, they seem to have a lot less dark matter than the simulations would predict.

Still, any time you are concentrating on galaxies that are satellites of other galaxies, you rightly worry that complicated interactions between messy atoms and photons are getting in the way of the pristine elegance of the non-interacting dark matter. So we’d like to check that this purported problem exists even out “in the field,” with lonely galaxies far away from big monsters like the Milky Way.

A new paper claims that yes, there is a too-big-to-fail problem even for galaxies in the field.

Is there a “too big to fail” problem in the field?
Emmanouil Papastergis, Riccardo Giovanelli, Martha P. Haynes, Francesco Shankar

We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the number density of galaxies as a function of their rotational velocity, Vrot,HI (as inferred from the width of their 21cm emission line). Based on the measured velocity function we statistically connect galaxies with their host halos, via abundance matching. In a LCDM cosmology, low-velocity galaxies are expected to be hosted by halos that are significantly more massive than indicated by the measured galactic velocity; allowing lower mass halos to host ALFALFA galaxies would result in a vast overestimate of their number counts. We then seek observational verification of this predicted trend, by analyzing the kinematics of a literature sample of field dwarf galaxies. We find that galaxies with Vrot,HI<25 km/s are kinematically incompatible with their predicted LCDM host halos, in the sense that hosts are too massive to be accommodated within the measured galactic rotation curves. This issue is analogous to the "too big to fail" problem faced by the bright satellites of the Milky Way, but here it concerns extreme dwarf galaxies in the field. Consequently, solutions based on satellite-specific processes are not applicable in this context. Our result confirms the findings of previous studies based on optical survey data, and addresses a number of observational systematics present in these works. Furthermore, we point out the assumptions and uncertainties that could strongly affect our conclusions. We show that the two most important among them, namely baryonic effects on the abundances and rotation curves of halos, do not seem capable of resolving the reported discrepancy.

Here is the money plot from the paper:

[Plot from the paper: observed hydrogen velocity (vertical axis) vs. halo maximum circular velocity (horizontal axis), with the ΛCDM prediction shown as a blue line.]

The horizontal axis is the maximum circular velocity, basically telling us the mass of the halo; the vertical axis is the observed velocity of hydrogen in the galaxy. The blue line is the prediction from ΛCDM, while the dots are observed galaxies. Now, you might think that the blue line is just a very crappy fit to the data overall. But that’s okay; the points represent upper limits in the horizontal direction, so points that lie below/to the right of the curve are fine. It’s a statistical prediction: ΛCDM is predicting how many galaxies we have at each mass, even if we don’t think we can confidently measure the mass of each individual galaxy. What we see, however, is that there are a bunch of points in the bottom left corner that are above the line. ΛCDM predicts that even the smallest galaxies in this sample should still be relatively massive (have a lot of dark matter), but that’s not what we see.
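
For the curious, the “abundance matching” step mentioned in the abstract can be sketched in a few lines of Python. Everything below is made up for illustration (the real analysis is far more careful): rank halos and galaxies by how common they are, and pair them up so the cumulative number densities agree.

import numpy as np

# Toy abundance matching; the numbers are invented, not from the paper.
# In the real analysis, n_halo(>V) comes from LCDM simulations and
# n_gal(>V) from the 21cm survey.
v = np.linspace(10.0, 200.0, 400)            # velocities in km/s
n_halo = 1e-1 * (v / 50.0) ** -3.0           # made-up cumulative halo abundance
n_gal = 5e-2 * (v / 50.0) ** -2.5            # made-up cumulative galaxy abundance

def predicted_halo_velocity(v_gal):
    """Match a galaxy with rotation velocity v_gal to the halo velocity
    that has the same cumulative abundance."""
    n_target = np.interp(v_gal, v, n_gal)    # how common such galaxies are
    # invert the (decreasing) halo abundance curve to find the matching halo
    return np.interp(n_target, n_halo[::-1], v[::-1])

# In this toy model a 25 km/s dwarf gets matched to a noticeably faster
# (more massive) halo; the paper's point is that the measured kinematics
# of real dwarfs don't leave room for halos that massive.
print(predicted_halo_velocity(25.0))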

If it holds up, this result is really intriguing. ΛCDM is a nice, simple starting point for a theory of dark matter, but it’s also kind of boring. From a physicist’s point of view, it would be much more fun if dark matter particles interacted noticeably with each other. We have plenty of ideas, including some of my favorites like dark photons and dark atoms. It is very tempting to think that observed deviations from the predictions of ΛCDM are due to some interesting new physics in the dark sector.

Which is why, of course, we should be especially skeptical. Always train your doubt most strongly on those ideas that you really want to be true. Fortunately there is plenty more to be done in terms of understanding the distribution of galaxies and dark matter, so this is a very solvable problem — and a great opportunity for learning something profound about most of the matter in the universe.

Posted in arxiv, Science | 22 Comments

Particle Fever on iTunes

The documentary film Particle Fever, directed by Mark Levinson and produced by physicist David Kaplan, opened a while back and has been playing on and off in various big cities. But it has remained out of reach for many people who don’t happen to be lucky enough to live near a theater enlightened enough to play it. No more!

The movie has just been released on iTunes, so now almost everyone can watch it from the comfort of their own computer. And watch it you should — it’s a fascinating and enlightening glimpse into the world of modern particle physics, focusing on the Large Hadron Collider and the discovery of the Higgs boson. That’s not just my bias talking, either — the film is rated 95% “fresh” on RottenTomatoes.com, which represents an amazingly strong critical consensus. (Full disclosure: I’m not in it, and I had nothing to do with making it.)

Huge kudos to Mark (who went to grad school in physics before becoming a filmmaker) and David (who did a brief stint as an undergraduate film major before switching to physics) for pulling this off. It’s great for the public appreciation of science, but it’s also just an extremely enjoyable movie, no matter what your background is. Watch it with a friend!

Posted in Science and the Media | 15 Comments

Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct

I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here’s an explanatory video, an excerpt from From Eternity to Here, and slides from a talk. But I don’t think I’ve ever explained as persuasively as possible why I think it’s the right approach. So that’s what I’m going to try to do here. Although to be honest right off the bat, I’m actually going to tackle a slightly easier problem: explaining why the many-worlds approach is not completely insane, and indeed quite natural. The harder part is explaining why it actually works, which I’ll get to in another post.

Any discussion of Everettian quantum mechanics (“EQM”) comes with the baggage of pre-conceived notions. People have heard of it before, and have instinctive reactions to it, in a way that they don’t have to (for example) effective field theory. Hell, there is even an app, universe splitter, that lets you create new universes from your iPhone. (Seriously.) So we need to start by separating the silly objections to EQM from the serious worries.

The basic silly objection is that EQM postulates too many universes. In quantum mechanics, we can’t deterministically predict the outcomes of measurements. In EQM, that is dealt with by saying that every measurement outcome “happens,” but each in a different “universe” or “world.” Say we think of Schrödinger’s Cat: a sealed box inside of which we have a cat in a quantum superposition of “awake” and “asleep.” (No reason to kill the cat unnecessarily.) Textbook quantum mechanics says that opening the box and observing the cat “collapses the wave function” into one of two possible measurement outcomes, awake or asleep. Everett, by contrast, says that the universe splits in two: in one the cat is awake, and in the other the cat is asleep. Once split, the universes go their own ways, never to interact with each other again.

[Figure: a branching wave function.]

And to many people, that just seems like too much. Why, this objection goes, would you ever think of inventing a huge — perhaps infinite! — number of different universes, just to describe the simple act of quantum measurement? It might be puzzling, but it’s no reason to lose all anchor to reality.

To see why objections along these lines are wrong-headed, let’s first think about classical mechanics rather than quantum mechanics. And let’s start with one universe: some collection of particles and fields and what have you, in some particular arrangement in space. Classical mechanics describes such a universe as a point in phase space — the collection of all positions and velocities of each particle or field.

What if, for some perverse reason, we wanted to describe two copies of such a universe (perhaps with some tiny difference between them, like an awake cat rather than a sleeping one)? We would have to double the size of phase space — create a mathematical structure that is large enough to describe both universes at once. In classical mechanics, then, it’s quite a bit of work to accommodate extra universes, and you better have a good reason to justify putting in that work. (Inflationary cosmology seems to do it, by implicitly assuming that phase space is already infinitely big.)

That is not what happens in quantum mechanics. The capacity for describing multiple universes is automatically there. We don’t have to add anything.

We can state this with such confidence because of the fundamental reality of quantum mechanics: the existence of superpositions of different possible measurement outcomes. In classical mechanics, we have certain definite possible states, all of which are directly observable. It will be important for what comes later that the system we consider is microscopic, so let’s consider a spinning particle that can have spin-up or spin-down. (It is directly analogous to Schrödinger’s cat: cat=particle, awake=spin-up, asleep=spin-down.) Classically, the possible states are

“spin is up”

or

“spin is down”.

Quantum mechanics says that the state of the particle can be a superposition of both possible measurement outcomes. It’s not that we don’t know whether the spin is up or down; it’s that it’s really in a superposition of both possibilities, at least until we observe it. We can denote such a state like this:

(“spin is up” + “spin is down”).

While classical states are points in phase space, quantum states are “wave functions” that live in something called Hilbert space. Hilbert space is very big — as we will see, it has room for lots of stuff.
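
As a concrete sketch (mine, purely illustrative), here is how those states look as explicit vectors. A superposition is just one more vector in the same two-dimensional Hilbert space, and combining systems multiplies the dimensions, which is why Hilbert space has room for so much.

import numpy as np

# A single spin lives in a two-dimensional Hilbert space.
up = np.array([1, 0], dtype=complex)      # "spin is up"
down = np.array([0, 1], dtype=complex)    # "spin is down"

# A superposition is simply another allowed vector in the same space:
superposition = (up + down) / np.sqrt(2)  # ("spin is up" + "spin is down")

# Combining systems takes the tensor product, so dimensions multiply:
# two spins live in a 2 x 2 = 4 dimensional space, ten spins in a
# 2**10 = 1024 dimensional space. Nothing has to be bolted on to make
# room for the extra possibilities.
two_spins = np.kron(superposition, superposition)
print(two_spins.shape)                    # (4,)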

To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the “apparatus.”

In any formulation of quantum mechanics, the apparatus starts in a “ready” state, which is a way of saying “it hasn’t yet looked at the thing it’s going to observe” (i.e., the particle). More specifically, the apparatus is not entangled with the particle; their two states are independent of each other. So the quantum state of the particle+apparatus system starts out like this: Continue reading

Posted in Philosophy, Science | 237 Comments

Quantum Mechanics Open Course from MIT

Kids today don’t know how good they have it. Back when I was learning quantum mechanics, the process involved steps like “going to lectures.” Not only did that require physical movement from the comfort of one’s home to dilapidated lecture halls, but — get this — you actually had to be there at some pre-arranged time! Often early in the morning.

These days, all you have to do is fire up the YouTube and watch lectures on your own time. MIT has just released an entire undergraduate quantum course, lovingly titled “8.04” because that’s how MIT rolls. The prof is Allan Adams, who is generally a fantastic lecturer — so I’m suspecting these are really good even though I haven’t actually watched them all myself. Here’s the first lecture, “Introduction to Superposition.”

Allan’s approach in this video is actually based on the first two chapters of Quantum Mechanics and Experience by philosopher David Albert. I’m sure this will be very disconcerting to the philosophy-skeptics haunting the comment section of the previous post.

This is just one of many great physics courses online; I’ve previously noted Lenny Susskind’s GR course. But, being largely beyond my course-taking days myself, I haven’t really kept track. Feel free to suggest your favorites in the comments.

Posted in Science | 16 Comments