Monday morning: The Case for Multiverses
9:00: We start today as we ended yesterday: with a talk by Martin Rees, who has done quite a bit to popularize the idea of a multiverse. He wants to argue that thinking about the multiverse doesn’t represent any sort of departure from the usual way we do science.
The Big Bang model, from 1 second to today, is as uncontroversial as anything a geologist does. Easily falsifiable, but it passes all tests. How far does the domain of physical cosmology extend? We only see the universe out to the microwave background, but nothing happens out there — it seems pretty uniform, suggesting that conditions inside extend pretty far outside. Could be very far, but hard to say for sure.
Some people want to talk only about the observable universe. Those folks need aversion therapy. After all, whether a particular distant galaxy eventually becomes observable depends on details of cosmic history. There’s no sharp epistemological distinction between the observable and unobservable parts of the universe. We need to ask whether quantities characterizing our observable part of the universe are truly universal, or merely local.
So: what values of these parameters are consistent with some kind of complexity? (No need to explicitly invoke the “A-word.”) Need gravity, and the weaker the better. Need at least one very large number; in our universe it’s the ratio of gravity to electromagnetic forces between elementary particles. Also need departure from thermodynamic equilibrium. Also: matter/antimatter asymmetry, and some kind of non-trivial chemistry. (Tuning between electromagnetic and nuclear forces?) At least one star, arguably a second-generation star so that we have heavy elements. We also need a tuned cosmic expansion rate, to let the universe last long enough without being completely emptied out, and some non-zero fluctuations in density from place to place.
If the amplitude of density perturbations were much smaller, the universe would be anemic: you would have fewer first-generation stars, and perhaps no second-generation stars. If the amplitude were much larger, we would form huge black holes very early, and again we might not get stars. But ten times the observed amplitude would actually be kind of interesting. Given an amplitude of density perturbations, there’s an upper limit on the cosmological constant, so that structure can form. Again, larger perturbations would allow for a significantly larger cosmological constant — why don’t we live in such a universe? Similar arguments can be made about the ratio of dark matter to ordinary matter.
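The upper limit on the cosmological constant mentioned here is essentially Weinberg’s anthropic bound: vacuum energy must not come to dominate before density perturbations of amplitude Q have gone nonlinear and formed structure. A rough sketch of the scaling (not the precise bound):

```latex
\rho_\Lambda \;\lesssim\; Q^3\, \rho_{\rm eq}
```

where $\rho_{\rm eq}$ is the matter density at matter-radiation equality. Because the bound scales as $Q^3$, a perturbation amplitude ten times larger would permit a cosmological constant roughly a thousand times larger, which is exactly why one can ask why we don’t live in such a universe.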
Having said all that, we need a fundamental theory to get anywhere. It should either determine all constants of nature uniquely, in which case anthropic reasoning has no role, or allow ranges of parameters within the physical universe, in which case anthropics are unavoidable.
10:00: Next up, Philip Candelas to talk about probabilities in the landscape. The title he actually puts on the screen is: “Calabi-Yau Manifolds with Small Hodge Numbers, or A Des Res in the Landscape.”
A Calabi-Yau is the kind of manifold you need in string theory to compactify ten dimensions down to four, picked out among all possible manifolds by the requirement that we preserve supersymmetry. There are many examples, and you can characterize them by topological invariants as well as by continuous parameters. But there is a special corner in the space of Calabi-Yau’s where certain topological invariants (Hodge numbers) are relatively small; these seem like promising places to think about phenomenology — e.g. there are three generations of elementary particles.
Different embeddings lead to different gauge groups in four dimensions: E6, SO(10), or SU(5). Various models with three generations can be found. Putting flux on the Calabi-Yau can break the gauge group down to the Standard Model, sometimes with additional U(1)’s.
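The connection between Hodge numbers and “three generations” is topological: in the classic heterotic compactifications, the net number of generations is half the absolute value of the Euler characteristic of the Calabi-Yau. (This is the standard textbook relation, not a specific result from the talk.)

```latex
\chi = 2\left(h^{1,1} - h^{2,1}\right), \qquad
N_{\rm gen} = \tfrac{1}{2}\,|\chi| = \left|h^{1,1} - h^{2,1}\right|
```

So the corner of small Hodge numbers is precisely where $|h^{1,1} - h^{2,1}| = 3$ is easiest to arrange.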
11:15: Robert Brandenberger steps up to talk about probability measures and initial data for inflation. The title was assigned by the organizers, and he claims that he got scared by it and started having sleepless nights, so he changed it: “Initial Conditions for Early Universe Scenarios.” He wants to consider alternatives to inflation.
Inflation is great; solves lots of problems, and provides a mechanism for density perturbations. But it’s not the only way to produce primordial perturbations, and there are problems, such as the need to consider modes with wavelengths shorter than the Planck length.
One alternative is string gas cosmology. Predicts a slight red tilt for density perturbations, but a slight blue tilt for gravitational waves. String theory has a maximum temperature (the Hagedorn temperature); when you squeeze strings to sufficiently high density you enter a dual phase where the temperature goes down. We could loiter in a metastable phase at constant temperature before the universe began expanding. Might explain why only three dimensions become large.
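The “dual phase” refers to T-duality: a closed string on a circle of radius R has both momentum modes and winding modes, and the spectrum is unchanged if you swap them while inverting the radius. A minimal numerical sketch (schematic spectrum, oscillator contributions ignored; `alpha_p` is the string scale $\alpha'$, set to 1 for illustration):

```python
# T-duality sketch: closed-string mass-squared on a circle of radius R,
# keeping only momentum (n) and winding (w) contributions:
#   M^2 = (n/R)^2 + (w*R/alpha')^2
# The spectrum is invariant under R -> alpha'/R with n <-> w.

alpha_p = 1.0  # string scale alpha', set to 1 for illustration


def mass_squared(n, w, R):
    """Schematic closed-string M^2 (oscillators omitted)."""
    return (n / R) ** 2 + (w * R / alpha_p) ** 2


R = 3.0
R_dual = alpha_p / R

# A state with momentum n and winding w at radius R has the same mass
# as the state with n and w swapped at the dual radius alpha'/R:
print(mass_squared(2, 5, R) == mass_squared(5, 2, R_dual))  # True
```

This symmetry is why squeezing the string gas below the self-dual radius looks like expansion in the dual variables, and why the temperature turns around near the Hagedorn temperature instead of diverging.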
Another alternative is a matter bounce, in which the universe first contracts and then bounces back to expansion. Predicts scale-free perturbations for both density and gravitational waves, but the bispectrum can be large.
The final idea is the ekpyrotic/cyclic scenario. Three large dimensions of space appear to bounce, while branes are crashing together in extra dimensions. Predictions are hard to make because we’re not really sure how to resolve the singularity — but a scale-invariant spectrum is conceivable.
Now about initial conditions. It can be hard to get inflation to start, but it becomes easier if we start in a false vacuum and then tunnel to inflation. For the string gas, thermal equilibrium looks like a local attractor, so that’s also okay. For the matter bounce, on the other hand, the assumed background is unstable — so that’s a problem. Maybe you can do it if the bounce is generated by gravity rather than by matter. Likewise for the ekpyrotic universe. Cyclic universe is a bit better; essentially we are currently going through “inflation” at a very low scale.
Brian Greene begins by breaking the overhead projector, so he has to speak without slides. “Can the multiverse be tested?” is an unanswerable question as posed, because it depends on what kind of multiverse theory you actually have. E.g. if we live in some specific part of the string landscape, we could in principle probe details of the compactification manifold, to figure out which one it is. If there are bubble universes, we might observe collisions between bubbles. The landscape might predict relationships between observable quantities. Or, if you had a microphysical theory that you had tested well enough to accept it, and it predicted a multiverse, that would be a sensible conclusion to accept. On the other hand: eternal inflation undermines many of the successes of inflation, because it removes the possibility of unique predictions. Need to have a measure. In quantum mechanics, a measure can emerge directly from a theory; maybe inflation will eventually get to that point.
Jean-Philippe Uzan brings up the example of general relativity — why do we pick the Einstein-Hilbert action over any others? We take it as a simple choice, but we can also look for deviations, and constrain them experimentally. When do we simply accept a theory, vs. putting great effort into looking for modifications of it?
Andrei Linde laments that he is playing the role of Big Brother — watching you in case you claim to have a better theory than inflation. He studied string gas cosmology some time ago, but found it to be somewhat ill-defined. We don’t really know enough about it to say what it predicts. The bouncing cosmologies studied within Horava-Lifshitz gravity are very new, and haven’t been studied carefully. He has looked carefully at ekpyrotic models, but they are also a moving target. It seems difficult for perturbations to travel smoothly through a singularity.
Monday afternoon: Fine Tuning and Anthropic Arguments
2:00: We kick off the afternoon with Sir Roger Penrose talking about entropy issues for cosmology. Penrose has enough clout that they have brought out the overhead projector so he can show his famous hand-drawn slides.
The Second Law of Thermodynamics is mysterious. Part of it is perfectly clear: entropy increases because there are more high-entropy states than low-entropy ones. The mystery is why entropy was lower in the past — all the way back to the Big Bang. Why did the Bang have such a low entropy? What was the nature of the constraint on those early conditions? Things were very smooth. Sounds high-entropy, but not when you take gravity into account. In a fixed box, the highest-entropy state will often be a black hole (maximally non-uniform).
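Penrose’s point can be made quantitative with the Bekenstein-Hawking formula, $S = 4\pi G M^2/(\hbar c)$ in units of $k_B$. A back-of-envelope sketch (the $\sim 10^{53}$ kg figure for the mass of matter in the observable universe is a rough assumption for illustration):

```python
# Bekenstein-Hawking entropy, in units of Boltzmann's constant, of a
# Schwarzschild black hole of a given mass: S = 4*pi*G*M^2 / (hbar*c).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

def bh_entropy(mass_kg):
    """Black-hole entropy in units of k_B."""
    return 4 * math.pi * G * mass_kg**2 / (hbar * c)

# Rough mass of matter in the observable universe (assumed ~1e53 kg):
M_universe = 1e53
print(f"{bh_entropy(M_universe):.1e}")  # ~1e122 in units of k_B
```

Compared with the roughly $10^{89}\,k_B$ carried by CMB photons, collapsing everything into one black hole wins by over thirty orders of magnitude, which is why the smooth early universe is such a spectacularly low-entropy state once gravity is included.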
Inflation purportedly explains the smooth conditions of the early universe. However, there are many more non-inflationary initial conditions corresponding to our current universe than inflationary ones. Anthropic principle doesn’t really help; it’s much easier to make a tiny region of universe suitable for life than to make a patch ready to undergo inflation.
The Weyl Curvature Hypothesis conjectures an explicit time-asymmetry in the laws of nature: singularities in the past are smooth (have zero Weyl curvature), while singularities in the future can be arbitrarily chaotic. That would determine the arrow of time. After all, at high temperatures particles are essentially massless (compared to their energies), and therefore nearly conformally invariant. Maybe we can extend that conformal structure to before the Big Bang. In fact, we could imagine a conformal cyclic cosmology, matching the smooth far future onto the smooth early past. We can even think of observational consequences: perhaps calculate density perturbations induced by the fact that the “late” empty universe is not precisely de Sitter. David Spergel at Princeton has a student looking for circles in the CMB, which might be predicted by this model.
Despite all the talks that have alluded to measures on the multiverse, Vilenkin is the first to talk very specifically about different proposals and their problems. The problem is that spacetime (in a typical multiverse model) is infinitely big. One generally chooses some cutoff to define a finite region of spacetime, calculates ratios of different conditions in that finite region, then lets the cutoff go to infinity. That only makes sense if the result is not sensitive to the specific cutoff; but usually it is sensitive. The only measure that works is the scale factor cutoff; but Vilenkin admits that other people (even within the room) have very different ideas. Of course the right answer should be derived from a deeper theory, not simply postulated. He has hopes for the holographic measure.
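A toy version of the cutoff-sensitivity (an illustration of the general phenomenon, not of any specific cosmological measure): ask what fraction of the integers are even. Under the natural ordering the cutoff procedure gives 1/2, but under a different enumeration of the very same infinite set it converges to 1/3:

```python
# Toy measure problem: in an infinite set, "what fraction satisfies
# property P?" depends on how the set is ordered before cutting off.

def fraction_even(sequence, cutoff):
    """Fraction of even numbers among the first `cutoff` elements."""
    sample = sequence[:cutoff]
    return sum(1 for n in sample if n % 2 == 0) / cutoff

N = 60000
natural = list(range(N))          # 0, 1, 2, 3, ... (evens and odds alternate)

# Alternative enumeration of the same integers: two odds for every even.
reordered = []
odd, even = 1, 0
while len(reordered) < N:
    reordered += [odd, odd + 2, even]
    odd += 4
    even += 2

print(fraction_even(natural, N))    # 0.5
print(fraction_even(reordered, N))  # ~0.333
```

Every multiverse measure faces the analogous ambiguity: the ratio of observers of one type to another depends on how the infinite spacetime volume is regulated, which is why cutoff-independence is the key demand.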
Hall is the first particle phenomenologist we’ve heard at the conference. He wants to ask whether we can use the multiverse to make predictions about particle properties. Evidence might be quantitative (mass of the electron) or structural (why three gauge forces?). An old-school particle physicist would try to invent new symmetries; a new-school multiverse physicist calculates probability distributions and places anthropic cutoffs. If we lie near an anthropic boundary, that might be taken as evidence for a multiverse. Low-scale supersymmetry is a possible explanation for the hierarchy between the weak scale and the Planck scale; however, there is also an anthropic bound which really isn’t all that different. And note that we certainly could have — one might say “should have” — already detected low-scale supersymmetry if it existed. Similarly, there is an anthropic boundary in Higgs/top-quark parameter space; being pushed near that boundary suggests that we will find the Higgs near 112 GeV and the top near 177 GeV. If the Higgs is there, that’s a quantitative prediction good at the 5% level. If we then don’t find supersymmetry, the multiverse would be the only known explanation. Many aspects of the multiverse theory can’t be tested, but that’s true for any theory. The right question is, “Can we obtain sufficient evidence to be convinced?”
Wallace is a philosopher who was asked to talk about the Everett interpretation of quantum mechanics, and possible connections to the multiverse idea. But he’s not sure what those connections are, so he apologizes for being a bit vague. The Everett (a/k/a many-worlds) interpretation is simply the idea that we should take unitary quantum mechanics (evolution obeying the Schrödinger equation) as literally true and complete — no wave-function collapse. It implies a kind of multiverse: many mutually decohering histories. A bit different from the kind of spatial multiverse discussed in cosmology, but there are some similarities. Encouragingly, we used to be very confused about how to calculate probabilities within many-worlds, but we’ve lately made great strides toward figuring that out. We might be optimistic that the right measure in the cosmological multiverse might simply be the quantum-mechanical measure, properly applied.
Barrow points out that many apparent fine-tunings are ultimately explained by theory. Most particles in nature are identical; that’s explained by quantum field theory. Inertial mass equals gravitational mass; explained by general relativity. If there was any random element in the early universe, anthropic considerations are needed. Barrow and Tipler put an anthropic upper bound on the cosmological constant back in 1986. Maybe the vacuum energy is changing with time, as Sorkin suggested.
Next up, Alex Pruss discusses the different meanings of “fine tuning” between physicists and philosophers of religion. Physicists are thinking of parameters taking on unlikely values within some presumed distribution; philosophers of religion are thinking about parameters that are tuned to allow for the existence of life. Philosophers of religion, of course, don’t worry about testability. But the fact that so many physicists at this conference keep insisting that it’s okay if some aspects of the multiverse are not testable is evidence that there must be people who don’t think it’s okay. The move by the scientists is to say that only specific models are testable. That’s a respectable move, and one that philosophers of religion might want to emulate. Is there some specific theory of what God is trying to maximize? Simplicity of laws, for example? On the other hand, philosophers of religion do something worthwhile, in that they consider a wider variety of allowed explanations.
Robin Collins digs into a foundational question: what does probability mean? One necessary criterion: a notion of probability must conform to rational expectation. In cosmology, this is hard to get right. Consider the flatness problem: we look at the density parameter at early times, and ask why it’s so close to one. But why look at that, rather than some crazy function of that parameter? We should also think carefully about the purported feature of philosophy of religion that it considers a wider variety of explanatory schemes. Can we really separate out these schemes from the search by ordinary science for simple and robust explanations of observed phenomena? Inflation is not a panacea — you might want to go to the “unrestricted multiverse,” where all possibilities are real. The problem is that it doesn’t explain the world we see, because it “explains” anything you could possibly see. Even a multiverse needs restrictions.
Monday evening there was a scheduled talk by Paul Davies, with a response by George Ellis, on “Cosmology, Ultimate Causation, and Multiverses.” But I’m pretty sure that an all-powerful and all-beneficent deity would never have intended jet-lagged scientists to attend talks that started later than 9:00 p.m., so that’s it for today.