Philosophy and Cosmology: Slow Live-Blogging

Greetings from Oxford, a charming little town across the Atlantic with its very own university. It’s in the United Kingdom, a small island nation recognized for its steak and kidney pie and other contributions to world cuisine. What you may not know is that the UK has also produced quite a few influential philosophers and cosmologists, making it an ideal venue for a small conference that aims to bring these two groups together.

The proximate reason for this particular conference is George Ellis’s 70th birthday. Ellis is of course a well-known general relativist, cosmologist, and author. The birthday conference for a respected scientist is a well-established tradition, but rather than simply gathering all of his friends and collaborators for a big party, Ellis wanted a focused, interdisciplinary meeting that might actually be useful. It’s to his credit that the organizers invited as many multiverse-boosters as multiverse-skeptics. (I would go for the party, myself.)

George is currently very interested in, and concerned by, the popularity of the multiverse idea in modern cosmology. He’s worried, as many others are (not me, especially), that the idea of a multiverse is intrinsically untestable, and represents a break with the standard idea of what constitutes “science.” So he and the organizing committee have asked a collection of scientists and philosophers with very different perspectives on the idea to come together and hash things out.

It appears as if there is working wireless here in the conference room, so I’ll make some attempt to blog very briefly about what the different speakers are saying. If all goes well, I’ll be updating this post over the next three days. I won’t always agree with everyone, of course, but I’ll try to fairly represent what they are saying.

Saturday night:

Like any good British undertaking, we begin in the pub. I introduce some of the philosophers to Andrei Linde, who entertains us by giving an argument for solipsism based on the Wheeler-DeWitt equation. The man can command a room, that’s all I’m saying.

(If you must know the argument: the ordinary Schrödinger equation tells us that the rate of change of the wave function is given by the energy. But for a closed universe in general relativity, the total energy is exactly zero — so there is no time evolution; nothing happens. But you can divide the universe into “you” and “the rest.” Your own energy is not zero, so the energy of the rest of the universe is not zero either (it’s minus yours), and therefore it obeys the standard Schrödinger equation with ordinary time evolution. So the only way to make the universe real is to consider yourself separate from it.)
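
For anyone who wants the argument in symbols, here is a schematic version — my own rendering of the pub argument, with the split into “observer” and “rest of the universe” put in by hand:

```latex
\begin{align*}
  i\hbar\,\partial_t \Psi &= \hat{H}\,\Psi
    && \text{ordinary Schr\"odinger evolution} \\
  \hat{H}_{\rm total}\,\Psi_{\rm universe} &= 0
    && \text{Wheeler--DeWitt constraint: nothing evolves} \\
  \hat{H}_{\rm total} &= \hat{H}_{\rm obs} + \hat{H}_{\rm rest},
    \quad \hat{H}_{\rm obs} \neq 0
    && \text{split off an observer, so } \hat{H}_{\rm rest} \neq 0 \\
  i\hbar\,\partial_t \psi_{\rm rest} &= \hat{H}_{\rm rest}\,\psi_{\rm rest}
    && \text{the ``rest'' evolves relative to the observer}
\end{align*}
```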

Sunday morning: Cosmology

9:00: Ellis gives the opening remarks. Cosmology is in a fantastic data-rich era, but it is also coming up against the limits of measurement. In the quest for ever deeper explanation, increasingly speculative proposals are being made, which are sometimes untestable even in principle. The multiverse is the most obvious example.

Question: are these proposals science? Or do they attempt to change the definition of what “science” is? Does the search for explanatory power trump testability?

The questions aren’t only relevant to the multiverse. We need to understand the dividing line between science and non-science to properly classify standard cosmology, inflation, natural selection, Intelligent Design, astrology, parapsychology. Which are science?

9:30: Joe Silk gives an introduction to the state of cosmology today. Just to remind us of where we really are, he concentrates on the data-driven parts of the field: dark matter, primordial nucleosynthesis, background radiation, large-scale structure, dark energy, etc.

Silk’s expertise is in galaxy formation, so he naturally spends a good amount of time on that. Theory and numerical simulations are gradually making progress on this tough problem. One outstanding puzzle: why are spiral galaxies so thin? Probably improved simulations will crack this before too long.

10:30: Andrei Linde talks about inflation and the multiverse. The story is laden with irony: inflation was invented to help explain why the universe looks uniform, but taking it seriously leads you to eternal inflation, in which space on extremely large (unobservable) scales is highly non-uniform — the multiverse. The mechanism underlying eternal inflation is just the same quantum fluctuations that give rise to the density fluctuations observed in large-scale structure and the microwave background. The fluctuations we see are small, but at earlier times (and therefore on larger scales) they could easily have been very large — large enough to give rise to different “pocket universes” with different local laws of physics.

Linde represents the strong pro-multiverse view: “An enormously large number of possible types of compactification which exist e.g. in the theory of superstrings should be considered a virtue.” He said that in 1986, and continues to believe it. String theorists were only forced to take all these compactifications seriously by the intervention of a surprising experimental result: the acceleration of the universe, which implied that there was no magic formula that set the vacuum energy exactly to zero. Combining the string theory landscape with eternal inflation gives life to the multiverse, which among other things offers an anthropic solution to the cosmological constant problem.

Still, there are issues, especially the measure problem: how do you compare different quantities when they’re all infinitely big? (E.g., the numbers of different kinds of observers in the multiverse.) Linde doesn’t think any of the currently proposed measures are completely satisfactory, including the ones he’s invented himself. Boltzmann brains are a big problem.

Another problem is what we mean by “us,” when we’re trying to predict “what observers like us are likely to see.” Are we talking about carbon-based life, or information-processing computers? Help, philosophers!

Linde thinks that the multiverse shows tendencies, although not cut-and-dried predictions. It prefers a cosmological constant to quintessence, and increases the probability that axions rather than WIMPs are the dark matter. Findings to the contrary would be blows to the multiverse idea. Most strongly, without extreme fine-tuning, the multiverse would not be able to simultaneously explain large tensor modes in the CMB and low-energy supersymmetry.

12:00: Raphael Bousso talks about the multiverse in string theory. Note that “multiverse” isn’t really an accurate description; we’re talking about connected regions of space with different low-energy excitations, not some metaphysical collection of completely distinct universes. The multiverse is not a theory — need some specific underlying dynamics (e.g. string theory) to make any predictions. It’s those theories that are tested, not “the multiverse.” Predictions will be statistical, but that’s okay; everyone’s happy with statistical mechanics. “Even if you were pretty neurotic about it, you could only throw a die a finite number of times.” We do need to assume that we are in some sense typical observers.

The cosmological constant problem (why is the vacuum energy so small?) is an obvious candidate for anthropic explanation. String theory is unique at a deep-down level, but features jillions of possible compactifications down to four dimensions, each with different low-energy parameters. Challenges to making predictions: landscape statistics (how many vacua of each kind are there?), cosmological dynamics (how does the universe evolve?), and the measure problem (how do we count observers?). Each is hard!

For the cosmological constant, the distribution of values within the string landscape is actually relatively understandable: there is basically a uniform distribution of possible vacuum energies between minus the Planck scale and plus the Planck scale. Make the right vacua via eternal inflation, which populates the landscape. Our universe decayed from a previous vacuum, an event that must release enough energy to produce the hot Big Bang. That’s a beneficial feature of the multi-dimensional string landscape: “nearby” vacua can have enormously different vacuum energies.
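
To get a feel for how a statistical prediction would work here, a toy Monte Carlo — my own illustration, with a wildly exaggerated anthropic window; nothing below comes from Bousso’s talk. Draw vacuum energies uniformly over a Planck-scale range, keep only those small enough to be compatible with structure formation, and see what a typical survivor looks like:

```python
import random

# Toy model of anthropic selection on the landscape -- an illustration only.
# Vacuum energies in Planck units, drawn uniformly from [-1, +1].
PLANCK_RANGE = 1.0
# Hypothetical "structure can form" window, hugely exaggerated so the toy
# model produces survivors; the realistic window is ~10^-120 in these units.
ANTHROPIC_WINDOW = 1e-2

samples = [random.uniform(-PLANCK_RANGE, PLANCK_RANGE) for _ in range(1_000_000)]
survivors = [v for v in samples if abs(v) < ANTHROPIC_WINDOW]

print(f"fraction of vacua passing the cut: {len(survivors) / len(samples):.2e}")

# Within the window the prior is still roughly flat, so a typical "observer"
# sees a vacuum energy comparable to the anthropic bound: tiny compared to
# the Planck scale, but with no reason to be exactly zero.
mean_abs = sum(abs(v) for v in survivors) / len(survivors)
print(f"typical |vacuum energy| among survivors: {mean_abs:.2e}")
```

The real calculation replaces the made-up window with the physics of structure formation and the naive counting with a measure, but the shape of the logic is the same: a flat prior plus conditionalization on observers predicts a small but nonzero vacuum energy.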

The measure problem is trickier. In the multiverse, any interesting phenomenon happens an infinite number of times, so we need some way to regularize those infinities. This is a problem for eternal inflation generally, not only for the string landscape. Bousso’s favorite solution is the causal patch measure, which only counts events that happen in the past light cone of a particular event, rather than throughout a spacelike surface. In that measure, most observers see a vacuum energy that becomes dynamically important at roughly the epoch when they live — that’s compatible with what we see, and directly addresses the coincidence problem.
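
In rough symbols, my gloss on that last claim (in units where $c=1$ and $\Lambda$ has dimensions of inverse time squared):

```latex
\[
  t_\Lambda \;\equiv\; \sqrt{3/\Lambda} \;\sim\; t_{\rm obs}
  \qquad\Longleftrightarrow\qquad
  \rho_\Lambda \;\sim\; \rho_{\rm matter}(t_{\rm obs})\,,
\]
```

i.e., observers counted within a causal patch typically find the vacuum energy becoming comparable to the matter density right around the time they exist — which is just the “why now?” coincidence we actually observe.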

Sunday afternoon: Philosophers’ turn

2:00: John Norton talks about the “Bayesian failure” of cosmology and inductive inference. (He admits off the bat that it’s kind of terrifying to have all these cosmologists in the audience.) Basic idea: the Bayesian analysis that cosmologists use all the time is not the right tool. Instead, we should be using “fragments of inductive logics.”

The “Surprising Analysis”: assuming that prior theory is neutral with respect to some feature (e.g. the value of the cosmological constant), we observe a surprising value, and then try to construct a framework to explain it (e.g. the multiverse). This fits in well with standard Bayesian ideas. But that should worry you! What is really the prior probability for observing some quantity? In particular, what if our current theory were not true — would we still be surprised?

We shouldn’t blithely assume that the logic of physical chances (probabilities) is the logic of all analysis. The problem is that this framework has trouble dealing with “neutral evidence” — almost everything is taken as either favoring or disfavoring the hypothesis. We should be talking about whether or not a piece of evidence qualifies as support, not simply calculating probabilities.

The disaster that befell Bayesianism was to cast it in terms of subjective degrees of belief, rather than support. A prior probability distribution is pure opinion. But your choice of that prior can dramatically affect how we interpret particular pieces of evidence.

Example: the Doomsday argument — if we are typical, the universe (or the human race, etc.) will probably not last considerably longer than it already has (or we wouldn’t be typical). All the work in that argument comes from assuming that observers are sampled uniformly. But the fact that 60 billion people have lived so far isn’t really evidence that 100 trillion people won’t eventually live; it’s simply neutral.
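
For reference, the standard Bayesian form of the argument — all the work, as Norton says, is done by the first line:

```latex
\begin{align*}
  P(n \mid N) &= \frac{1}{N}, \quad n \le N
    && \text{self-sampling: your birth rank $n$ is uniform among all $N$ humans ever} \\
  P(N \mid n) &\propto \frac{P(N)}{N}, \quad N \ge n
    && \text{Bayes' theorem then penalizes large $N$ by a factor $1/N$}
\end{align*}
```

With $n \approx 6\times 10^{10}$, a future with $10^{14}$ people is suppressed by roughly three orders of magnitude relative to one only a few times larger than the present total (for comparable priors) — unless, as Norton insists, the value of $n$ is treated as neutral evidence rather than as a random draw.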

Heretical punchline: cosmic parameters can’t be judged as “improbable,” so long as they’re consistent with theory and observation.

[David Wallace, during questions: Do you really mean to say that if we observed the stars in the sky spelling out the message “Oxford is better than Cambridge,” all we could say is “Well, it’s consistent with the laws of physics, so we can’t really conclude anything from that”?]

2:45: Simon Saunders talks about probability and anthropic reasoning in a multiverse. He takes up similar issues to those in the last talk, but he’ll be defending a Bayesian analysis.

Sometimes we think of probability objectively — a true physical structure — and sometimes subjectively — a reflection of the credence we give to some claim.

Problems with anthropic arguments involve: linguistics (what is included in “observer” and “observed”?), theory (how do we calculate the probability of finding certain evidence given a particular theory?), and realism (why worry about what is observed?). On the last point: do we conditionalize on the existence of human life, on the existence of some observers, or simply on the existence of conditions compatible with observers? Saunders argues for the third: all we care about are physical conditions, not whether or not observers actually come into existence. Call this “taming” the anthropic principle.

Aside on vacuum energy: are we really sure it’s finely-tuned? Condensed-matter analogues give a different set of expectations — maybe the vacuum energy just adjusts to zero after phase transitions.

For branches of the wavefunction, there exist formal axioms (e.g. Deutsch-Wallace) for evaluating the preferences of a rational observer which recover the conventional understanding of Copenhagen probabilities even in a many-worlds interpretation. For a classical multiverse, the argument is remarkably similar; for a fully quantum inflationary multiverse, it’s less clear.

4:00: Panel discussion with Alex Vilenkin, Wayne Myrvold, and Christopher Smeenk. It’s a series of short talks more than an actual discussion. Vilenkin goes first, and discusses — wait for it — calculating probabilities in the multiverse. The technical manifestation of the assumption that we are typical observers is the self-sampling assumption: assume we are chosen randomly from within some reference class of observers. The probability that we observe something is just the fraction of observers within this class that observe it. But how do we choose the class? Vilenkin argues that we can choose the set of all observers with identical information content. (Really all information: not just “I am human” but also “My name is Alex,” etc.) That sounds like a very narrow class, but in a really big multiverse there will be an infinite number of members of each such class. (This doesn’t solve the measure problem — still need to regulate those infinities.) In fact we should use the Principle of Mediocrity: assume we are typical in any class to which we belong, unless there is evidence to the contrary.
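
In symbols, the self-sampling prescription looks something like this (schematically; both counts are infinite in an eternally inflating multiverse, which is why a regulator is still needed):

```latex
\[
  P(\text{I observe } E) \;=\;
  \frac{N_{\mathcal{C}}(\text{observers who observe } E)}
       {N_{\mathcal{C}}(\text{all observers})}\,,
  \qquad
  \mathcal{C} = \text{reference class of observers with my information content.}
\]
```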

Myrvold is next up, and he chooses to respond mostly to John Norton’s talk. Most of the time, when we’re evaluating theories in light of evidence, we don’t need to be too fancy about our background analytical framework. The multiverse seems like an exception. More generally, theories with statistical predictions are tricky to evaluate. If you toss a coin 1000 times, any particular outcome is highly improbable, so you have to choose some statistic ahead of time, e.g. the fraction of heads. Cosmological parameters might be an example of where we don’t know how to put sensible prior probabilities on different outcomes.
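
Myrvold’s coin-toss point, as a quick numerical check (mine, not from the talk; fair coin assumed):

```python
from math import comb

N = 1000

# Any one particular sequence of 1000 tosses is absurdly improbable.
p_sequence = 0.5 ** N  # about 9.3e-302

# But a statistic chosen in advance -- the number of heads -- is very likely
# to land close to its expected value.
p_heads_near_half = sum(comb(N, k) for k in range(450, 551)) * 0.5 ** N

print(f"P(this exact sequence)   = {p_sequence:.3e}")
print(f"P(450 <= heads <= 550)   = {p_heads_near_half:.4f}")  # roughly 0.999
```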

Smeenk wants to talk about how big a problem “fine-tuning” really is. Sometimes it is more of one than others: when some parameter (e.g. the vacuum energy) is not just pulled from a hat, but gets various contributions about which we have sensible expectations, it’s giving up too much to simply take any observed value as neutral with respect to theory evaluation. He’s reacting against Norton’s prescription a bit. Nevertheless, we should admit that choosing measures/probability distributions in cosmology is a very different game than what we do in statistical mechanics, if only because we don’t actually have more than one member of the ensemble in front of us.

Sunday evening: “Ultimate Explanation”

After dinner we reconvene for a talk and a response. The talk is by Timothy O’Connor, on “Ultimate Explanation: Reforging Natural Philosophy.” He reminds us that Newton insisted that he did not “feign hypotheses” — he concentrated on models that he claimed were deduced from the phenomena, and thought that any deeper hypothetical explanations had “no place in experimental philosophy.” The implication is that Newton would not have approved of the multiverse.

O’Connor says that a fully complete, “ultimate” explanation cannot possibly be attained through science. Nevertheless, it’s a perfectly respectable goal, as part of the larger project of natural philosophy.

He defines an “ultimate explanation” as something that involves no brute givens — “such that one could not intelligibly ask for anything more.” That’s not attainable by science. If nothing else, “the most fundamental fact of existence itself” will remain unexplained, even if we knew the theory of everything and the wave function of the universe. Alternatively, if we imagine “plenitude” — everything possible exists — it would still be possible to imagine something less than that, so a contingent explanation is still required.

We are led to step outside science and consider the idea of an ultimately necessary being — something whose existence is no more optional than that of mathematical or logical truths. We could endorse such an idea if it provided explanations without generating insoluble puzzles of its own, and if we thought we had considered an exhaustive list of alternatives, all of which fell short. Spinoza and Leibniz are invoked.

Note the peculiar logic: if a necessary being does not exist, it was not simply an optional choice; it must necessarily not exist. (Because if a necessary being is conceivable, it must necessarily exist. Get it?)

Punchline: Science is independent of any/most metaphysical claim. But that means it can’t possibly “explain” everything; there must be metaphysical principles/assumptions. Some of these might be part of the ultimate explanation of the actual world in which we live.

The response comes from Sir Martin Rees. He opens by quoting John Polkinghorne: “Your average quantum mechanic is no more philosophical than your average motor mechanic.” But maybe cosmologists are a bit more sympathetic. He then recalls that Dennis Sciama — who was the thesis advisor of George Ellis, Stephen Hawking, and Rees himself — was committed as a young scientist to the Steady State model of cosmology, primarily for philosophical reasons. He did give up on it when confronted with data from the microwave background, but it was an anguished abandonment.

Searching for “explanations,” we should recognize that different fields of science have autonomous explanatory frameworks. People who study fluid mechanics would like to understand turbulence, and they don’t need to appeal to the existence of atoms to do so — atoms do exist, and one can derive the equations of fluid mechanics from them, but their existence sheds no light whatsoever on the phenomenon of turbulence.

Note also that, even if there is an ultimate explanation in the theory-of-everything sense, it may simply be too difficult for our limited human minds to understand. “Many of these problems may have to await the post-human era for their solution.”

Continued at Day Two.

29 thoughts on “Philosophy and Cosmology: Slow Live-Blogging”

  1. “He then recalls that Dennis Sciama — who was the thesis advisor of George Ellis, Stephen Hawking, and Rees himself.” Add to that list James Binney, Brandon Carter, John Barrow, David Deutsch, and Gary Gibbons. He had at least 32 doctoral students:

    http://genealogy.math.ndsu.nodak.edu/id.php?id=72653

    I got there via Peter Coles–John Barrow–Dennis Sciama. Sciama’s advisor was Paul Dirac, by the way.

    IIRC, Sciama was never promoted to the rank of Professor in the UK.

  2. I just read Norton’s writeup of his talk, and as a hardcore subjective Bayesian, I found it very unconvincing. Some of his key points seemed to have the form of reductio ad absurdum arguments, but the “absurda” seemed to me obviously and uncontroversially true.

    His first key argument is against the idea that states of knowledge can be represented as probabilities. The argument is that a certain kind of state of knowledge (or rather ignorance) called “evidential neutrality” can’t be self-consistently represented as a set of probabilities for a bunch of propositions. I think that’s probably true, but to me that’s an argument that evidential neutrality isn’t a useful concept, not that probabilities are ill-defined.

