Beyond Falsifiability

I have a backlog of fun papers that I haven’t yet talked about on the blog, so I’m going to try to work through them in reverse chronological order. I just came out with a philosophically-oriented paper on the thorny issue of the scientific status of multiverse cosmological models:

Beyond Falsifiability: Normal Science in a Multiverse
Sean M. Carroll

Cosmological models that invoke a multiverse – a collection of unobservable regions of space where conditions are very different from the region around us – are controversial, on the grounds that unobservable phenomena shouldn’t play a crucial role in legitimate scientific theories. I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.

This is well-trodden ground, of course. We’re talking about the cosmological multiverse, not its very different relative the Many-Worlds interpretation of quantum mechanics. It’s not the best name, as the idea is that there is only one “universe,” in the sense of a connected region of space, but of course in an expanding universe there will be a horizon past which it is impossible to see. If conditions in far-away unobservable regions are very different from conditions nearby, we call the collection of all such regions “the multiverse.”

There are legitimate scientific puzzles raised by the multiverse idea, but there are also fake problems. Among the fakes is the idea that “the multiverse isn’t science because it’s unobservable and therefore unfalsifiable.” I’ve written about this before, but shockingly not everyone immediately agreed with everything I have said.

Back in 2014 the Edge Annual Question was “What Scientific Idea Is Ready for Retirement?”, and I answered Falsifiability. The idea of falsifiability, pioneered by philosopher Karl Popper and adopted as a bumper-sticker slogan by some working scientists, is that a theory only counts as “science” if we can envision an experiment that could potentially return an answer utterly incompatible with the theory, thereby consigning it to the scientific dustbin. Popper’s idea was to rule out so-called theories that were so fuzzy and ill-defined that they were compatible with literally anything.

As I explained in my short write-up, it’s not so much that falsifiability is completely wrong-headed, it’s just not quite up to the difficult task of precisely demarcating the line between science and non-science. This is well-recognized by philosophers; in my paper I quote Alex Broadbent as saying

It is remarkable and interesting that Popper remains extremely popular among natural scientists, despite almost universal agreement among philosophers that – notwithstanding his ingenuity and philosophical prowess – his central claims are false.

If we care about accurately characterizing the practice and principles of science, we need to do a little better — which philosophers work hard to do, while some physicists can’t be bothered. (I’m not blaming Popper himself here, nor even trying to carefully figure out what precisely he had in mind — the point is that a certain cartoonish version of his views has been elevated to the status of a sacred principle, and that’s a mistake.)

After my short piece came out, George Ellis and Joe Silk wrote an editorial in Nature, arguing that theories like the multiverse served to undermine the integrity of physics, which needs to be defended from attack. They suggested that people like me think that “elegance [as opposed to data] should suffice,” that sufficiently elegant theories “need not be tested experimentally,” and that I wanted to “to weaken the testability requirement for fundamental physics.” All of which is, of course, thoroughly false.

Nobody argues that elegance should suffice — indeed, I explicitly emphasized the importance of empirical testing in my very short piece. And I’m not suggesting that we “weaken” anything at all — I’m suggesting that we physicists treat the philosophy of science with the intellectual care that it deserves. The point is not that falsifiability used to be the right criterion for demarcating science from non-science, and now we want to change it; the point is that it never was, and we should be more honest about how science is practiced.

Another target of Ellis and Silk’s ire was Richard Dawid, a string theorist turned philosopher, who wrote a provocative book called String Theory and the Scientific Method. While I don’t necessarily agree with Dawid about everything, he does make some very sensible points. Unfortunately he coins the term “non-empirical theory confirmation,” which was an extremely bad marketing strategy. It sounds like Dawid is saying that we can confirm theories (in the sense of demonstrating that they are true) without using any empirical data, but he’s not saying that at all. Philosophers use “confirmation” in a much weaker sense than that of ordinary language, to refer to any considerations that could increase our credence in a theory. Of course there are some non-empirical ways that our credence in a theory could change; we could suddenly realize that it explains more than we expected, for example. But we can’t simply declare a theory to be “correct” on such grounds, nor was Dawid suggesting that we could.

In 2015 Dawid organized a conference on “Why Trust a Theory?” to discuss some of these issues, which I was unfortunately not able to attend. Now he is putting together a volume of essays, both from people who were at the conference and some additional contributors; it’s for that volume that this current essay was written. You can find other interesting contributions on the arXiv, for example from Joe Polchinski, Eva Silverstein, and Carlo Rovelli.

Hopefully with this longer format, the message I am trying to convey will be less amenable to misconstrual. Nobody is trying to change the rules of science; we are just trying to state them accurately. The multiverse is scientific in an utterly boring, conventional way: it makes definite statements about how things are, it has explanatory power for phenomena we do observe empirically, and our credence in it can go up or down on the basis of both observations and improvements in our theoretical understanding. Most importantly, it might be true, even if it might be difficult to ever decide with high confidence whether it is or not. Understanding how science progresses is an interesting and difficult question, and should not be reduced to brandishing bumper-sticker mottos to attack theoretical approaches to which we are not personally sympathetic.

This entry was posted in arxiv, Philosophy, Science.

40 Responses to Beyond Falsifiability

  1. Jordan Cox says:

    General Relativity posits unobservable entities (curved spacetime) to explain what we see. The multiverse is really not different from that. We didn’t see Newton’s mysterious force of gravity either; just its effects. Sounds to me like this is just a question of the underdetermination of ontology.

  2. Jayarava says:

    John Searle made an important distinction between a philosophy and a science that is not observed by everyone. A science is all about predictions and measurement, which always involve precision, accuracy, and error. The aim is to maximise the first two and minimise the third. A theory is useful to the extent it does these things. Whether it is “true” or not is more or less irrelevant to science. Technically Newton’s equations of motion are not true, but they are accurate and precise to some degree under some conditions, and therefore useful under those conditions.

    A philosophy is a framework for thinking about these results. A philosophy does concern itself with truth, or at least the relationship between the map and the territory. But it need not concern itself with precision and accuracy. It’s more about trying to understand the significance of the results for human beings. What kind of universe do we live in? How does that affect the way we live?

    I would say that the multiverse is a philosophical concept or framework for thinking about the world. It gives rise to science-type questions the same way a universe does. It is still important to know whether it is true, even if the practical consequences are negligible. Sometimes the truth is counter-intuitive, so naive assessments of the value of theories or contributions can be misleading.

  3. Paul Hess says:

    Many theories can explain the same observation set, subject of course to additional observations that might distinguish them.

    In those cases it seems reasonable to look to the theories that made the fewest additional/unconfirmed assumptions, or some other measure of complexity.

    This perspective definitely keeps the multiverse in the running, as long as it can justify its (additional?) implied complexity in terms of its explanatory trade-off in our observable universe.

    NOTE: I’m not a physicist or philosopher, so any reader should take my comments as that of an interested layperson.

  4. Bunsen Burner says:

    The problem is that we have a generation of physicists that have elevated the doctrine of falsificationism to the level of religious dogma, completely at odds with what Karl Popper actually wrote. Much of what you hear coming from Smolin or Steinhardt is critiqued in undergraduate philosophy classes, but we don’t seem to have any philosophically sophisticated scientists willing to provide guidance regarding better ways to think about scientific methodology.

  5. Dr Ken Beck says:

    Bravo Sean Carroll! Encore!

    I have many friends, including the former owner and creator of Particle Zoo (Julie Peasley, who I have known personally for over two decades) who took up support of science with art, understanding that was the ONLY way to talk about some concepts that would be missed by most folks. Plushy toys are great at conveying the dynamic 4D we live in and the sub-atomic basis and uncertainty of it all, for even a 5-6 year-old to appreciate…maybe 🙂
    We need art and words and writing and artists of this 4D world to explain our ideas and gain a following, although much of the simplicity and deeper understanding are lost with the rigor of mathematics. It is never easy or complex or hard. It is rich or less rich.

    We would not keep them if all we talked about was “Trust us, we’re physicists.” Or, “Nothing is known or can be known.” Religion and philosophy did that and failed. They have other purposes than to give us multiverses, a priori. No, and we should not expect respect if we did. We cannot predict the future with a percentage error. We cannot understand the multiverse without a firm foundation in the 4D spacetime we can measure.

    We have the tools! Emmy Noether’s “Law of Conservation” (LoC) => “symmetry”. If known variables and the quantity of stress-energy are unchanged, we may say that if we find a law of conservation we know to be true in our 4D spacetime, then we know its symmetry. Conversely, if we find a symmetry as we know it in 4D spacetime, it has a law of conservation we also know (like mass-energy being conserved), and then we know we are still in 4D spacetime. If we cannot, and we can take the necessary data and show it, then a discontinuity has been found. We should be prepared to understand if a singularity or discontinuity appears and disappears. We should expect them to occur! After all, how stable is 5D or 6D, or even our 4D spacetime? No one can even give a ballpark figure based on data taken. Our 4D spacetime may very well be in competition with other forms across all of space and time. Thank you again!

  6. Swami says:

    Nothing wrong with considering the concept of a cosmological multiverse as a theory of Natural Philosophy.

  7. Dr Ken Beck says:


    You are correct.

    We cannot predict the future without a percentage error. We cannot understand the multiverse without a firm foundation in 4D spacetime we can measure.

  8. anon says:

    “it makes definite statements about how things are, it has explanatory power for phenomena we do observe empirically, and our credence in it can go up or down on the basis of both observations and improvements in our theoretical understanding”

    If that means it makes predictions that can be tested by observations, then it is (partially) falsifiable.
    If that does not mean it makes predictions that can be tested by observations, then the sentence is quite misleading.

  9. Art says:

    I find your comment about what is scientific unfortunately imprecise. About the multiverse, you wrote that it is scientific because “it makes definite statements about how things are, it has explanatory power for phenomena we do observe empirically, and our credence in it can go up or down on the basis of both observations and improvements in our theoretical understanding.” Actually, religious texts also make ‘definite statements about how things are’, providing explanations for the creation of everything about us. The one feature you add in your comment is the reference to ‘theoretical understanding’, meaning I assume there is some overarching mathematical framework. So, your argument appears to say that one can call philosophy (in the sense that Jayarava describes it) science as long as it has some mathematical framework. I don’t agree. It’s natural philosophy, with the emphasis on the latter – not science.

  10. BobC says:

    As an armchair (well, couch) philosopher/physicist, I’m often forced to take a holistic (i.e., simplistic, over-generalized) view.

    First, falsifiability, as a term, is most often applied in hindsight, via the past tense, as new theories supplant prior ones. I’d posit that most new theories are “non-falsifiable” until the experimentalists have had abundant time to chew on them, and/or the theorists have had time to generalize/specialize.

    Second, there are innate tensions between philosophy, theory and practice. The boundaries are far from fixed, and concepts once securely relegated to philosophy have made their way through theory all the way to practice (though sometimes taking a century or more). It’s a continuum: There are no walls.

    When it comes to cosmological multiverses, I think it is an exciting concept that encourages us to ponder how the edges of our observable universe could be affected by adjacent regions with slightly different rules.

    I’m specifically interested in the surprisingly different values for the Hubble Constant obtained by two very different methods: First, by chaining “standard candles” (which I think of as the “near-to-far” method), and second by direct CMB measurements (“far-to-near”). Could the CMB method be affected by neighboring-region effects? Could the differences between the two methods be used to quantify possible characteristics of regions outside our observable universe?

    Of course, the differences are first being used to reexamine the chain of standard candles, because that’s where the data is. But if that reexamination yields nothing, and the standard candles burn brightly, then I fully expect the cosmological multiverse will be brought to bear. After all, the vast majority of the material present when the CMB was emitted has passed far beyond our observable horizon.

  11. Peter Woit says:

    The naive falsifiability argument you are addressing is a straw man argument. The sources you quote don’t make this argument, and you are ignoring the actual arguments they do make. I wish you would address those arguments instead of the straw man one. For more details of some of the arguments you are ignoring, see

  12. Dr Ken Beck says:

    For EVERYBODY: please indicate which comment, and to whom, you are directing your own comment. I read every comment folks make, but cannot follow who is talking to whom. I just assume everyone is talking about the author of the paper?


  13. Shecky R says:

    Astrology, homeopathy, creationism, etc. ‘make definite statements about how things are, have explanatory power for phenomena we do observe empirically, and our credence in them can go up or down on the basis of both observations and improvements in our theoretical understanding.’ Are they boringly, conventionally scientific? (…You’re setting a low bar.)

  14. Ron says:

    This is excellent, Dr. Carroll!

    I’ve always felt that Popper was hampered by his strict frequentist interpretation of probability. From a frequentist approach, of course “confirmation” of theories is impossible, because probability doesn’t refer to degrees of confidence. But from a Bayesian perspective, it is trivially easy to show how theories can be confirmed by evidence. And it is no problem for the Bayesian that proponents of theories are able to, as you say, “come up with a story within the appropriate theory that seemed to fit all of the evidence,” because these stories can only help to explain data (i.e., increase the likelihood of data on the hypothesis) by introducing auxiliary hypotheses that simultaneously drive down the prior probability of the theory. In other words, Bayesianism shows exactly why the introduction of these ad hoc stories can’t help theories avoid disconfirmation from contrary evidence.
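Ron’s Bayesian point is easy to make concrete. A minimal numeric sketch (the priors and likelihoods are invented purely for illustration): patching a theory with an ad hoc auxiliary hypothesis can raise the likelihood of the evidence, but the conjunction carries a smaller prior, so the patch buys no net credence.

```python
# Toy Bayesian update: evidence E confirms hypothesis H when P(E|H) > P(E|not-H).
def posterior(prior, like_h, like_not_h):
    """Posterior probability of H after observing evidence E (Bayes' theorem)."""
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Plain theory T: modest prior, predicts the evidence fairly well.
p_plain = posterior(0.5, 0.9, 0.3)    # = 0.75, so E confirms T

# Patched theory T-and-A: the ad hoc auxiliary A makes E certain,
# but the conjunction's prior is much smaller than T's alone.
p_patched = posterior(0.1, 1.0, 0.3)  # lower than p_plain: the patch doesn't help

print(p_plain, p_patched)
```

The numbers are arbitrary; the structure is the point: boosting Pr(E|H) by conjoining auxiliary hypotheses simultaneously lowers the prior, which is exactly Ron’s observation.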

  15. Marcelo says:

    It might be wise to go back to first principles.
    Let’s say we have a “Theory of Roulette” whose prediction is: “The next number will be uniformly distributed in the interval 0..36”.
    According to Popper, such an unfalsifiable theory would be “unscientific” not because it is untrue or inconsistent or lacks predictive power; it is unscientific because it is useless.
    A useful roulette theory would deliver information one would like to pay consulting fees for. For example, it would assert deviations from the uniform distribution. In so doing, it becomes falsifiable, because *in principle* one can go to a casino and run a battery of statistical tests, whereas in the case of unfalsifiability, *in principle*, whatever the outcome, it will be right, but then no subtle anti-popperian philosopher would put his/her money where his/her mouth is.
    The whole point of a science, as Popper chose to define it, is not to be “right” in this sense, but to be “right” about things that could have been otherwise.
    Let us recall the comparison Popper drew between GR and Psychoanalysis and its offshoots. GR made a non-obvious prediction of an as yet unobserved phenomenon. Even better: a quantitative prediction. If things had turned out differently, the theory would have been disproven. It was disprovable because it pre-dicted (i.e., took the risk of saying *in advance* what was going to happen).
    This situation was in stark contrast with Psychoanalysis, which is able to explain things only after they are observed, whatever the observation. By construction, no counterexample is conceivable; it cannot *in principle* be put to the test. It is always “right” but, because of this a priori invulnerability, its information content amounts to nothing, like the roulette.
    There is a continuum of falsifiability. Popper noticed that the more falsifiable a theory, the more informative it was. In other words: the more precise the predictions, the more we know. This can also be understood in an information-theoretic sense.
    We should not confuse interpretation with predictive power. Feynman didn’t feel the need to read the existence of a curved space-time from the GR equations, but he certainly surrendered to its numerical predictions: clocks, rods and all that. In the same vein, there might be no need to interpret the Multiverse at face value but only as a heuristic approach to new equations. A legitimate approach, insofar as it pre-dicts something new instead of just repeating what we already know. It must prove itself fruitful, to prove that it adds value. Idly postulating universes is just… postulating, with the added deleterious effect that we now feel the need to discuss if Psychoanalysis and Homeopathy have not been a fountain of knowledge all along…
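Marcelo’s roulette contrast can be sketched in code. The uniform “theory” is compatible with any outcome, but a theory asserting a specific deviation is testable with a textbook chi-square statistic against the uniform null. A sketch assuming a 37-pocket European wheel; the bias and sample size are invented:

```python
import random

def chi_square_uniform(spins, k=37):
    """Chi-square statistic for the hypothesis 'all k numbers are equally likely'."""
    counts = [0] * k
    for s in spins:
        counts[s] += 1
    expected = len(spins) / k
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(0)
fair = [rng.randrange(37) for _ in range(37000)]
# A crooked wheel: 10% of spins are forced to land on 17.
biased = [17 if rng.random() < 0.1 else rng.randrange(37) for _ in range(37000)]

# The 5% critical value for 36 degrees of freedom is roughly 51.
print(chi_square_uniform(fair), chi_square_uniform(biased))
```

A fair sample typically yields a statistic near 36 (its expected value), while the biased sample’s statistic is enormous. The deviation theory makes a claim that data can actually refute, which is Marcelo’s point about falsifiability tracking informativeness.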

  16. Haruki Chou says:

    “…a certain cartoonish version of his views has been elevated to the status of a sacred principle, and that’s a mistake.”
    Same thing happened with Aristotle’s views, which became dogmas for centuries in the hands of others.

  17. Alan says:

    It occurs to me that the multiverse relies on naturalism holding true out to an arbitrary future time, an unwarranted assumption and something hoped for by many; yet people’s anomalous experiences (the elephant in the room, and we all know what they are) indicate otherwise. So another form of inquiry is required. They are probably not even compatible with the multiverse.

  18. anon says:

    My bet is the vast majority of physicists dislike the multiverse AND have never heard of Popper in their lives.

  19. Kevin S. Van Horn says:

    I find it odd that someone who promotes Bayesian inference would argue against falsifiability, as Bayesian inference provides a clear criterion for how a hypothesis is falsified, at least in principle. Let D stand for our data, H0 the hypothesis of interest, and H1 any other alternative hypothesis (H0 and H1 are mutually exclusive). Then if

    Pr(D | H0) / Pr(D | H1)

    is very small, we may conclude that H0 is, with high probability, falsified. See “Closed Worlds and Bayesian Inference” at

    There are, of course, practical difficulties here: 1) the difficulty of constructing defensible priors for H0, H1, and their parameters, and 2) the computational difficulty of computing Bayes’ factors.
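Kevin’s criterion is straightforward to illustrate numerically. A toy example with invented numbers (not anything from the cited article): let H0 say a coin is fair, let H1 say it lands heads 80% of the time, and suppose we observe 80 heads in 100 flips.

```python
import math

def log_likelihood(p_heads, heads, flips):
    """Log of Pr(D | H) for a coin-flip sequence (binomial factor cancels in the ratio)."""
    return heads * math.log(p_heads) + (flips - heads) * math.log(1 - p_heads)

heads, flips = 80, 100
log_bf = log_likelihood(0.5, heads, flips) - log_likelihood(0.8, heads, flips)
bayes_factor = math.exp(log_bf)  # Pr(D | H0) / Pr(D | H1)

# The ratio is astronomically small, so H0 is, with high probability, falsified.
print(bayes_factor)
```

This is the comment’s criterion in miniature: a tiny value of Pr(D | H0) / Pr(D | H1) is what “falsified” cashes out to in Bayesian terms.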

  20. KC Lee says:


    Your statement “of course in an expanding universe there will be a horizon past which it is impossible to see” applies to the observable horizon and the event horizon. At both, to observers on one side, things on the other side have become impossible to see.

    When applied to both sides symmetrically, a “global” (God’s eye) view shows no change in the nature of “things” on either side. The horizon is only a “local” phenomenon arising simply because observers are limited by the speed of light.

    If in fact there is no real change across a horizon, there is no need to assume different physics, universes etc.?


  21. Neil says:

    It seems to me that the problem with the multiverse is that it is unconstrained. A good theory should not be slack. It should constrain what can happen. And what we can observe. I don’t see the multiverse theory, if that is what it is, doing that.

  22. C. A. Martinson says:

    I will not comment here about the logic of inferences. But you and your readers may find the following article of some use.

  23. Paul Hayes says:

    Kevin S. Van Horn,

    Surely we may only conclude that H0 is, with high probability, falsified relative to H1? Wouldn’t it be better to just drop the word “falsified” altogether (because of the bad ideas about probability and inference which it carries with it)?

  24. Bruce says:

    In regard to what Popper was concerned with, a previous post by Peter Monnerjahn is helpful. Nevertheless, I suggest that most scientists today consider falsification a (or the) key tool for distinguishing theories which are more likely to be true from those which are less certain. This, I think, is the key issue – not the demarcation of science from non-science, but of trustworthy science from speculation.

  25. Kevin S. Van Horn says:

    Paul: no, H0 is falsified, period. Allowing for additional alternatives only reduces the posterior probability of H0 further. That is,

    Pr(H0 | D, H0 or H1) >= Pr(H0 | D, H0 or H1 or … or Hn)

    You only need to find ONE alternative that is much more probable than H0 to conclude that H0 is highly improbable.
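A quick numeric check of that claim (with made-up likelihoods and, for simplicity, equal priors over whichever hypotheses are admitted): enlarging the set of alternatives can only shrink H0’s posterior.

```python
def posterior_h0(likelihoods):
    """Posterior of H0 (the first entry) under equal priors over the listed hypotheses."""
    return likelihoods[0] / sum(likelihoods)

few  = posterior_h0([0.01, 0.5])              # H0 against one strong alternative
many = posterior_h0([0.01, 0.5, 0.4, 0.3])    # H0 against several alternatives

# Adding alternatives only lowers Pr(H0 | D); one good rival already suffices.
print(few, many)
```

With equal priors the posterior is just the normalized likelihood, so every extra alternative enlarges the denominator, which is the comment’s point: one alternative far more probable than H0 is enough to render H0 highly improbable.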