Maybe We Do Not Live in a Simulation: The Resolution Conundrum

Greetings from bucolic Banff, Canada, where we’re finishing up the biennial Foundational Questions Institute conference. To a large extent, this event fulfills the maxim that physicists like to fly to beautiful, exotic locations, and once there they sit in hotel rooms and talk to other physicists. We did manage to sneak out into nature a couple of times, but even there we were tasked with discussing profound questions about the nature of reality. Evidence: here is Steve Giddings, our discussion leader on a trip up the Banff Gondola, being protected from the rain as he courageously took notes on our debate over “What Is an Event?” (My answer: an outdated notion, a relic of our past classical ontologies.)


One fun part of the conference was a “Science Speed-Dating” event, where a few of the scientists and philosophers sat at tables to chat with interested folks who switched tables every twenty minutes. One of the participants was philosopher David Chalmers, who decided to talk about the question of whether we live in a computer simulation. You probably heard about this idea long ago, but public discussion of the possibility was recently re-ignited when Elon Musk came out as an advocate.

At David’s table, one of the younger audience members raised a good point: even simulated civilizations will have the ability to run simulations of their own. But a simulated civilization won’t have access to as much computing power as the one that is simulating it, so the lower-level sims will necessarily have lower resolution. No matter how powerful the top-level civilization might be, there will be a bottom level that doesn’t actually have the ability to run realistic civilizations at all.

This raises a conundrum, I suggest, for the standard simulation argument — i.e. not only the offhand suggestion “maybe we live in a simulation,” but the positive assertion that we probably do. Here is one version of that argument:

  1. We can easily imagine creating many simulated civilizations.
  2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
  3. Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.
  4. Likewise, it is easy to imagine that our universe is just one of a large number of universes being simulated by a higher civilization.
  5. Given a meta-universe with many observers (perhaps of some specified type), we should assume we are typical within the set of all such observers.
  6. A typical observer is likely to be in one of the simulations (at some level), rather than a member of the top-level civilization.
  7. Therefore, we probably live in a simulation.

Of course one is welcome to poke holes in any of the steps of this argument. But let’s for the moment imagine that we accept them. And let’s add the observation that the hierarchy of simulations eventually bottoms out, at a set of sims that don’t themselves have the ability to perform effective simulations. Given the above logic, including the idea that civilizations that have the ability to construct simulations usually construct many of them, we inevitably conclude:

  • We probably live in the lowest-level simulation, the one without an ability to perform effective simulations. That’s where the vast majority of observers are to be found.
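The concentration at the bottom is simple arithmetic. As an illustrative sketch (the branching factor k and the depth d are made-up parameters, not anything the argument specifies), suppose every civilization capable of simulating runs k simulations, and the hierarchy bottoms out after d levels:

```python
# Toy model: count civilizations at each level of a simulation hierarchy.
# Assumptions (illustrative only): one top-level civilization, each capable
# civilization runs k simulations, and the hierarchy bottoms out at depth d.

def level_counts(k, d):
    """Number of civilizations at levels 0 (top) through d (bottom)."""
    return [k ** level for level in range(d + 1)]

k, d = 10, 6
counts = level_counts(k, d)
bottom_fraction = counts[-1] / sum(counts)

print(f"civilizations per level: {counts}")
print(f"fraction at the bottom level: {bottom_fraction:.3f}")
```

With k = 10 and d = 6, about 90% of all civilizations sit at the bottom level, and that fraction only grows as k does.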

Hopefully the conundrum is clear. The argument started with the premise that it wasn’t that hard to imagine simulating a civilization — but the conclusion is that we shouldn’t be able to do that at all. This is a contradiction, therefore one of the premises must be false.

This isn’t such an unusual outcome in these quasi-anthropic “we are typical observers” kinds of arguments. The measure on all such observers often gets concentrated on some particular subset of the distribution, one that might not look like us at all. In multiverse cosmology this shows up as the “youngness paradox.”

Personally I think that premise 1. (it’s easy to perform simulations) is a bit questionable, and premise 5. (we should assume we are typical observers) is more or less completely without justification. If we know that we are members of some very homogeneous ensemble, where every member is basically the same, then by all means typicality is a sensible assumption. But when ensembles are highly heterogeneous, and we actually know something about our specific situation, there’s no reason to assume we are typical. As James Hartle and Mark Srednicki have pointed out, that’s a fake kind of humility — by asserting that “we are typical” in the multiverse, we’re actually claiming that “typical observers are like us.” Who’s to say that is true?

I highly doubt this is an original argument, so probably simulation cognoscenti have debated it back and forth, and likely there are standard responses. But it illustrates the trickiness of reasoning about who we are in a very big cosmos.

This entry was posted in Philosophy, Science.

102 Responses to Maybe We Do Not Live in a Simulation: The Resolution Conundrum

  1. Steve Ruis says:

    I am constantly warning students about the problem of small data sets. This is what we are seeing here. Religious-types as well as science-types all base their thinking on our vast experience gathered on one planet of one star in a universe with over 100 billion galaxies containing 100 plus billion stars each and untold numbers of planets associated with each star. From our experience we extrapolate! And as any scientist can tell you, extrapolation is a very weak tool, nowhere near as good as interpolation.

    So, could we all be living in a simulation? Sure, why not? Could the universe be very young? Sure, it might have been made 15 minutes ago by an omnipotent being who implanted false memories in all of our heads. But really … are we going to spend a lot of our time exploring such possibilities? Really? What a waste.

  2. James says:

    I think this is true for cellular automata simulations, but not so-called “particle database” simulations where each component of the universe is stored without any special space-associated framework. There is a continuum of hybrids.

    The practical advantage of discussing simulations is the ability to support agnosticism (if you simulated a universe on your computer, would you want the simulants to worship you and waste time praying?) which in turn supports education in evolutionary genetics and radiochemistry, e.g.,

  3. BobC says:

    Even limited compute resources are infinite in time. Ergo, “slowing down time” makes all simulations equivalent, independent of their level in the simulation hierarchy.

    An Arduino, given enough time (and storage resources), can do anything any supercomputer can do. Of course, our own universe may reach heat death before anything useful is computed by an Arduino simulating a supercomputer…

    The fundamental question becomes not where we are in the simulation hierarchy, but what our time rate is compared to our adjacent hierarchy levels. And since we observe time flowing at “1 second per second”, we should be able to assume/infer that ALL simulations may observe their own time flow as being that same (subjective) rate.

    Is taking a strictly computational approach to simulation adequate? What is missing? Do we need to consider factors such as dimensionality, particularly regarding holographic projections into higher dimensions?

  4. Ben Goren says:

    There are two ways of looking at these sorts of conundrums that are particularly useful.

    The first is to observe that, as with all conspiracies, it’s impossible to disprove — no matter what sort of superpowers you might imagine an entity might have. The programmers of the Matrix may well themselves be simulated by a super-Matrix, and logic similar to that used to demonstrate the undecidability of Turing’s Halting Problem will show that this question can’t be settled any better. Even if you want to go fully theological, Jesus can’t rule out the possibility that Satan is even more powerful than Jesus and Satan is just making Jesus think Jesus is in charge. Ultimately, all conspiracy theories follow this pattern…maybe the truth is that the CIA is controlling us via our dental implants, but the CIA is a program on the Holodeck, and the Red King from Through the Looking-Glass is dreaming all of the Star Trek universe, including its Holodeck, into existence.

    The other, even more important bit to realize…is that, even if we are “dupes” of some sort of grand conspiracy theory…the “fake” reality we find ourselves in is an absolutely wonderful playground to be able to run around in. Who cares if there’s something even bigger “outside,” when we’ve got a baker’s dozen billion years and hundreds of billions of light years already so far beyond our reach, down to scales and up to energies even our best experiments can only barely begin to hint at?

    I mean, sure, it’d be nifty to know that the Schrödinger Equation is really just a subroutine of some hyper-ultra-mega-super-dupercomputer…but would that really change the way we live our lives?

    So, I remain as unconvinced by this favored-of-the-day conspiracy theory as I’m unconvinced by all the others throughout history. But, even if we do some day find good reason to believe that such-and-such a conspiracy is really true…it’ll have as much impact on my daily life as a confirmation of SUSY (or whatever) coming out of CERN will.



  5. James Cross says:

    Why would there be higher and lower levels?

    That seems to be assuming that the finite universe we think we know is actually finite as opposed to simulated finite. Our own civilization or other civilizations in our simulation might be able to access more computing resources than are apparently available to us now in this simulation. For all we know all civilizations might have access to infinite resources but can only create simulations of universes that seem finite.

  6. James says:

    What if the resolution that is getting poorer as you go down the chain is the time variable?

    Let’s say I develop a computer which simulates a universe. But it can only simulate that universe relatively slowly, i.e. slower than the universe that is doing the simulating (which intuitively makes sense to me). Say it would take a year of “real” universe time to simulate a month of simulated time. Someone in that simulated universe could then create a similar simulated universe with the same constraints from their perspective. From our perspective, they are operating at a lower resolution, taking 12 of our years to simulate a month of time, but from their perspective they have only taken a year.
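    The compounding is easy to tabulate. As a toy calculation using the hypothetical factor above (12 host-months per simulated month — an assumption from the example, not a claim about real simulations), the top-level cost of one month of simulated time grows geometrically with nesting depth:

```python
# Toy calculation: nested simulations each run 12x slower than their host.
# The factor 12 comes from the hypothetical example above.

SLOWDOWN_PER_LEVEL = 12

def top_level_years_per_simulated_month(depth):
    """Top-level years needed to simulate one month of time at `depth`
    levels below the top (depth 1 is the first simulation)."""
    # One month at depth 1 costs 12 months = 1 top-level year;
    # each further level multiplies the cost by 12.
    return SLOWDOWN_PER_LEVEL ** (depth - 1)

for depth in range(1, 6):
    years = top_level_years_per_simulated_month(depth)
    print(f"depth {depth}: {years} top-level years per simulated month")
```

    Five levels down, a single simulated month already costs over twenty thousand top-level years, even though every level perceives its own clock running normally.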

  7. Ben Goren says:

    An Arduino, given enough time (and storage resources), can do anything any supercomputer can do. Of course, our own universe may reach heat death before anything useful is computed by an Arduino simulating a supercomputer…

    The Arduino doesn’t have anywhere near enough storage to simulate the supercomputer, even in principle. Of necessity, for one computer to completely simulate another, the one doing the simulating needs at least a little bit more memory than the one being simulated.

    And computers aren’t magic Platonic entities free from the laws of physics, even if it’s often very convenient to pretend that they are. The computer is still going to be subject to thermodynamic limits, still going to need energy to do its bit, still subject to wear and tear, and so on. It might be superbly efficient, but its efficiency is still less than perfect.

    There’s a much bigger question of how detailed you want the simulation to be.

    On the one hand, one could posit a simulation just enough computationally bigger than your brain and sensory input to fool you — meaning you’re also fooled about the existence of CERN and all the physics they’ve discovered. The LHC doesn’t exist; all that exists of it is your memory of reading about it somewhere.

    That’s a rather extreme example, but one that you can no more rule out than its opposite extreme…that the simulation covers the entire Universe down to Planck scales. But that would mean that the “real” or “outside” universe is vastly bigger than the one we observe, to the point that it hardly makes sense to describe the computer doing the simulation in such terms. It would make much more sense to describe it as a branch in a cosmological multiverse, for it’s certain that, whatever it is, it’s not made of anything remotely like the silicon-based transistors we know and love. The physics of such a “real” universe would be radically different from ours, because any computer built from chemistry like ours would collapse into a black hole long before it grew big enough for such a task.

    Thanks to chaos and similar concepts, even a city-sized simulation at the level of detail that physicists have probed would need computers that likely couldn’t be built with the physics of our universe — which gets right to one of Sean’s points: that the assumption that, if simulated, we’re typical, is a bad assumption.

    (If you assume an actively deceptive simulation, one that watches for people attempting to perform detailed physics experiments and directly manipulates their perceptions without simulating the physics of the experiment in detail, you can get away with a lot more. But now we’re getting into levels of paranoia that are better addressed by a competent mental health professional than by serious academics.)



  8. Ben Goren says:

    Our own civilization or other civilizations in our simulation might be able to access more computing resources than are apparently available to us now in this simulation.

    That contradicts assumption 5 in Sean’s summary of the argument, which says that we should assume we’re typical. We’re nowhere near having access to infinite computing resources; indeed, such would be a gross violation of all our conservation laws. If that’s “typical,” then we most emphatically are atypical. However, it’s at best supremely difficult to construct a coherent physics that lacks conservation, so we should be confident that, whatever the ultimate nature of reality, conservation in some form or another is really real.

    The smart money never bets against thermodynamics….



  9. James Cross says:


    Sean himself stated that assumption 5 was without justification.

    Conservation and thermodynamics are just part of the simulation to make things interesting.

  10. Ben Goren says:

    James, I don’t think you can dismiss conservation and thermodynamics so casually. A great deal of it is as logically essential and obvious as, for example, the Bell curve distribution you get from analyzing fair coin tosses.

    Worse…we know of a great deal of “stuff” that is overwhelmingly finite, and the finite and infinite really don’t mix and match well. If you had infinite money in your bank account, you wouldn’t be able to spend any of it because it would be worthless.

    That’s part of why Lawrence Krauss’s Universe from “Nothing” is so exciting. As he notes, the total energy of the Universe is zero — meaning that the books balance. And if the books balance, then you can spend the money in your bank account, because it’s actually worth something.

    Ignoring the simulation topic for a moment, I’d suggest that, even if time and / or space are infinite, something akin to the way the Born rule emerges from the Many-Worlds interpretation would have to apply. That is, maybe there’re infinite instances of everything happening, including such absurdities as the air in your room “randomly” spontaneously rearranging itself into a fire-breathing dragon. But there’re so many more infinite instances of the air in your room behaving reasonably that, for all practical purposes, those infinite universes with dragons might as well not exist. Similarly, there may well be an infinite number of instances of Hamlet encoded in ASCII in π, but they’re so few and far between the random-seeming gibberish that they might as well not be there.

    Some variation on that theme, it would seem to me, would be the only hope one might have for creating the universe we actually have and live in from an infinite framework of any kind — including any sort of computer-with-infinite-resources simulation.
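    That sparseness can be put in numbers. Under the (unproven) assumption that the digits of π behave like independent uniform random digits, and encoding each character as two decimal digits, a specific n-character string should turn up roughly once every 10^(2n) digit positions:

```python
# Expected spacing of a specific ASCII-encoded string in uniformly random
# decimal digits. Assumptions: 2 digits per character, and pi's digits
# behaving like independent uniform random digits (conjectured, not proven).

def expected_gap_in_digits(text):
    """Approximate digit positions per expected occurrence of `text`."""
    return 10 ** (2 * len(text))

for phrase in ["To be", "To be, or not to be"]:
    print(f"{phrase!r}: roughly one occurrence per 1e{2 * len(phrase)} digits")
```

    Even a five-character snippet appears only about once per ten billion digits; the full line needs on the order of 10^38, which is the “few and far between” point in quantitative form.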



  11. Paul Clapham says:

    Seems to me that the resolution conundrum could be explained like this: it’s only a conundrum if the simulation produces an accurate version of the simulators’ universe. But we can see that isn’t the case because our simulation isn’t even competent. (Most proponents of the simulation hypothesis assume competent simulators as far as I can see.)

    Consider the galaxies. Our simulators set them to rotate in a simple way, but then some astronomers noticed that wasn’t the right way for galaxies to rotate. So a bug report was filed and the simulators had to come up with a feature which made it right. Hence dark matter. A valiant patch attempt, but now it seems that other astronomers are noticing that it has its own problems. No doubt the simulators are working on another feature to make dark matter work right, but it doesn’t look like they have implemented one yet.

  12. James Cross says:


    Half of the people can be part right all of the time
    Some of the people can be all right part of the time
    But all of the people can’t be all right all of the time
    I think Abraham Lincoln said that
    “I’ll let you be in my simulations if I can be in yours”

  13. AJ Hill says:

    If we exist in a simulation, when did that simulation begin? Did it really start running (in simulation time) with the Big Bang nearly fourteen billion years ago or was all or most of the history of the universe merely an initial condition imposed at some intervening start point? If the simulation has actually run during the nine billion years that elapsed before our solar system formed, did the simulators monitor other parts of the universe, where presumably more interesting things were happening. And, when things finally started rolling in our microscopically small corner of creation, did “they” actually pay attention, while the dinosaurs roamed the Earth, eating each other, procreating, and doing precious little else for two hundred million years? (For that matter was Chicxulub a gesture of impatience?) If we make the stupendously egotistical assumption that humanity has been the object of the simulation all along (something I find almost impossible to believe), when did we become interesting enough to justify the exercise? Or have we? Is it possible or even likely that humanity is nothing more than a trivial epiphenomenon and that the really significant developments – the point of it all – are taking place elsewhere or elsewhen in the universe?

  14. Max says:

    “Hopefully the conundrum is clear. The argument started with the premise that it wasn’t that hard to imagine simulating a civilization — but the conclusion is that we shouldn’t be able to do that at all. This is a contradiction, therefore one of the premises must be false.”

    … and at any rate the proponents of the simulation break even. If we manage to simulate a civilization, that will be taken as evidence that we ourselves live in a simulation (not the lowest-level one). If we never manage to simulate a civilization, that will serve as evidence that we are in the lowest-level simulation. Amen 🙂

  15. marten says:

    Can we live in a simulation if there is no space for determinism in the universe?

  16. Ben Goren says:

    Did it really start running (in simulation time) with the Big Bang nearly fourteen billion years ago or was all or most of the history of the universe merely an initial condition imposed at some intervening start point?

    If we’re going to take that sort of line of reasoning seriously, then we also have to take seriously the Laplacian perspective along with the entropic arrow of time and probably Everettian Many-Worlds as well — in which sense a sequential recreation of the history of the Universe likely doesn’t even make sense to begin with. You’d pretty much have to have all of everything existing simultaneously in some form or another. I don’t think there’s any meaningful way in which you could do that thorough an approximation of physics piecemeal or sequentially.

    An actual physics-level simulation is pretty much absolutely off the table — which means the only remaining reasonable alternatives are actively and heavily conspiratorial, such that, at some level, any simulated individuals are deceived about the results of the experiments at CERN and NASA and elsewhere. Maybe the simulation is directly generating the data for the detectors at CERN without bothering to simulate collisions; maybe it’s just tricking the researchers at CERN into thinking they ran the experiment; maybe it’s just fooling me into thinking that CERN is real and I’m the only person in the simulation.

    None of those possibilities are very useful theories, and there’s certainly no evidence supporting any of them. But they’re the least implausible ones open to proponents of the simulation hypothesis.



  17. eric says:

    Agree with 1 and 5 being problematic, especially considering that out of the ~200,000 years humans have been on the planet, we’ve only been able to run computational simulations for about 60 years. That means that regardless of what imaginary activities other imaginary civilizations might do, the empirical evidence we have at hand points to (a) simulation is not easy and (b) 21st century humans are not ‘typically situated.’ We are very atypical humans.

    6 is also problematic because it assumes the simulated people would have the sentience or awareness enough to be numbered among the observers. Again, all the evidence we have so far disagrees with this; while we certainly can create all sorts of simulations, out of the approximately 7 billion observers we know about, all of them are ‘top level’ and none of them are simulations, because our simulated people don’t have the capacity to see themselves as observers. Maybe it’s the case that AI-type simulations are so hard that the ratio stays 1,000,000,000:1. Maybe it drops to 10:1. Maybe it drops to 1:10. We don’t know. But this argument depends on the ratio being 1:[large], so the conclusion can’t be any stronger than that assumption (which, IMO, is not strong).
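    The dependence on that ratio is plain arithmetic: if there are r simulated observers per real one, typicality (premise 5) gives a probability of r/(r+1) of being simulated, which is only close to certainty when r is enormous. A quick sketch (the sample values of r are illustrative, not estimates):

```python
# Probability of being simulated under the typicality assumption,
# as a function of r = simulated observers per real observer.

def p_simulated(r):
    """P(we are simulated) if an observer is drawn uniformly at random."""
    return r / (r + 1)

for r in [0.1, 1, 10, 1_000_000]:
    print(f"r = {r}: P(simulated) = {p_simulated(r):.4f}")
```

    At r = 1 the argument gives only even odds, and at r = 0.1 it points the other way; the strong conclusion needs the strong assumption.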

  18. BobC says:

    Perhaps I wasn’t abstract enough: Even an Arduino is far more powerful than needed to run any arbitrary simulation: A Turing Machine with “enough” tape can provably do ANY job a supercomputer can do. We’re talking conceptual math here: We’re not trying to envision what a universe simulator looks or works like, just postulating that it may exist.

    The key concept is to decouple time within the simulation from time outside it (in the scope of the system hosting the simulator, rather than the simulated system). The time flow each simulation in a hierarchy perceives would be “real time” for that simulation (1 second per second).

    Of course, I’m implicitly assuming time itself is a shared notion across all simulations (just running at different rates in each). My brain breaks trying to envision timeless or multi-time universes that can run simulations of universes like ours with a single time dimension. That is, if time is an emergent (not fundamental) property of our (simulated) universe, what is it in the universe that’s running our simulation?

  19. Ben Goren says:

    Even an Arduino is far more powerful than needed to run any arbitrary simulation: A Turing Machine with “enough” tape can provably do ANY job a supercomputer can do.

    But that’s just it — your two statements are contradictory. Arduinos (Arduini? Arduinopodes? Arduinoctopudlians?) don’t have anywhere near enough “tape” to do the same jobs that supercomputers do. And you could clear out your local office supply store’s stock of thumb drives and still come laughably short of the amount of “tape” you’d need.

    I’m not picking nits — especially in this context. In particular, any universe simulator is going to have to have enough “tape” for the universe as we observe it plus enough left over to do its own housekeeping. That, of necessity, means a computer bigger than the entire observable universe.

    It’s clearly not a theoretical problem, but it is an insurmountable practical problem for the simulation proponents — since it means that either the laws of physics in the universe in which the computer is running are radically different from ours (since, even if you scaled up the amount of “stuff” in our own universe to the amount you’d naïvely need for the job, your computer would collapse into a black hole long before you could come even remotely close to simulating a tiny corner of it) or else our own (simulated) universe is dramatically simpler than our current best understanding indicates.

    Both directly contradict the fifth premise Sean gives in his summary, leaving that particular argument (and any sufficiently similar ones) in tatters.

    …and, as to your last question…it’s in a similar vein. The entropic arrow of time is pretty well understood, but time at a microscopic level isn’t. But if you’re wondering if time in the universe running our simulation is the same, then you’re again presupposing that the fifth premise is radically incorrect. Even though we don’t know what microscopic time is, we do know that it’s essential to all of physics…and to suggest that it could be fundamentally different (as opposed to simply not synchronized) from time here is to suggest that all of physics is radically different — a suggestion not at all consistent with the fifth premise of the argument.



  20. a new refutation of time says:

    First paragraph:

    What Is an Event? … an outdated notion, a relic of our past classical ontologies.

    Second paragraph:

    One fun part of the conference was a “Science Speed-Dating” event

  21. I don’t think your argument produces a contradiction unless we can say something definitive about the distribution of the simulation hierarchy (and I’m sure we probably can’t). That is, it may be the case that you’re more likely to be in a bottom level simulation than at any other particular level, but that doesn’t necessarily mean you’re more likely to be in a bottom level simulation (a level that can’t simulate) than not (a level that can).

  22. Daniel Kerr says:

    I never understood the allure of arguing the probability of being simulated. If we are indeed in a simulation then with nearly 100% probability we are not the thing being simulated. I would argue that also with nearly 100% probability our experience of the universe as a 4 dimensional spacetime would also not be the construct being simulated. In that case, our experiences are a side effect of the simulation, as if we’re the fluctuation of an electric current in a circuit that has no bearing on the logic gates in the circuit board. At that point you could hardly say we’re in a simulation since we’re not strictly being “computed,” we’re just a feature of the original universe the “simulation” takes place in. A simulation is only a simulation when an observer is there to interpret it as such. Without an observer a simulation is just another part of the universe.

    With regards to the argument I would say premise 1 is false. If I create a program which perfectly simulates every atom in a human brain there’s no reason to believe the semiconductor physics that is eventually interpreted as that brain has the same conscious/observer-based experience I do. I do not believe such a simulation necessarily has an observer and if it does I doubt it’s the human brain being simulated.

    In conjunction with the above the whole argument reduces down to a “minimal resource observer” kind of argument which concludes that in any given universe the most likely observer is the one that requires the least resources in its construction. That seems like a sensible interpretation of the conclusion to me.

  23. Let’s go by the evidence. At present, a large number of online commentators are present at this blog, Twitter, Facebook, YouTube, and other sites on related or similar topics. The question is: do our online/digital interactions affect the way we think, which in turn affects the way we behave in real life? In other words, if we allow any interaction mediated by technology to influence us, then our real-world behaviour is affected, which includes our online/digital behaviour. Wikipedia defines it this way: “Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires that a model be developed; this model represents the key characteristics or behaviors/functions of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time.” By that definition, we are in the process of obtaining simulated digital/online interactions, because by allowing online/digital behaviour to influence our real-life actions we are creating a model of real-life behaviour. The question that arises next is: do human activities in interstellar space influence space itself? If entanglement were true then this should be the case.

  24. Bee says:

    Hi Sean,

    You might enjoy the argument I laid out here. In a nutshell I’m saying that creating simulations of people who create simulations of people who create simulations and so on is a great way to test the fundamental laws of nature because the simulated people must (loosely speaking) live on smaller ‘scales’ than the people simulating them.

    So, if we live in a simulation, our collective task is to probe short-distance physics (by building another simulation). I think all high-energy physicists should approve of this 😉



  25. arch1 says:

    This raises so many questions it’s hard to know where to start:
    1) Do we have any reason to believe the top level universe is finite? If not, can any use of “probably” be justified?
    2) Why would the simulation hierarchy necessarily bottom out? For example, what prevents an indefinitely long chain of simulations from each representing its parent with decent (not full) fidelity?
    3) What assumptions concerning system development considerations (efficient resource utilization, bugs, development lifecycle, project goals, etc.) are implicit in this argument? Are those assumptions plausible and if not how should the argument be modified?