I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here’s an explanatory video, an excerpt from *From Eternity to Here*, and slides from a talk. But I don’t think I’ve ever explained as persuasively as possible why I think it’s the right approach. So that’s what I’m going to try to do here. Although to be honest right off the bat, I’m actually going to tackle a slightly easier problem: explaining why the many-worlds approach is *not completely insane*, and indeed quite natural. The harder part is explaining why it actually *works*, which I’ll get to in another post.

Any discussion of Everettian quantum mechanics (“EQM”) comes with the baggage of pre-conceived notions. People have heard of it before, and have instinctive reactions to it, in a way that they don’t toward (for example) effective field theory. Hell, there is even an app, Universe Splitter, that lets you create new universes from your iPhone. (Seriously.) So we need to start by separating the silly objections to EQM from the serious worries.

The basic silly objection is that EQM postulates too many universes. In quantum mechanics, we can’t deterministically predict the outcomes of measurements. In EQM, that is dealt with by saying that *every measurement outcome “happens,”* but each in a different “universe” or “world.” Say we think of Schrödinger’s Cat: a sealed box inside of which we have a cat in a quantum superposition of “awake” and “asleep.” (No reason to kill the cat unnecessarily.) Textbook quantum mechanics says that opening the box and observing the cat “collapses the wave function” into one of two possible measurement outcomes, awake or asleep. Everett, by contrast, says that the universe splits in two: in one the cat is awake, and in the other the cat is asleep. Once split, the universes go their own ways, never to interact with each other again.

And to many people, that just seems like too much. Why, this objection goes, would anyone invent a huge — perhaps infinite! — number of different universes just to describe the simple act of quantum measurement? Measurement might be puzzling, but that’s no reason to lose all anchor to reality.

To see why objections along these lines are wrong-headed, let’s first think about classical mechanics rather than quantum mechanics. And let’s start with one universe: some collection of particles and fields and what have you, in some particular arrangement in space. Classical mechanics describes such a universe as a point in phase space — the collection of all positions and velocities of each particle or field.

What if, for some perverse reason, we wanted to describe two copies of such a universe (perhaps with some tiny difference between them, like an awake cat rather than a sleeping one)? We would have to double the size of phase space — create a mathematical structure that is large enough to describe both universes at once. In classical mechanics, then, it’s quite a bit of work to accommodate extra universes, and you better have a good reason to justify putting in that work. (Inflationary cosmology seems to do it, by implicitly assuming that phase space is already infinitely big.)
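To make the contrast concrete, here is a toy sketch (my own construction, not from the post) of why multiple classical universes cost extra structure: each added universe literally doubles the number of phase-space coordinates you must keep track of.

```python
import numpy as np

# One classical "universe": a single point in phase space,
# i.e. positions and velocities for each of N particles.
N = 3
rng = np.random.default_rng(0)
universe_a = {"x": rng.normal(size=N), "v": rng.normal(size=N)}

# To describe TWO classical universes we must enlarge the state itself:
# it is now a pair of phase-space points, twice as many numbers.
universe_b = {"x": universe_a["x"].copy(), "v": universe_a["v"].copy()}
universe_b["v"][0] += 1e-9   # a tiny difference (awake cat vs. asleep cat)

two_universes = (universe_a, universe_b)
print(len(two_universes))    # 2 -- the doubling is explicit, by hand
```

Nothing in classical mechanics hands you that second point for free; you have to put it in.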

That is *not what happens in quantum mechanics*. The capacity for describing multiple universes is *automatically there*. We don’t have to add anything.

The reason we can state this with such confidence is the fundamental reality of quantum mechanics: the existence of superpositions of different possible measurement outcomes. In classical mechanics, we have certain definite possible states, all of which are directly observable. It will be important for what comes later that the system we consider is microscopic, so let’s take a spinning particle that can have spin-up or spin-down. (It is directly analogous to Schrödinger’s cat: cat=particle, awake=spin-up, asleep=spin-down.) Classically, the possible states are

“spin is up”

or

“spin is down”.

Quantum mechanics says that the state of the particle can be a superposition of both possible measurement outcomes. It’s not that we don’t know whether the spin is up or down; it’s that it’s really in a superposition of both possibilities, at least until we observe it. We can denote such a state like this:

(“spin is up” + “spin is down”).

While classical states are points in phase space, quantum states are “wave functions” that live in something called Hilbert space. Hilbert space is very big — as we will see, it has room for lots of stuff.
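As a toy numerical sketch (mine, not from the post), the two spin states and their superposition can be written as unit vectors in a two-dimensional Hilbert space; the superposition is just as legitimate a state as either classical-looking possibility:

```python
import numpy as np

up = np.array([1.0, 0.0])     # "spin is up"
down = np.array([0.0, 1.0])   # "spin is down"

# ("spin is up" + "spin is down"), normalized so total probability is 1.
psi = (up + down) / np.sqrt(2)

print(np.linalg.norm(up))     # 1.0 -- a valid state
print(np.linalg.norm(psi))    # 1.0 -- the superposition is equally valid
```

The point is only that the sum of two allowed states, suitably normalized, is itself an allowed state; nothing extra had to be added to the space.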

To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the “apparatus.”

In any formulation of quantum mechanics, the apparatus starts in a “ready” state, which is a way of saying “it hasn’t yet looked at the thing it’s going to observe” (*i.e.*, the particle). More specifically, the apparatus is not entangled with the particle; their two states are independent of each other. So the quantum state of the particle+apparatus system starts out like this:

(“spin is up” + “spin is down” ; apparatus says “ready”) (1)
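State (1) can be sketched numerically as a tensor product (the basis encoding is my own choice): because the joint state factorizes into particle-state times apparatus-state, the two systems are unentangled, which shows up as a Schmidt rank of 1.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# A toy apparatus with three readout states: "ready", "up", "down".
ready = np.array([1.0, 0.0, 0.0])

particle = (up + down) / np.sqrt(2)      # particle in a superposition
state1 = np.kron(particle, ready)        # (particle ; apparatus says "ready")

# A product state has Schmidt rank 1: no entanglement between the factors.
rank = np.linalg.matrix_rank(state1.reshape(2, 3))
print(rank)   # 1
```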

The particle is in a superposition, but the apparatus is not. According to the textbook view, when the apparatus observes the particle, the quantum state collapses onto one of two possibilities:

(“spin is up”; apparatus says “up”)

or

(“spin is down”; apparatus says “down”).

When and how such collapse actually occurs is a bit vague — a huge problem with the textbook approach — but let’s not dig into that right now.

But there is clearly another possibility. If the particle can be in a superposition of two states, then so can the apparatus. So nothing stops us from writing down a state of the form

(spin is up ; apparatus says “up”)

+ (spin is down ; apparatus says “down”). (2)

The plus sign here is crucial. This is not a state representing one alternative or the other, as in the textbook view; it’s a superposition of both possibilities. In this kind of state, the spin of the particle is entangled with the readout of the apparatus.
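In the same toy encoding (again my own, not from the post), state (2) cannot be factored into a particle state times an apparatus state; a Schmidt rank of 2 is the numerical fingerprint of that entanglement.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
app_up = np.array([0.0, 1.0, 0.0])     # apparatus says "up"
app_down = np.array([0.0, 0.0, 1.0])   # apparatus says "down"

# State (2): (up ; "up") + (down ; "down"), normalized.
state2 = (np.kron(up, app_up) + np.kron(down, app_down)) / np.sqrt(2)

# Schmidt rank 2: the state is NOT a product of a particle state with
# an apparatus state -- particle and apparatus are entangled.
rank = np.linalg.matrix_rank(state2.reshape(2, 3))
print(rank)   # 2
```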

What would it be like to live in a world with the kind of quantum state we have written in (2)? It might seem a bit unrealistic at first glance; after all, when we observe real-world quantum systems it always *feels like* we see one outcome or the other. We never think that we ourselves are in a superposition of having achieved different measurement outcomes.

This is where the magic of decoherence comes in. (Everett himself actually had a clever argument that didn’t use decoherence explicitly, but we’ll take a more modern view.) I won’t go into the details here, but the basic idea isn’t too difficult. There are more things in the universe than our particle and the measuring apparatus; there is the rest of the Earth, and for that matter everything in outer space. That stuff — group it all together and call it the “environment” — has a quantum state also. We expect the apparatus to quickly become entangled with the environment, if only because photons and air molecules in the environment will keep bumping into the apparatus. As a result, even though a state of the form (2) is a superposition, the two different pieces (one with the particle spin-up, one with the particle spin-down) will never be able to interfere with each other. Interference (different parts of the wave function canceling each other out) demands a precise alignment of the quantum states, and once we lose information into the environment that becomes impossible. That’s decoherence.
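The loss of interference can be seen in a standard toy calculation (the parametrization is my own): once each branch is tagged by an orthogonal environment state, tracing out the environment kills the off-diagonal terms of the system’s density matrix, which is exactly where interference lives.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
env0, env1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthogonal env states

# Before entangling with the environment: a pure superposition.
psi = (up + down) / np.sqrt(2)
rho_before = np.outer(psi, psi.conj())

# After: each branch is tagged by its own (orthogonal) environment state.
full = (np.kron(up, env0) + np.kron(down, env1)) / np.sqrt(2)
rho_full = np.outer(full, full.conj()).reshape(2, 2, 2, 2)
rho_after = np.trace(rho_full, axis1=1, axis2=3)   # trace out the environment

print(rho_before)  # off-diagonals 0.5: interference still possible
print(rho_after)   # off-diagonals 0: the branches can no longer interfere
```

In a realistic situation the environment states are never exactly orthogonal, but photons and air molecules drive them exponentially close to it, which is why the suppression of interference is so effective.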

Once our quantum superposition involves macroscopic systems with many degrees of freedom that become entangled with an even-larger environment, the different terms in that superposition proceed to evolve completely independently of each other. *It is as if they have become distinct worlds* — because they have. We wouldn’t think of our pre-measurement state (1) as describing two different worlds; it’s just one world, in which the particle is in a superposition. But (2) has two worlds in it. The difference is that we can imagine undoing the superposition in (1) by carefully manipulating the particle, but in (2) the difference between the two branches has diffused into the environment and is lost there forever.

All of this exposition is building up to the following point: in order to describe a quantum state that includes two non-interacting “worlds” as in (2), *we didn’t have to add anything at all* to our description of the universe, unlike the classical case. All of the ingredients were already there!

Our only assumption was that the apparatus obeys the rules of quantum mechanics just as much as the particle does, which seems to be an extremely mild assumption if we think quantum mechanics is the correct theory of reality. Given that, we know that the particle can be in “spin-up” or “spin-down” states, and we also know that the apparatus can be in “ready” or “measured spin-up” or “measured spin-down” states. And if that’s true, the quantum state has the built-in ability to describe superpositions of non-interacting worlds. Not only did we not need to add anything to make it possible, we had no choice in the matter. **The potential for multiple worlds is always there in the quantum state, whether you like it or not.**

The next question would be, do multiple-world superpositions of the form written in (2) ever actually come into being? And the answer again is: yes, automatically, without any additional assumptions. It’s just the ordinary evolution of a quantum system according to Schrödinger’s equation. Indeed, the fact that a state that looks like (1) evolves into a state that looks like (2) under Schrödinger’s equation is what we mean when we say “this apparatus measures whether the spin is up or down.”
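That evolution from (1) to (2) can be sketched with a single unitary matrix (a CNOT-style interaction; the two-state apparatus here is my own simplification, with “ready” doubling as the “saw up” readout):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready = np.array([1.0, 0.0])   # toy 2-state apparatus; after the interaction,
                               # [1,0] means "saw up" and [0,1] means "saw down"

# A CNOT-type unitary: flip the apparatus iff the particle is spin-down.
# This is ordinary Schrodinger evolution (U = exp(-iHt) for a suitable H).
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

state1 = np.kron((up + down) / np.sqrt(2), ready)   # state (1)
state2 = U @ state1                                 # state (2)

target = (np.kron(up, np.array([1.0, 0.0]))
          + np.kron(down, np.array([0.0, 1.0]))) / np.sqrt(2)
print(np.allclose(state2, target))   # True
```

No collapse postulate appears anywhere; the branching is just what the unitary does to the product state.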

The conclusion, therefore, is that multiple worlds automatically occur in quantum mechanics. They are an inevitable part of the formalism. The only remaining question is: what are you going to do about it? There are three popular strategies on the market: anger, denial, and acceptance.

The “anger” strategy says “I hate the idea of multiple worlds with such a white-hot passion that I will *change the rules of quantum mechanics* in order to avoid them.” And people do this! In the four options listed here, both dynamical-collapse theories and hidden-variable theories are straightforward alterations of the conventional picture of quantum mechanics. In dynamical collapse, we change the evolution equation, by adding some explicitly stochastic probability of collapse. In hidden variables, we keep the Schrödinger equation intact, but add new variables — hidden ones, which we know must be explicitly non-local. Of course there is currently zero empirical evidence for these rather *ad hoc* modifications of the formalism, but hey, you never know.

The “denial” strategy says “The idea of multiple worlds is so profoundly upsetting to me that I will *deny the existence of reality* in order to escape having to think about it.” Advocates of this approach don’t actually put it that way, but I’m being polemical rather than conciliatory in this particular post. And I don’t think it’s an unfair characterization. This is the quantum Bayesianism approach, or more generally “psi-epistemic” approaches. The idea is to simply deny that the quantum state represents anything about reality; it is merely a way of keeping track of the probability of future measurement outcomes. Is the particle spin-up, or spin-down, or both? Neither! There is no particle, there is no spoon, nor is there the state of the particle’s spin; there is only the probability of seeing the spin in different conditions once one performs a measurement. I advocate listening to David Albert’s take at our WSF panel.

The final strategy is acceptance. That is the Everettian approach. The formalism of quantum mechanics, in this view, consists of quantum states as described above and nothing more, which evolve according to the usual Schrödinger equation and nothing more. The formalism predicts that there are many worlds, so we choose to accept that. This means that the part of reality we experience is an indescribably thin slice of the entire picture, but so be it. Our job as scientists is to formulate the best possible description of the world as it is, not to force the world to bend to our pre-conceptions.

Such brave declarations aren’t enough on their own, of course. The fierce austerity of EQM is attractive, but we still need to verify that its predictions map on to our empirical data. This raises questions that live squarely at the physics/philosophy boundary. Why does the quantum state branch into certain kinds of worlds (*e.g.*, ones where cats are awake or ones where cats are asleep) and not others (where cats are in superpositions of both)? Why are the probabilities that we actually observe given by the Born Rule, which states that the probability equals the wave function squared? In what sense are there probabilities *at all*, if the theory is completely deterministic?
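For reference, the Born Rule itself is easy to state operationally; here is a small sketch (my own, with arbitrarily chosen amplitudes) of probabilities as squared magnitudes:

```python
import numpy as np

# An unequal superposition: amplitude a for "up", b for "down",
# with |a|^2 + |b|^2 = 1.
a, b = np.sqrt(0.8), np.sqrt(0.2) * 1j
psi = np.array([a, b])

# Born Rule: probability of each outcome = |amplitude|^2.
probs = np.abs(psi) ** 2
print(probs)   # [0.8, 0.2]

# Simulated repeated measurements converge to those frequencies.
rng = np.random.default_rng(42)
samples = rng.choice(["up", "down"], size=100_000, p=probs)
print((samples == "up").mean())   # approximately 0.8
```

The Everettian puzzle is not how to compute these numbers but why squared amplitudes deserve to be called probabilities at all in a deterministic theory.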

These are the *serious* issues for EQM, as opposed to the silly one that “there are just too many universes!” The “why those states?” problem has essentially been solved by the notion of pointer states — quantum states split along lines that are macroscopically robust, which are ultimately delineated by the actual laws of physics (the particles/fields/interactions of the real world). The probability question is trickier, but also (I think) solvable. Decision theory is one attractive approach, and Chip Sebens and I are advocating self-locating uncertainty as a friendly alternative. That’s the subject of a paper we just wrote, which I plan to talk about in a separate post.

There are other silly objections to EQM, of course. The most popular is probably the complaint that it’s not falsifiable. That truly makes no sense. It’s trivial to falsify EQM — just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes. Witness a dynamical collapse, or find a hidden variable. Of course we don’t see the other worlds directly, but — in case we haven’t yet driven home the point loudly enough — those other worlds are not added on to the theory. They come out automatically if you believe in quantum mechanics. If you have a physically distinguishable alternative, by all means suggest it — the experimenters would love to hear about it. (And true alternatives, like GRW and Bohmian mechanics, are indeed experimentally distinguishable.)

Sadly, most people who object to EQM do so for the silly reasons, not for the serious ones. But even given the real challenges of the preferred-basis issue and the probability issue, I think EQM is way ahead of any proposed alternative. It takes at face value the minimal conceptual apparatus necessary to account for the world we see, and by doing so it fits all the data we have ever collected. What more do you want from a theory than that?

Is it your view that the many-worlds interpretation has become more popular in, say, the last 15 years?

What do you think of the cosmological interpretation of Aguirre and Tegmark?

So my guess is that you would object to the interpretations here (I may be misunderstanding the piece completely):

http://www.wired.com/2014/06/the-new-quantum-reality/

But if I understand it right it’s basically a hidden variables interpretation.

Occam’s Razor makes me agree with the MWI. It exorcises the measurement problem better than other convoluted explanations.

I understand how dynamical collapse is (hypothetically) experimentally distinguishable from Many-Worlds, but I don’t understand how Bohmian mechanics is. Could you explain?

I’m also curious what you think of recent arguments based on the PBR theorem that local psi-epistemic models fail.

I would love to read your thoughts about the Bohmian view, and especially that Wired article. I don’t know what to think about it. As a fluid mechanics guy myself, it’s definitely interesting, but I have to believe there is a better reason that pilot waves aren’t more talked about than what the article implied. My guess? If these pilot waves can affect trajectories of particles (i.e., stuff), then wouldn’t we expect to be able to detect them directly somehow?

So is time quantized, and quantum-mechanically indeterminate below some level?

Sean,

Since you’re being “polemical rather than conciliatory”, I hope you’ll let me raise one point…

I don’t see the pointer basis problem anywhere near being solved. AFAIK, the main argument in all proposed solutions (that I saw) is that the interactions are local in space, which provides one with a preferred basis (the eigenstates of the interaction Hamiltonian). IOW, the existence of the pointer basis is based on locality of interactions.

But interactions are local only until one quantizes gravity. As soon as you are allowed to construct a superposition of two spacetime manifolds (which is an essential feature of QG), locality of interactions goes out the window.

And once locality is gone, the pointer basis is also gone, and the measurement problem resurrects itself in its original form — there is no preferred way to split the wavefunction into branches.

The issue here is that most of the people who are researching this stuff do so in the non(general)relativistic approximation, i.e. they assume the existence of a flat Minkowski spacetime (or other fixed background), which implicitly gives them a well-defined notion of locality. They simply ignore quantum gravity issues.

So I’d say that the resolution of the pointer basis problem in EQM requires an additional postulate, much like in all other versions of QM. And the EQM folks just seem to be in denial of this problem, believing that they “solved” it, IMO.

Best,

Marko

Sean, is there any particular objection you have to the decoherent histories approach? This approach employs decoherence and the Copenhagen interpretation to provide a single logical framework for quantum and classical physics.

I like the Many-Worlds interpretation for its “economy” of formalism, i.e., you don’t need anything other than the raw, pragmatic equations of quantum mechanics. But decoherent histories seems to be a way to keep this economy of formalism without postulating an ensemble of deterministic realities.

I believe there are many silly interpretations in Quantum Theory, and agree there are many silly arguments for and against one version or another as Sean explains. What I have against EQM in particular is that I believe it is not in accord with Occam’s Razor (it may be the most complicated explanation) for what we observe in the quantum world. Still I applaud all attempt to logically justify any version of Quantum Theory.

Of the choices offered in the Quantum Smackdown, QBism seems like the simplest explanation to me, but simpler than this, IMO, would be a quantum theory involving local hidden variables. Although most think that local hidden variables are not possible based upon Bell’s logical argument, there are contrary arguments.

Today we have such hypotheses as dark matter, dark energy, gravitons, Higgs particles, quantum foam, virtual particles, etc. Any and all of these particulates/fields or other aether-like entities can be considered hidden variables within a background field. Any such particulate entities could affect interactions in the quantum world and IMO could enable a much simpler macro-world-like explanation of Quantum Mechanics compatible with classical physics.

Well done Sean. Simple fact: if quantum evolution is unitary and our models in science have more than mere instrumental value, like it or not, you’re stuck with many worlds. My prejudice is that, with regard to the Born rule, the proposal by Aguirre and Tegmark, the so-called Cosmological interpretation, offers some promise. I think you would call this branch counting. However, I am reading through the paper you wrote too. I look forward to reading your post on this, as well as to why MWI actually works. These posts are very much appreciated by me.

“The potential for multiple worlds is always there in the quantum state, whether you like it or not.”

That needs to be on a (large) bumper sticker.

Consistent (Decoherent) Histories is a totally instrumentalist interpretation; the way issues like quantum nonlocal correlations in measurement records are viewed is cringeworthy, in my opinion. However, the actual mathematical formalism of CH is, again in my opinion, very productive in illuminating the quantum measurement process. I have often wondered why advocates of many worlds haven’t embraced the CH formalism. Rather than being centered on endless world splitting, we might better think of unitary evolution, together with the decoherence process, as producing a large number of consistent histories. We can even quantify these histories, because in the CH mathematical formalism the total number of possible consistent histories is equal to the square of the Hilbert space dimension. We can further quantify the Hilbert space dimension via the holographic principle (’t Hooft dimensional reduction).

This raises an interesting issue. As proved by Dowker and Kent, the total number of consistent histories is far greater than the number of consistent histories which look like our classical world. Is this a failure of the consistency criteria or a fascinating fact about the product of the decoherence process? Gell-Mann and Hartle have argued that our sensory neurological capabilities evolved to only model reality in terms of predictive histories, which they argue are the quasi-classical histories we directly experience. Now if true, that’s really interesting.

I know this is a naive question, but: why is this a problem that has to be solved? Why can’t we deal with this the way we deal with lots of other strange things in physics, by saying that our physical intuitions don’t work well at scales very different from the ones they developed for, and leave it at that?

Put another way: you could say, “The formalism of QM says that macroscopic systems behave as if there were many worlds.” Or you could say, “The formalism of QM says that macroscopic systems behave as if there were many worlds — and there really are.” How is the second an improvement over the first? What does the claim that a hypothesis is “true” add to the claim that it is predictively successful, aesthetically satisfying, and productive of new insights?

“To describe measurements, we need to add an observer. It doesn’t need to be a ‘conscious’ observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the ‘apparatus.’”

I’ve never understood what this means in practice. Does this imply that “measurements” are taking places billions or trillions of times every second throughout the universe? And have been since the Big Bang? Is any interaction a measuring apparatus? I’m sure I’m missing something obvious. Maybe if I understood what would not be a measurement?

So, what happens to the law of conservation of mass-energy when another universe is spawned by peeking into the box?

A very nice description of the Many-worlds interpretation. I personally view Many-worlds interpretation as a candidate for reality, but I remain unconvinced that we’ve reached the point where we can move it from candidate to settled.

If we falsify the Schrodinger equation or superposition, we’ve falsified Many-worlds, but haven’t we also falsified every other interpretation? To consider Many-worlds falsifiable, don’t we need to be able to, at least in principle, uniquely falsify it?

Occam’s razor seems like a tough call on these interpretations. Is mathematical parsimony the same thing as ontological parsimony? I’m not sure of the answer, but dismissing the concern as silly seems unjustified.

I’m probably being dense, but there’s still one thing I don’t get about MW. I have never liked the Copenhagen interpretation because of its unjustified introduction of the concepts of “collapse” and “measurement”, without ever defining rigorously what a “measurement” is. How does MW solve this issue? The explanation here still contains words like “measurement” and “measuring apparatus” that is “prepared”, and so on. Is a proper understanding of decoherence required of the reader in order to understand that in MW these things are rigorously defined?

On a related note, is a “measurement” and the “creation” of two worlds in the MW sense a discrete event, or is it the case that the states drift apart little by little until they can no longer interfere? For the non-mathematician, does it make sense to think of decoherence as something that emerges from the sum of many interactions, much like pressure and temperature emerge from a gas of many particles, but are meaningless when discussing very few particles?


Sean,

Several remarks.

The first is that classical physics does indeed allow us to describe multiple worlds provided that we interpret classical probabilities according to something like David Lewis’s modal realism. When studying the evolution of classical probability distributions, all the states “are just there” in the formalism, so why not simply accept that they exist in reality, as one does in EQM?

My second remark is about axioms. All logical claims consist of premises, arguments that follow from those premises, and conclusions, and EQM is no different. Proponents often suggest that EQM doesn’t need as many axioms as the traditional interpretations. But the trouble with EQM is that although it seems at first like you don’t need very many axioms, the truth is that you do. Simply insisting that we don’t mess with the superposition rule isn’t enough. Quantum-information instrumentalism (say, QBism) doesn’t mess with superpositions either, and allows arbitrarily large systems to superpose. Declaring that we must interpret the elements of a superposition as physical parallel universes is therefore an affirmative, nontrivial axiom about the ontology of the theory, even if some people might regard it as an “obvious” axiom.

The pointer-variable argument implicitly assumes axioms as well. We have to declare that something singles out a preferred basis (for the cat, this means that we need to single out the alive vs. dead basis, rather than, say, the alive+dead vs. alive-dead basis). You can keep adding additional systems and environments, but at some point you have to declare that once you’ve added enough, you can shout “stop!” and pick your preferred basis. And what is our criterion for picking that basis? That’s going to be another axiom! And if you pick locality or something like that for your preferred-basis-selection postulate, you have to contend with the fact that locality may not be a fundamental feature of reality once we figure out quantum gravity; so if we do add locality as part of our axiom for picking the preferred basis, the EQM interpretation is now sensitive to features of quantum gravity that we don’t know yet.

Finally, are you assuming that there’s some big universal wave function that evolves unitarily? Given all we know about eternal inflation, is this a reliable assumption anymore? Even if you’re willing to accept it, it represents another axiom to add on.

The problem with EQM is that this process of adding axioms keeps going on (your “epistemic separability principle,” for example, is another axiom, and far from an obvious one!), and even then we still have to contend with the serious trouble of trying to make sense of the concept of probability starting from deterministic assumptions, a serious philosophical problem on par with the is-ought problem of Hume.

Dear Professor Carroll,

OK, I’m sure you’re not going to like this but seeing the state of the problem of QM meets GR, and the problem of time, I’ll put it out there anyway.

(People seem to say they want paradigm shifts, but don’t actually seem to like them at first sight).

I think that it makes sense in complicated matters to at least be sure our most basic assumptions are sound and logical. So…

The Many-Worlds formulation of QM, like any formulation has to, I assume, ultimately work with General Relativity, or some modification of it, such that the ‘suggested’ “problem of Time” is resolved.

Thus we might typically assume phenomena such as space-time, and retro-causality etc ultimately need to be accounted for and incorporated into any final resolution of GR and QM.

However, surely, GR, and the concept of space-time very much rests on Minkowski’s interpretation of SR, outlined in the famous quote

“The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

But if we look at “ON THE ELECTRODYNAMICS OF MOVING BODIES”, i.e. the heart of Relativity, Section 1 says… “[we must be…] quite clear as to what we understand by ‘time.’ We have to take into account that all our judgments in which time plays a part are always judgments of simultaneous events. If, for instance, I say, ‘That train arrives here at 7 o’clock,’ I mean something like this: ‘The pointing of the small hand of my watch to 7 and the arrival of the train are simultaneous events.’”

And the paper goes on from there… leading to SR and ultimately GR and the ‘space-time’ we assume QM happens in, etc.

But surely all that is described in this key section of such an important paper is the fact that aggregates of matter (be it a train on a track, or a motorised hand on a numbered dial) can exist, and be moving or stationary, and their speeds and/or locations can be compared.

Critically (and this is the bit you’re not going to like, but that may also be very important, imo), the paper simply ‘declares’ that the motion of the hand, smoothly rotating, in one direction, on a numbered dial, not only shows that ‘a hand can rotate’, but (apparently) also that a thing called ‘time’ exists in some form, and is ‘passing’ smoothly, in one direction.

Similarly, by the word ‘simultaneous’, it is implied but not proven (to any degree at all here) that ‘time’, and ‘different times’, exist. But surely the train, the ‘hand’, and anything else in the universe is always just ‘somewhere’ doing ‘something’ (though it may be doing this something at a dilated rate).

My point being: if QM has to work with GR, and GR is built on assumptions made in and about SR, can anyone please actually show where in ‘ELECTRODYNAMICS’, or anywhere else in Einstein’s work, the existence of time, and/or past, or future is actually proven to any level? Rather than just assumed in, (with respect) a rather weak manner, but built on as if proven.

If no one can point to an actual proof of time in this context, then surely SR essentially just shows how, if moving at a significant velocity relative to an observer, the rate at which a moving oscillator vibrates is unexpectedly dilated. And very specifically, not (imo) that a thing called time exists. And not that a past or future exist. And not that the rate at which a thing called ‘time’ flows between a past and future is dilated. And thus also not that the concept of space-time is valid.

If this is the case then surely we have no actual valid reason to suspect 4d ‘space-time’ exists. Instead GR may need only describe how ‘3d’ warped space exists, and how velocity, gravity, and acceleration can dilate rates of change, and warp and curve space (e.g. even back on itself)… but all just ‘now’, so to speak.

If my analysis is completely wrong, could someone here please cut/paste a clip showing Einstein’s reasons/proof for assuming the ‘watch’ hand does not just rotate, but also relates to the passage of a thing called time (or anyone else’s proof, rather than unchecked assumption)?

i.e. Minkowski’s quote clearly references “the soil of experimental physics, and therein lies their strength”. So can someone please point to the experimental physics that Minkowski claims proves a watch hand is not just a motorised hand rotating, but actually shows the existence, and passage, of ‘time’?

(I’m sure this sounds like a very naive request to yourself, but then it should likewise be very easy to do. My point being every expert, and many theories, seem to insist Einstein’s Relativity proves space-time, and thus proves ‘time’, but all I see at the start of ELECTRODYNAMICS, re time, is an assumption that because things move, time, and a past and future, exist.)

I know you’re a big fan of entropy, but without a proof of time, surely entropy is just the observable fact that the universe is expanding… and not a proof that a past or future exist, and thus also not a proof that time exists or flows between these unproven ‘places’, with or without an ‘arrow’.

Like I say, I assume you’re not going to like this post, but rather than just ignore it or delete it etc., I’d really appreciate it if you could at least actually post a link to the established proof that past/future/time exist.

Many thanks, Sincerely,

Matthew Marsden.

I have read recently about the bouncing oil-droplet experiment, which may indicate that reality prefers pilot-wave theory. Shouldn’t it be considered seriously now?

Great article Sean.

I definitely tend to lean towards the MWI as well. I was wondering if anyone had seen the recent binge of great articles on a few other interpretations of QM:

Article on the Bohm Interpretation (which I’m also sympathetic with):

http://www.simonsfoundation.org/quanta/20140624-fluid-tests-hint-at-concrete-quantum-reality/

Sciam blog highlights objective collapse theory:

http://blogs.scientificamerican.com/critical-opalescence/2014/06/26/physicists-think-they-can-solve-the-mysteries-of-quantum-mechanics-cosmology-and-black-holes-in-one-go-guest-post/

Both of the above have the benefit of being realist (with respect to the wavefunction), but I don’t think the objective collapse theories’ indeterminism gets them as far as MWI or pilot-wave theory, both of which are fully deterministic.

Any thoughts from Sean or anyone else?

Philosophically it is an interesting question whether multiple worlds are to be considered real and not just a mathematical construction. Does this place a restriction on proving there is no free will? Outcomes are deterministic (by assumption), but no prediction is exact. When weighed across all possible outcomes, even a relatively simple system with only about 60 spins becomes impossible to predict, which is functionally equivalent to not knowing the future, and therefore provides a proof that free will exists for anyone who cares enough about predicting the future, i.e., sentient beings.
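For scale, the 60-spin example above can be made concrete: the Hilbert space of n spin-1/2 particles has dimension 2^n, so a minimal sketch of the count is:

```python
# Dimension of the Hilbert space for n spin-1/2 particles grows as 2**n.
n_spins = 60
dim = 2 ** n_spins
print(dim)  # 1152921504606846976, i.e. about 1.15e18 basis states
```

Roughly a billion billion basis states, which is the sense in which even this "simple" system is unpredictable in practice.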

Hi Bob,

What is it about the consistent histories description of correlation that you find cringe-worthy? In the CH interpretation, all correlations arise from local interactions, and do not imply any non-local interactions.

Do you have a link to the studies which show there are more consistent histories than consistent histories that look classical? I was under the impression that, because the probability of a history is derived from the projection operators used to construct the history, all non-classical histories would have zero probability.

Here’s the thing: If you assume that QM is correct, then it may be true that the MWI is the “best/simplest” explanation.

But you have to assume that we really understand what is going on in QM, and I think it’s clear we do not. It seems much more likely from my perspective that we are missing something fundamental.

All this talk of Many Worlds, while it could be true, is an idea gone too far, assuming too much, and it will be just plain unnecessary when we finally do understand what’s actually going on in QM.

Professor Carroll, I like your argument that MWI is the interpretation of QM with the least baggage, but I believe the ensemble interpretation has even less baggage, as it doesn’t assume the wavefunction ontologically represents any individual system, but instead an abstraction of that system. This seems more in line with mathematics as a description and not something fundamental in the universe; the latter position, I feel, carries more baggage. What are your reasons for preferring the Everett many-worlds interpretation to the ensemble interpretation?

I figure I’ll add this: (I did some searching and apparently Roger Penrose already made almost the exact same argument as I just did in my last post).

“You want a physical theory that describes the world that we see around us. … Many worlds quantum mechanics doesn’t do that. Either you accept it and try to make sense of it, which is what a lot of people do, or, like me, you say no—that’s beyond the limits of what quantum mechanics can tell us. Which is, surprisingly, a very uncommon position to take. My own view is that quantum mechanics is not exactly right, and I think there’s a lot of evidence for that.”

Daniel,

An interesting paper on the ensemble interpretation.

http://arxiv.org/abs/1308.5290

So I speak from total ignorance of the actual meaning of the Copenhagen Interpretation, but it seems there is at least one question I have never seen discussed. Suppose *two* observers. Assume an observation is the receipt of a photon from the superposition in question. Assume (of course) the photons land on the retinas of the observers in question at exactly the same time (and put them in the same relativistic frame of reference to get that out of the way). So:

* We are done. In which case, which observation ‘wins’?

* We are not done. Some Na/K channeling starts happening in each observer’s brain. At some point the brains make an ‘observation’… I’ve no idea what that means, but I can certainly assert (in this case) that it happens at the same time. Now, what happens to the collapsing wave function? Do both brains agree on the same outcome? Is this a 50/50 thing? If so, why (and where can I find a statement of this tucked inside Schrödinger’s equation)?

Colour me confused …

Whether you bring in a conscious observer or replace him/her by a machine, the very idea that an experiment, which depends ultimately on an arbitrary human judgment (whether to do this experiment today or not), results in split universes is metaphysical at best. It is amusing that this idea comes from Sean, who does not believe in religion or metaphysics. Why not be honest and say, like Feynman, that we do not understand quantum mechanics? Period! This is nothing but a copout. It should not be called an explanation. I would rather believe in a multiverse coming out of chaotic inflation than an arbitrary number of universes brought about by human experimenters acting on an intrinsically probabilistic natural system.

No physical theory has proven itself to be correct – to think that QM is any different is perhaps foolish.

Compare with Newton’s gravity, where the theory is very useful and can still be used for predictions of satellite orbits, whereas it’s conceptually completely wrong, with its action at a distance and other problems.

Is MWT like satellite-orbit prediction? In other words, can it still be true if QM is wrong? For many of the alternative theories, where collapse is a real phenomenon (connected to a non-linearity, etc.), the answer is no; MWT would die along with QM in this case.

Are there _any_ ways to replace QM with a deeper theory so that the MWT could remain?

Contrast the MWT with the distribution in results of an experiment – features like this would obviously survive any new theory, as the experimental record is clear.

It’s clear that the ‘phase space’ of replacement theories for QM that allow MWT has to be much smaller than that of theories that merely predict experimental outcomes.

Where are all the other universes (especially the ones in which women can’t keep their hands off me)? Are they outside our universe and hence are inaccessible to us?

Are new universes being created all the time or were they all created at the time of our Big Bang?

Thanks

Sean, after producing an excellent recent paper on fine tuning in the early universe, why have you endorsed the silliest theory ever created by the mind of man (the Everett multiple worlds theory, which is more properly called the “infinite duplication theory” or the “infinite excess baggage theory”)? I find it hilarious that putative rationalists such as you do debates arguing against life after death (on the basis of parsimony), and then you go and endorse a theory which is the worst violation of Occam’s Razor in the history of human thinking. You’ve jumped the shark, Sean. Which is a pity, because I was citing some of your recent work approvingly. No person who endorses the Everett multiple worlds theory has any business pretending to be more rational than the flakiest astrologer. Please explain at your next debate on the existence of God or the afterlife that you believe in something 1,000,000,000,000,000,000,000,000,000,000 times more extravagant (and vastly less verifiable) than either hypothesis.

http://www.futureandcosmos.blogspot.com/2013/08/you-are-only-you-no-evidence-for.html

My sincere best wishes, and please disassociate yourself from this mental illness that is the Everett theory. Is it any wonder why people reject science when physicists are endorsing this type of nonsense?

Sean, I am happy to believe many-worlds if all that is at stake is the definition of the word “exist”, which is likely ambiguous in this context anyway. But I am uncomfortable with what seems to me a description too closely tied to single-particle QM. For example, most descriptions I see talk about a split between a finite number of options following a measurement arbitrarily localized in space and time. Is there a way to tell the story of universes “splitting” for measurements of operators with a continuous spectrum, like most interesting operators in quantum field theory? Do we get a continuous infinity of universes, and if so, is there some regularized version and some sense of cut-off independence?

Alanl,

I don’t think your description was complete enough for an answer, so I will assume the following.

A photon is on its way through an apparatus such that it will either strike the retina of one observer or the other (with a nonzero probability for either outcome). Its wavefunction can be expanded in the following complete basis:

{|photon strikes observer A> , |photon strikes observer B>}

According to the Copenhagen interpretation, the wavefunction of the photon is a tool that encodes the probability of the photon hitting either observer. Both observers will agree on the outcome of the experiment. The “collapse” of the wavefunction is not physical. It is instead an “update” as the observers measure the photon.

According to the Many-World interpretation, there is a universe where the photon strikes observer A, and there is a universe where the photon strikes observer B. The wavefunction does not merely represent a recipe for calculating probabilities. It instead is a more direct description of a reality that includes these two universes.
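As a minimal numerical sketch of the Copenhagen reading above (the amplitudes here are hypothetical, chosen only for illustration): expanding the photon state in the {|strikes A>, |strikes B>} basis, the Born rule gives each outcome’s probability as the squared modulus of its amplitude, and both observers agree on whichever outcome occurs.

```python
import math

# Hypothetical amplitudes for |psi> = a|strikes A> + b|strikes B>,
# chosen so the state is normalised: |a|**2 + |b|**2 == 1.
a = complex(1 / math.sqrt(3), 0)
b = complex(0, math.sqrt(2 / 3))

# Born rule: the probability of each outcome is the squared modulus
# of its amplitude.
p_A = abs(a) ** 2
p_B = abs(b) ** 2

print(round(p_A, 3), round(p_B, 3))  # 0.333 0.667
assert math.isclose(p_A + p_B, 1.0)
```

On the Everettian reading the same two numbers are reinterpreted as the weights of the two branches rather than as probabilities of a single actualized outcome.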

People do not like the Copenhagen interpretation because it does not include a description of the “actualization” of an experiment. Instead, quantum mechanics is seen only as “a matter of relations between phenomena and observations of their frequencies”[1]: a theory that gives a priori probabilities of experimental outcomes, but does not describe how all but one fade to zero.

I would not strictly agree with Sean’s claim that the MW interpretation is the most economical interpretation. I would instead say it is the most economical interpretation that also includes an ontology for actualisation (namely, all possibilities are real, and persist). But I am not convinced quantum mechanics owes us such an ontology. Furthermore, I am not sure it is wise to infer an ontology from our methods of calculation in a regime that is clearly alien to our classical sensibilities. The MWI offers a description of reality independent of experimental phenomena, an ensemble of deterministically evolving possibilities, each of which is as real as the others, but we have no reason to insist such a description of reality must exist.

[1] http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.64.339

Dear Sean, I don’t see how postulating a vast number of (hidden) worlds, with additional postulations on how the worlds split, is simpler and better than just postulating a single pilot wave as Bohm suggested.

Shodan says:

June 30, 2014 at 4:43 pm

Hi Bob,

What is it about the consistent histories description of correlation that you find cringe-worthy? In the CH interpretation, all correlations arise from local interactions, and do not imply any non-local interactions.

——————————————————————————

This is what I find cringe-worthy:

Is quantum mechanics nonlocal?

This depends on what one means by “nonlocal.” Two separated quantum systems A and B can be in an entangled state that lacks any classical analog. However, it is better to think of this as a nonclassical rather than as a nonlocal state, since doing something to system A cannot have any influence on system B as long as the two are sufficiently far apart. In particular, quantum theory gives no support to the notion that the world is infested by mysterious long-range influences that propagate faster than the speed of light. Claims to the contrary are based upon inconsistent or inadequate formulations of quantum principles, typically with reference to measurements. (Also see measurements, Einstein-Podolsky-Rosen.)

http://quantum.phys.cmu.edu/CHS/quest.html#nonlocal

_________________________________

We all know you can’t use the measurement correlations of entangled particles to send information faster than the speed of light, but that doesn’t mean there isn’t a non-local process involved. That’s what Bell’s theorem proved, by the failure to get the predicted inequality in the measurement records. The above just wishes the problem away.
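A minimal sketch of the violation Bob refers to (an idealised CHSH calculation, not a model of any particular experiment): for a spin singlet, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between measurement angles, and the standard CHSH angle choices push the combination S to 2√2 ≈ 2.83, beyond the bound of 2 that any local hidden-variable theory must obey.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for spin measurements at angles a and b
    # on a singlet state.
    return -math.cos(a - b)

# Standard CHSH angle choices (in radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2.828..., i.e. 2*sqrt(2), exceeding the classical bound of 2
```

Whether this forces non-local *processes* or merely rules out local hidden variables is exactly the point under dispute in the surrounding comments.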

Shodan, that was a very good recommendation; it was a nice summary of the ensemble interpretation. Reading it, I think you’d get a good sense of why I prefer it over many worlds and why I think it’s the more philosophically “simple” interpretation.

I agree with you in your separate comment that I would not assign the wavefunction any fundamental or ontological meaning nor do I believe quantum mechanics on its own implies that we should.

I would also like to add, as a separate point, that many worlds does not imply that a branching mechanism occurs, nor that one is necessary. There are simply many universes of potential outcomes, some of which share a history up until one such branching point. The relative number (technically measure, not number) of universes with the same outcome for a given event (assuming shared history up until that point) is equal to the probability of an observer being in that particular universe.

Shodan writes

Do you have a link to the studies which show there are more consistent histories than consistent histories that look classical? I was under the impression that, because the probability of a history is derived from the projection operators used to construct the history, all non-classical histories would have zero probability

______________________________

Yes here’s the reference

Properties of Consistent Histories

Fay Dowker, Adrian Kent

(Submitted on 17 Sep 1994 (v1), last revised 30 Jan 1996 (this version, v4))

Here is a response to this.

Equivalent Sets of Histories and Multiple Quasiclassical Realms

Murray Gell-Mann, James B. Hartle (Santa Fe Institute, Los Alamos, and University of New Mexico)

(Submitted on 8 Apr 1994 (v1), last revised 5 May 1996 (this version, v3))

Bob,

I would emphatically object to the insistence that the interpretation involves some ambiguous non-local “process”. The CH interpretation explicitly rejects the attachment of any physical process to the reduction of the wavefunction. Bell’s theorem did not prove the existence of non-local processes. Instead, it proved that any realist interpretation of QM must involve non-local processes. (See section 8 of this paper: http://arxiv.org/pdf/1308.5290v2.pdf )

Daniel,

I completely agree. The “branching” of worlds is effectively an increase in Hamming distance that occurs via decoherence, a unitary process. No new mechanism is postulated.

Shodan says:

June 30, 2014 at 10:17 pm

Bob,

I would emphatically object to the insistence that the interpretation involves some ambiguous non-local “process”. The CH interpretation explicitly rejects the attachment of any physical process to the reduction of the wavefunction. Bell’s theorem did not prove the existence of non-local processes. Instead, it proved that any realist interpretation of QM must involve non-local processes. (See section 8 of this paper: http://arxiv.org/pdf/1308.5290v2.pdf )

____________________________

I think logic compels us to view the failure of the correlated measurement record for the entangled particles to satisfy the inequality devised by Bell as forcing us to accept that a measurement of one particle affects the measurement result of the other instantly. The whole premise of Bell’s inequality is based on the absence of non-local influences. Of course, since these measurement events are space-like separated, which particle affects the other is frame-dependent, but I don’t think this provides an escape from accepting a non-local effect at work in QM.

Bob,

I wouldn’t take action at a distance to be the necessary conclusion of Bell’s inequality, as a violation of the inequality can be derived from experiments which take place only in unitary representations of SU(2), which as a group contains no intrinsic concept of locality.


Bob,

I would instead say the logic compels us to accept either-or. If we reject a realist interpretation of QM, then any obligation to suppose non-local interactions leaves with it.

Bell’s theorem is

“No physical theory of local hidden variables can ever produce all of the predictions of QM”[1]

which is very different from

“No local theory can ever produce all of the predictions of QM”

In fact, I have previously come across a Nature paper claiming the choice is even more restricted, and non-local realism must be rejected[2] (Though I have not thoroughly read it yet.)

[1]http://en.wikipedia.org/wiki/Bell%27s_theorem

[2]http://arxiv.org/abs/0704.2529

Shodan,

sorry I wasn’t too clear — I guess that is the nature of this beast, but you say:

“I don’t think your description was complete enough for an answer, so I will assume the following.

A photon is on its way through an apparatus such that it will either strike the retina of one observer or the other (with a nonzero probability for either outcome). ”

My point (question, whatever) is that it seems to me that all Copenhagen interpretations begin this way: ONE observer, one photon. But our putative S. Cat reflects many photons when the box is open, so certainly there must be cases where two photons (which somehow carry the notion of an observation) hit whatever they have to hit in two different observers (in the same frame of reference) to become an ‘observation’ at exactly the same time, albeit in two different observers. So, do they agree on the outcome? It would seem so in the standard interpretation, but I cannot fathom why this should be so (50/50 independently for each observer, surely). If one observer got the photon even slightly before the other, then I guess he takes the other’s superposition down with him, but in a spot-on-dead-even tie (don’t know what that means, but anyway) either:

* They both agree (and then what happened to 50/50 independently?), or

* They both have their own interpretations of the result, and in the 50% of the cases where they disagree the universe forks

I’m not sure I like either of these ..

The reason why I don’t believe in the many-worlds interpretation is that I don’t see it around me. The universe I observe is the universe where all quantum effects cancel out nicely. If that interpretation is true, there are countless universes where 3D cinemas went bankrupt because polarizing glasses stopped working, because the photoreceptor cells in my eyes are measuring all those photons coming through them one after another. And despite the sheer size of that ‘true path’ universe set where things go on fine, I just refuse to accept the idea that somewhere out there is a copy of me unable to watch 3D movies because the left 3D-glasses filter stays mostly dark all the time for no apparent reason, just to make our universe complete.

Dear Sean,

Thank you for your clear article on EQM. There are some things on which I would really like some more exposition:

“We wouldn’t think of our pre-measurement state (1) as describing two different worlds; it’s just one world, in which the particle is in a superposition. But (2) has two worlds in it. The difference is that we can imagine undoing the superposition in (1) by carefully manipulating the particle, but in (2) the difference between the two branches has diffused into the environment and is lost there forever.”


There seems to be some additional postulate (physical or at least philosophical) to the quantum formalism that you introduce here. In principle, if the superposition is diffused into the environment then, due to the unitary nature of quantum mechanics, the inverse time evolution should always be possible. The off-diagonal terms in the density matrix will never be truly zero, just zero for all practical purposes (decoherence). (Similar to discussions of irreversibility in statistical mechanics.) In other words, according to quantum mechanics two universes should sometimes be able to “fuse” back into a superposition. Furthermore, there seems to be no rigorous point at which one can say a new branch universe has been created, if mathematically one can only say that, on a local level, it seems as though the two possibilities of a subsystem of our universe become increasingly less correlated.

To me it seems that at some point, when superpositions become very non-local and extended throughout the environment, you suddenly postulate different universes! I think this does not make sense. You say that because of decoherence they become independent; however, this is only for all practical purposes, and only on the local level does it seem this way. Surely you must agree that the complete superposition is still out there in this single universe?

Yours sincerely,

Jasper

My apologies for the problems with emphasis in the reply above.
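Jasper’s point about the off-diagonal terms can be illustrated with a toy calculation (the per-qubit overlap of 0.9 is a made-up number, purely for illustration): when a system qubit couples to n environment qubits that each imperfectly record its state, the coherence term in the reduced density matrix is suppressed by the overlap of the two environment states, which factorises into a per-qubit overlap raised to the power n.

```python
import math

# Toy model of decoherence: a system qubit in (|0> + |1>)/sqrt(2) couples to n
# environment qubits, each imperfectly recording the system state. The reduced
# density matrix's off-diagonal term is suppressed by |<E_0|E_1>|, which
# factorises as overlap_per_qubit**n for product environment states.
overlap_per_qubit = 0.9  # hypothetical per-qubit overlap, |<e0|e1>| < 1

for n in (1, 10, 100, 1000):
    off_diagonal = 0.5 * overlap_per_qubit ** n
    print(n, off_diagonal)

# The coherence term shrinks exponentially with environment size but never
# reaches exactly zero, which is Jasper's "for all practical purposes" point.
assert 0 < 0.5 * overlap_per_qubit ** 1000
```

The suppression is exponential in environment size, so "branching" is a matter of degree rather than a sharp event, exactly the worry raised above.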

A question which really puzzles me about the Everett interpretation: can probabilities of superposed states be irrational, for example 1/π? If so, how will the universe split into a distinct number of sub-universes? Perhaps my premise is incorrect, I’m not sure; any comments much appreciated. Thanks.
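Nothing in the formalism prevents irrational Born probabilities; a quick sketch (the amplitudes are invented for illustration): a normalised two-outcome state can give one outcome a weight of exactly 1/π, which is one reason Everettians treat branch weight as a continuous measure rather than a literal count of sub-universes.

```python
import math

# A hypothetical normalised two-outcome state with amplitudes chosen so the
# first outcome's Born probability is exactly 1/pi (an irrational number).
a = math.sqrt(1 / math.pi)
b = math.sqrt(1 - 1 / math.pi)

p1 = a ** 2
p2 = b ** 2
print(p1)  # 0.3183..., i.e. 1/pi (up to floating point)
assert math.isclose(p1 + p2, 1.0)
```

On this view the question "how many universes?" is replaced by "how much weight does each branch carry?", and weights can take any real value in [0, 1].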