I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here’s an explanatory video, an excerpt from *From Eternity to Here*, and slides from a talk. But I don’t think I’ve ever explained as persuasively as possible why I think it’s the right approach. So that’s what I’m going to try to do here. Although to be honest right off the bat, I’m actually going to tackle a slightly easier problem: explaining why the many-worlds approach is *not completely insane*, and indeed quite natural. The harder part is explaining why it actually *works*, which I’ll get to in another post.

Any discussion of Everettian quantum mechanics (“EQM”) comes with the baggage of pre-conceived notions. People have heard of it before, and have instinctive reactions to it, in a way that they don’t to (for example) effective field theory. Hell, there is even an app, Universe Splitter, that lets you create new universes from your iPhone. (Seriously.) So we need to start by separating the silly objections to EQM from the serious worries.

The basic silly objection is that EQM postulates too many universes. In quantum mechanics, we can’t deterministically predict the outcomes of measurements. In EQM, that is dealt with by saying that *every measurement outcome “happens,”* but each in a different “universe” or “world.” Say we think of Schrödinger’s Cat: a sealed box inside of which we have a cat in a quantum superposition of “awake” and “asleep.” (No reason to kill the cat unnecessarily.) Textbook quantum mechanics says that opening the box and observing the cat “collapses the wave function” into one of two possible measurement outcomes, awake or asleep. Everett, by contrast, says that the universe splits in two: in one the cat is awake, and in the other the cat is asleep. Once split, the universes go their own ways, never to interact with each other again.

And to many people, that just seems like too much. Why, this objection goes, would you ever think of inventing a huge — perhaps infinite! — number of different universes, just to describe the simple act of quantum measurement? It might be puzzling, but it’s no reason to lose all anchor to reality.

To see why objections along these lines are wrong-headed, let’s first think about classical mechanics rather than quantum mechanics. And let’s start with one universe: some collection of particles and fields and what have you, in some particular arrangement in space. Classical mechanics describes such a universe as a point in phase space — the collection of all positions and velocities of each particle or field.

What if, for some perverse reason, we wanted to describe two copies of such a universe (perhaps with some tiny difference between them, like an awake cat rather than a sleeping one)? We would have to double the size of phase space — create a mathematical structure that is large enough to describe both universes at once. In classical mechanics, then, it’s quite a bit of work to accommodate extra universes, and you better have a good reason to justify putting in that work. (Inflationary cosmology seems to do it, by implicitly assuming that phase space is already infinitely big.)

That is *not what happens in quantum mechanics*. The capacity for describing multiple universes is *automatically there*. We don’t have to add anything.

The reason why we can state this with such confidence is because of the fundamental reality of quantum mechanics: the existence of superpositions of different possible measurement outcomes. In classical mechanics, we have certain definite possible states, all of which are directly observable. It will be important for what comes later that the system we consider is microscopic, so let’s consider a spinning particle that can have spin-up or spin-down. (It is directly analogous to Schrödinger’s cat: cat=particle, awake=spin-up, asleep=spin-down.) Classically, the possible states are

“spin is up”

or

“spin is down”.

Quantum mechanics says that the state of the particle can be a superposition of both possible measurement outcomes. It’s not that we don’t know whether the spin is up or down; it’s that it’s really in a superposition of both possibilities, at least until we observe it. We can denote such a state like this:

(“spin is up” + “spin is down”).

While classical states are points in phase space, quantum states are “wave functions” that live in something called Hilbert space. Hilbert space is very big — as we will see, it has room for lots of stuff.
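To make “wave functions in Hilbert space” a bit more concrete, here is a minimal numpy sketch. The vector encoding below is my own toy choice, not anything from the text: a two-state system is just a two-component vector, and a superposition is literally a sum of the two basis vectors.

```python
import numpy as np

# Toy basis vectors for the two classical possibilities.
up = np.array([1.0, 0.0])    # "spin is up"
down = np.array([0.0, 1.0])  # "spin is down"

# The superposition ("spin is up" + "spin is down"), normalized so the
# squared amplitudes sum to 1.
superposition = (up + down) / np.sqrt(2)
print(superposition)  # both components equal to 1/sqrt(2)
```

The normalization factor matters later: squared amplitudes are what will play the role of probabilities.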

To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the “apparatus.”

In any formulation of quantum mechanics, the apparatus starts in a “ready” state, which is a way of saying “it hasn’t yet looked at the thing it’s going to observe” (*i.e.*, the particle). More specifically, the apparatus is not entangled with the particle; their two states are independent of each other. So the quantum state of the particle+apparatus system starts out like this:

(“spin is up” + “spin is down” ; apparatus says “ready”) (1)

The particle is in a superposition, but the apparatus is not. According to the textbook view, when the apparatus observes the particle, the quantum state collapses onto one of two possibilities:

(“spin is up”; apparatus says “up”)

or

(“spin is down”; apparatus says “down”).

When and how such collapse actually occurs is a bit vague — a huge problem with the textbook approach — but let’s not dig into that right now.

But there is clearly another possibility. If the particle can be in a superposition of two states, then so can the apparatus. So nothing stops us from writing down a state of the form

(spin is up ; apparatus says “up”)

+ (spin is down ; apparatus says “down”). (2)

The plus sign here is crucial. This is not a state representing one alternative or the other, as in the textbook view; it’s a superposition of both possibilities. In this kind of state, the spin of the particle is entangled with the readout of the apparatus.
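The difference between (1) and (2) can be made quantitative. In the sketch below (my own encoding; the three-state “apparatus” is an illustrative simplification), state (1) factorizes into a particle part times an apparatus part, while state (2) does not; that failure to factorize is exactly what “entangled” means.

```python
import numpy as np

# Particle basis: up/down. Toy apparatus basis: "ready", "says up", "says down".
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready = np.array([1.0, 0.0, 0.0])
says_up = np.array([0.0, 1.0, 0.0])
says_down = np.array([0.0, 0.0, 1.0])

# State (1): a product state -- particle superposed, apparatus independent.
state1 = np.kron((up + down) / np.sqrt(2), ready)

# State (2): an entangled superposition -- each spin paired with its readout.
state2 = (np.kron(up, says_up) + np.kron(down, says_down)) / np.sqrt(2)

# Entanglement shows up as Schmidt rank > 1 of the reshaped state vector.
print(np.linalg.matrix_rank(state1.reshape(2, 3)))  # 1: factorizable
print(np.linalg.matrix_rank(state2.reshape(2, 3)))  # 2: entangled, two branches
```

Rank 1 means the joint state is “particle state times apparatus state”; rank 2 means no such factorization exists, no matter how cleverly you choose the factors.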

What would it be like to live in a world with the kind of quantum state we have written in (2)? It might seem a bit unrealistic at first glance; after all, when we observe real-world quantum systems it always *feels like* we see one outcome or the other. We never think that we ourselves are in a superposition of having achieved different measurement outcomes.

This is where the magic of decoherence comes in. (Everett himself actually had a clever argument that didn’t use decoherence explicitly, but we’ll take a more modern view.) I won’t go into the details here, but the basic idea isn’t too difficult. There are more things in the universe than our particle and the measuring apparatus; there is the rest of the Earth, and for that matter everything in outer space. That stuff — group it all together and call it the “environment” — has a quantum state also. We expect the apparatus to quickly become entangled with the environment, if only because photons and air molecules in the environment will keep bumping into the apparatus. As a result, even though a state of this form is in a superposition, the two different pieces (one with the particle spin-up, one with the particle spin-down) will never be able to interfere with each other. Interference (different parts of the wave function canceling each other out) demands a precise alignment of the quantum states, and once we lose information into the environment that becomes impossible. That’s decoherence.

Once our quantum superposition involves macroscopic systems with many degrees of freedom that become entangled with an even-larger environment, the different terms in that superposition proceed to evolve completely independently of each other. *It is as if they have become distinct worlds* — because they have. We wouldn’t think of our pre-measurement state (1) as describing two different worlds; it’s just one world, in which the particle is in a superposition. But (2) has two worlds in it. The difference is that we can imagine undoing the superposition in (1) by carefully manipulating the particle, but in (2) the difference between the two branches has diffused into the environment and is lost there forever.
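The claim that the two branches “can never interfere” can be sketched with a reduced density matrix (a toy model of my own, not math from the post): interference lives in the off-diagonal terms, and those vanish once the environment holds a record of which branch you are on.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def reduced_density_matrix(env_up, env_down):
    """Build (up x env_up + down x env_down)/sqrt(2), trace out the environment."""
    psi = (np.kron(up, env_up) + np.kron(down, env_down)) / np.sqrt(2)
    psi = psi.reshape(2, len(env_up))  # rows: system, cols: environment
    return psi @ psi.conj().T          # partial trace over the environment

# No record in the environment: both branches drag along the same env state.
coherent = reduced_density_matrix(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
# Decohered: the environment states are orthogonal (a perfect which-branch record).
decohered = reduced_density_matrix(np.array([1.0, 0.0]), np.array([0.0, 1.0]))

print(coherent)   # off-diagonal 0.5 entries: the branches can still interfere
print(decohered)  # diagonal only: the branches evolve independently
```

Real environments have vastly more states than this two-level toy, which is why the overlap between the environment records decays so fast and so irreversibly.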

All of this exposition is building up to the following point: in order to describe a quantum state that includes two non-interacting “worlds” as in (2), *we didn’t have to add anything at all* to our description of the universe, unlike the classical case. All of the ingredients were already there!

Our only assumption was that the apparatus obeys the rules of quantum mechanics just as much as the particle does, which seems to be an extremely mild assumption if we think quantum mechanics is the correct theory of reality. Given that, we know that the particle can be in “spin-up” or “spin-down” states, and we also know that the apparatus can be in “ready” or “measured spin-up” or “measured spin-down” states. And if that’s true, the quantum state has the built-in ability to describe superpositions of non-interacting worlds. Not only did we not need to add anything to make it possible, we had no choice in the matter. **The potential for multiple worlds is always there in the quantum state, whether you like it or not.**

The next question would be, do multiple-world superpositions of the form written in (2) ever actually come into being? And the answer again is: yes, automatically, without any additional assumptions. It’s just the ordinary evolution of a quantum system according to Schrödinger’s equation. Indeed, the fact that a state that looks like (1) evolves into a state that looks like (2) under Schrödinger’s equation is what we mean when we say “this apparatus measures whether the spin is up or down.”
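Under these assumptions, the evolution from (1) to (2) is just a unitary matrix acting on the joint state. A minimal sketch, using a CNOT-style interaction and simplifying the apparatus to two states (so that, in this toy, “ready” doubles as “says up”):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready = np.array([1.0, 0.0])  # apparatus: |0> = "ready"/"says up", |1> = "says down"

# CNOT-style unitary: if the spin is down, flip the apparatus; if up, leave it.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

state1 = np.kron((up + down) / np.sqrt(2), ready)  # pre-measurement state (1)
state2 = U @ state1                                # post-measurement state (2)
print(state2)  # (1/sqrt(2)) * (|up,"up"> + |down,"down">): spin and readout entangled
```

Nothing beyond Schrödinger-style unitary evolution went into this: the “splitting” is just what a measurement-type interaction does to the joint state.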

The conclusion, therefore, is that multiple worlds automatically occur in quantum mechanics. They are an inevitable part of the formalism. The only remaining question is: what are you going to do about it? There are three popular strategies on the market: anger, denial, and acceptance.

The “anger” strategy says “I hate the idea of multiple worlds with such a white-hot passion that I will *change the rules of quantum mechanics* in order to avoid them.” And people do this! In the four options listed here, both dynamical-collapse theories and hidden-variable theories are straightforward alterations of the conventional picture of quantum mechanics. In dynamical collapse, we change the evolution equation, by adding some explicitly stochastic probability of collapse. In hidden variables, we keep the Schrödinger equation intact, but add new variables — hidden ones, which we know must be explicitly non-local. Of course there is currently zero empirical evidence for these rather *ad hoc* modifications of the formalism, but hey, you never know.

The “denial” strategy says “The idea of multiple worlds is so profoundly upsetting to me that I will *deny the existence of reality* in order to escape having to think about it.” Advocates of this approach don’t actually put it that way, but I’m being polemical rather than conciliatory in this particular post. And I don’t think it’s an unfair characterization. This is the quantum Bayesianism approach, or more generally “psi-epistemic” approaches. The idea is to simply deny that the quantum state represents anything about reality; it is merely a way of keeping track of the probability of future measurement outcomes. Is the particle spin-up, or spin-down, or both? Neither! There is no particle, there is no spoon, nor is there the state of the particle’s spin; there is only the probability of seeing the spin in different conditions once one performs a measurement. I advocate listening to David Albert’s take at our WSF panel.

The final strategy is acceptance. That is the Everettian approach. The formalism of quantum mechanics, in this view, consists of quantum states as described above and nothing more, which evolve according to the usual Schrödinger equation and nothing more. The formalism predicts that there are many worlds, so we choose to accept that. This means that the part of reality we experience is an indescribably thin slice of the entire picture, but so be it. Our job as scientists is to formulate the best possible description of the world as it is, not to force the world to bend to our pre-conceptions.

Such brave declarations aren’t enough on their own, of course. The fierce austerity of EQM is attractive, but we still need to verify that its predictions map on to our empirical data. This raises questions that live squarely at the physics/philosophy boundary. Why does the quantum state branch into certain kinds of worlds (*e.g.*, ones where cats are awake or ones where cats are asleep) and not others (where cats are in superpositions of both)? Why are the probabilities that we actually observe given by the Born Rule, which states that the probability of each outcome equals the absolute value of its wave-function amplitude, squared? In what sense are there probabilities *at all*, if the theory is completely deterministic?
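For reference, the Born Rule itself is simple to state in this toy vector notation (the 0.3/0.7 split below is an arbitrary illustrative choice of mine):

```python
import numpy as np

# For a normalized state a*"up" + b*"down", the Born Rule says the probability
# of each outcome is the squared magnitude of its amplitude.
a, b = np.sqrt(0.3), np.sqrt(0.7)
state = a * np.array([1.0, 0.0]) + b * np.array([0.0, 1.0])

probs = np.abs(state) ** 2
print(probs)        # approximately [0.3, 0.7]
print(probs.sum())  # normalization guarantees the probabilities sum to one
```

The hard question the text is raising is not what the rule says, but why *this* rule, rather than some other function of the amplitudes, governs what we observe.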

These are the *serious* issues for EQM, as opposed to the silly one that “there are just too many universes!” The “why those states?” problem has essentially been solved by the notion of pointer states — quantum states split along lines that are macroscopically robust, which are ultimately delineated by the actual laws of physics (the particles/fields/interactions of the real world). The probability question is trickier, but also (I think) solvable. Decision theory is one attractive approach, and Chip Sebens and I are advocating self-locating uncertainty as a friendly alternative. That’s the subject of a paper we just wrote, which I plan to talk about in a separate post.

There are other silly objections to EQM, of course. The most popular is probably the complaint that it’s not falsifiable. That truly makes no sense. It’s trivial to falsify EQM — just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes. Witness a dynamical collapse, or find a hidden variable. Of course we don’t see the other worlds directly, but — in case we haven’t yet driven home the point loudly enough — those other worlds are not added on to the theory. They come out automatically if you believe in quantum mechanics. If you have a physically distinguishable alternative, by all means suggest it — the experimenters would love to hear about it. (And true alternatives, like GRW and Bohmian mechanics, are indeed experimentally distinguishable.)

Sadly, most people who object to EQM do so for the silly reasons, not for the serious ones. But even given the real challenges of the preferred-basis issue and the probability issue, I think EQM is way ahead of any proposed alternative. It takes at face value the minimal conceptual apparatus necessary to account for the world we see, and by doing so it fits all the data we have ever collected. What more do you want from a theory than that?

Is it your view that the many-worlds interpretation has become more popular in, say, the last 15 years?

What do you think of the cosmological interpretation of Aguirre and Tegmark?

So my guess is that you would object to the interpretations here (I may be misunderstanding the piece completely):

http://www.wired.com/2014/06/the-new-quantum-reality/

But if I understand it right it’s basically a hidden variables interpretation.

Occam’s Razor makes me agree with the MWI. It exorcises the measurement problem better than other convoluted explanations.

I understand how dynamical collapse is (hypothetically) experimentally distinguishable from Many-Worlds, but I don’t understand how Bohmian mechanics is. Could you explain?

I’m also curious what you think of recent arguments based on the PBR theorem that local psi-epistemic models fail.

I would love to read your thoughts about the Bohmian view, and especially that Wired article. I don’t know what to think about it. As a fluid mechanics guy myself, it’s definitely interesting, but I have to believe there is a better reason that pilot waves aren’t more talked about than what the article implied. My guess? If these pilot waves can affect trajectories of particles (i.e., stuff), then wouldn’t we expect to be able to detect them directly somehow?

So is time quantized and quantumly indeterminant below some level?

Sean,

Since you’re being “polemical rather than conciliatory”, I hope you’ll let me raise one point…

I don’t see the pointer basis problem anywhere near being solved. AFAIK, the main argument in all proposed solutions (that I saw) is that the interactions are local in space, which provides one with a preferred basis (the eigenstates of the interaction Hamiltonian). IOW, the existence of the pointer basis is based on locality of interactions.

But interactions are local only until one quantizes gravity. As soon as you are allowed to construct a superposition of two spacetime manifolds (which is an essential feature of QG), locality of interactions goes out the window.

And once locality is gone, the pointer basis is also gone, and the measurement problem resurrects itself in its original form — there is no preferred way to split the wavefunction into branches.

The issue here is that most of the people who are researching this stuff do so in the non(general)relativistic approximation, i.e. they assume the existence of a flat Minkowski spacetime (or other fixed background), which implicitly gives them a well-defined notion of locality. They simply ignore quantum gravity issues.

So I’d say that the resolution of the pointer basis problem in EQM requires an additional postulate, much like in all other versions of QM. And the EQM folks just seem to be in denial of this problem, believing that they “solved” it, IMO.

Best,

Marko

Sean, is there any particular objection you have to the decoherent histories approach? This approach employs decoherence and the Copenhagen interpretation to provide a single logical framework for quantum and classical physics.

I like the Many-Worlds interpretation for its “economy” of formalism, i.e., you don’t need anything other than the raw, pragmatic equations of quantum mechanics. But decoherent histories seems to be a way to keep this economy of formalism without postulating an ensemble of deterministic realities.

I believe there are many silly interpretations in Quantum Theory, and agree there are many silly arguments for and against one version or another as Sean explains. What I have against EQM in particular is that I believe it is not in accord with Occam’s Razor (it may be the most complicated explanation) for what we observe in the quantum world. Still I applaud all attempt to logically justify any version of Quantum Theory.

Of the choices offered in the Quantum Smackdown, QBism seems like the simplest explanation to me, but simpler than this, IMO, would be a quantum theory involving local hidden variables. Although most think that local hidden variables are not possible based upon Bell’s logical argument, there are contrary arguments.

Today we have such hypotheses as dark matter, dark energy, gravitons, Higgs particles, quantum foam, virtual particles, etc. Any and all of these particulates/fields or other aether-like entities can be considered hidden variables within a background field. Any such particulate entities could affect interactions in the quantum world and IMO could enable a much simpler macro-world-like explanation of Quantum Mechanics compatible with classical physics.

Well done Sean. Simple fact: if quantum evolution is unitary and our models in science have more than mere instrumental value, like it or not, you’re stuck with many worlds. My prejudice is that, with regard to the Born rule, the proposal by Aguirre and Tegmark, the so-called Cosmological interpretation, offers some promise. I think you would call this branch counting. However, I am reading through the paper you wrote too. I look forward to reading your post on this, as well as to why MW actually works. These posts are very much appreciated by me.

“The potential for multiple worlds is always there in the quantum state, whether you like it or not.”

That needs to be on a (large) bumper sticker.

Consistent (Decoherent) Histories is a totally instrumentalist interpretation; the way issues like quantum non-local correlations in measurement records are viewed is cringeworthy in my opinion. However, the actual mathematical formalism of CH is, again in my opinion, very productive in illuminating the quantum measurement process. I have often wondered why advocates of many worlds haven’t embraced the CH formalism. Rather than being centered on endless world splitting, we might better think of unitary evolution, together with the decoherence process, as producing a large number of consistent histories. We can even quantify these histories, because in the CH mathematical formalism the total number of possible consistent histories is equal to the square of the Hilbert space dimension. We can further quantify the Hilbert space dimension via the holographic principle (’t Hooft dimensional reduction).

This raises an interesting issue. As proved by Dowker and Kent, the total number of consistent histories is far greater than the number of consistent histories which look like our classical world. Is this a failure of the consistency criteria or a fascinating fact about the product of the decoherence process? Gell-Mann and Hartle have argued that our sensory neurological capabilities evolved to only model reality in terms of predictive histories, which they argue are the quasi-classical histories we directly experience. Now if true, that’s really interesting.

I know this is a naive question, but: why is this a problem that has to be solved? Why can’t we deal with this the way we deal with lots of other strange things in physics, by saying that our physical intuitions don’t work well at scales very different from the ones they developed for, and leave it at that?

Put another way: You could say, “The formalism of QM says that macroscopic systems behave as if there were many worlds.” Or you could say, “The formalism of QM says that macroscopic systems behave as if there were many worlds — and there really are.” How is the second an improvement over the first? What does the claim that a hypothesis is “true” add to the claim that it is predictively successful, aesthetically satisfying and productive of new insights?

“To describe measurements, we need to add an observer. It doesn’t need to be a ‘conscious’ observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the ‘apparatus.’”

I’ve never understood what this means in practice. Does this imply that “measurements” are taking places billions or trillions of times every second throughout the universe? And have been since the Big Bang? Is any interaction a measuring apparatus? I’m sure I’m missing something obvious. Maybe if I understood what would not be a measurement?

So, what happens to the law of conservation of mass-energy when another universe is spawned by peeking into the box?

A very nice description of the Many-worlds interpretation. I personally view Many-worlds interpretation as a candidate for reality, but I remain unconvinced that we’ve reached the point where we can move it from candidate to settled.

If we falsify the Schrodinger equation or superposition, we’ve falsified Many-worlds, but haven’t we also falsified every other interpretation? To consider Many-worlds falsifiable, don’t we need to be able to, at least in principle, uniquely falsify it?

Occam’s razor seems like a tough call on these interpretations. Is mathematical parsimony the same thing as ontological parsimony? I’m not sure of the answer, but dismissing the concern as silly seems unjustified.

I’m probably being dense, but there’s still one thing I don’t get about MW. I have never liked the Copenhagen interpretation because of its unjustified introduction of the concepts of “collapse” and “measurement”, without ever defining rigorously what a “measurement” is. How does MW solve this issue? The explanation here still contains words like “measurement” and “measuring apparatus” that is “prepared”, and so on. Is a proper understanding of decoherence required of the reader in order to understand that in MW these things are rigorously defined?

On a related note, is a “measurement” and the “creation” of two worlds in the MW sense a discrete event, or is it the case that the states drift apart little by little until they can no longer interfere? For the non-mathematician, does it make sense to think of decoherence as something that emerges from the sum of many interactions, much like pressure and temperature emerge from a gas of many particles, but are meaningless when discussing very few particles?


Sean,

Several remarks.

The first is that classical physics does indeed allow us to describe multiple worlds provided that we interpret classical probabilities according to something like David Lewis’s modal realism. When studying the evolution of classical probability distributions, all the states “are just there” in the formalism, so why not simply accept that they exist in reality, as one does in EQM?

My second remark is about axioms. All logical claims consist of premises, arguments that follow from those premises, and conclusions, and EQM is no different. Proponents often suggest that EQM doesn’t need as many axioms as the traditional interpretations. But the trouble with EQM is that although it seems at first like you don’t need very many axioms, the truth is that you do. Simply insisting that we don’t mess with the superposition rule isn’t enough. Quantum-information instrumentalism (say, QBism) doesn’t mess with superpositions either, and allows arbitrarily large systems to superpose. Declaring that we must interpret the elements of a superposition as physical parallel universes is therefore an affirmative, nontrivial axiom about the ontology of the theory, even if some people might regard it as an “obvious” axiom.

The pointer-variable argument implicitly assumes axioms as well. We have to declare that something singles out a preferred basis (for the cat, this means that we need to single out the alive vs. dead basis, rather than, say, the alive+dead vs. alive-dead basis). You can keep adding additional systems and environments, but at some point you have to declare that once you’ve added enough, you can shout “stop!” and pick your preferred basis. And what is our criterion for picking that basis? That’s going to be another axiom! And if you pick locality or something like that for specifying your preferred-basis-selection postulate, you have to contend with the fact that locality may not be a fundamental feature of reality once we figure out quantum gravity, so if we do add locality as part of our axiom for picking the preferred basis, the EQM interpretation is now sensitive to features of quantum gravity that we don’t know yet.

Finally, are you assuming that there’s some big universal wave function that evolves unitarily? Given all we know about eternal inflation, is this a reliable assumption anymore? Even if you’re willing to accept it, it represents another axiom to add on.

The problem with EQM is that this process of adding axioms keeps going on (your “epistemic separability principle,” for example, is another axiom, and far from an obvious one!), and even then we still have to contend with the serious trouble of trying to make sense of the concept of probability starting from deterministic assumptions, a serious philosophical problem on par with the is-ought problem of Hume.

Dear Professor Carroll,

OK, I’m sure you’re not going to like this but seeing the state of the problem of QM meets GR, and the problem of time, I’ll put it out there anyway.

(People seem to say they want paradigm shifts, but don’t actually seem to like them at first sight).

I think that it makes sense in complicated matters to at least be sure our most basic assumptions are sound and logical. So…

The Many-Worlds formulation of QM, like any formulation has to, I assume, ultimately work with General Relativity, or some modification of it, such that the ‘suggested’ “problem of Time” is resolved.

Thus we might typically assume phenomena such as space-time, and retro-causality etc ultimately need to be accounted for and incorporated into any final resolution of GR and QM.

However, surely, GR, and the concept of space-time, very much rest on Minkowski’s interpretation of SR, outlined in the famous quote:

“The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

But if we look at “ON THE ELECTRODYNAMICS OF MOVING BODIES”, i.e. the heart of Relativity, Section 1 says “[we must be…] quite clear as to what we understand by ‘time.’ We have to take into account that all our judgments in which time plays a part are always judgments of simultaneous events. If, for instance, I say, ‘That train arrives here at 7 o’clock,’ I mean something like this: ‘The pointing of the small hand of my watch to 7 and the arrival of the train are simultaneous events.’”

And the paper goes on from there… leading to SR and ultimately GR and the ‘space-time’ we assume QM happens in, etc.

But surely all that is described in this key section of such an important paper is the fact that aggregates of matter (be it a train on a track, or a motorised hand on a numbered dial) can exist, and be moving or stationary, and their speeds and/or locations can be compared.

Critically (and this is the bit you’re not going to like, but that may also be very important, imo), the paper simply ‘declares’ that the motion of the hand, smoothly rotating, in one direction, on a numbered dial, not only shows that ‘a hand can rotate’, but (apparently) also that a thing called ‘time’ exists in some form, and is ‘passing’ smoothly, in one direction. Similarly, by the word ‘simultaneous’, it is implied but not proven (to any degree at all here) that ‘time’, and ‘different times’, exist. But surely the train, the ‘hand’, and anything else in the universe is always just ‘somewhere’ doing ‘something’ (though it may be doing this something at a dilated rate).

My point being: if QM has to work with GR, and GR is built on assumptions made in and about SR, can anyone please actually show where in ‘ELECTRODYNAMICS’, or anywhere else in Einstein’s work, the existence of time, and/or past, or future, is actually proven to any level? Rather than just assumed in, (with respect) a rather weak manner, but built on as if proven.

If no one can point to an actual proof of time in this context, then surely SR essentially just shows how, if moving at a significant velocity relative to an observer, the rate at which a moving oscillator vibrates is unexpectedly dilated. And very specifically, not (imo) that a thing called time exists. And not that a past or future exist. And not that the rate at which a thing called ‘time’ flows between a past and future is dilated. And thus also not that the concept of space-time is valid.

If this is the case then surely we have no actual valid reason to suspect 4d ‘space-time’ exists. Instead GR may need only describe how velocity, gravity, and acceleration can dilate rates of change, and warp and curve ‘3d’ space (e.g. even back on itself)… but all just ‘now’, so to speak.

If my analysis is completely wrong, could someone here please cut/paste a clip showing Einstein’s reasons/proof for assuming the ‘watch’ hand does not just rotate, but also relates to the passage of a thing called time (or anyone else’s proof, rather than unchecked assumption).

i.e. Minkowski’s quote clearly references “the soil of experimental physics… and therein lies their strength”. So can someone please point to the experimental physics that Minkowski claims proves a watch hand is not just a motorised hand rotating, but also actually shows the existence, and passage, of ‘time’? (I’m sure this sounds like a very naive request to yourself, but then it should likewise be very easy to do; my point being every expert, and many theories, seem to insist Einstein’s Relativity proves space-time, and thus proves ‘time’, but all I see at the start of ELECTRODYNAMICS, re time, is an assumption that because things move, time, and a past and future, exist.)

I know you’re a big fan of entropy, but without a proof of time, surely entropy is just the observable fact that the universe is expanding… and not a proof that a past or future exist, and thus also not a proof that time exists or flows between these unproven ‘places’, with or without an ‘arrow’.

Like I say, I assume you’re not going to like this post, but rather than just ignore it or delete it etc, I’d really appreciate it if you could at least actually post a link to the established proof that past/future/time exist.

Many thanks, Sincerely,

Matthew Marsden.

I have read recently about the bouncing oil-droplet experiment, which may indicate that reality prefers pilot-wave theory. Shouldn’t it be considered seriously now?

Great article Sean.

I definitely tend to lean towards the MWI as well. I was wondering if anyone had seen the recent binge of great articles on a few other interpretations of QM:

Article on the Bohm Interpretation (which I’m also sympathetic with):

http://www.simonsfoundation.org/quanta/20140624-fluid-tests-hint-at-concrete-quantum-reality/

Sciam blog highlights objective collapse theory:

http://blogs.scientificamerican.com/critical-opalescence/2014/06/26/physicists-think-they-can-solve-the-mysteries-of-quantum-mechanics-cosmology-and-black-holes-in-one-go-guest-post/

Both of the above have the benefit of being realist (with respect to the wavefunction), but I don’t think the objective collapse theories’ indeterminism gets them as far as MWI or pilot-wave theory, both of which are fully deterministic.

Any thoughts from Sean or anyone else?

Philosophically it is an interesting question whether multiple worlds are to be considered real and not just a mathematical construction. Does this place a restriction on proving there is no free will? Outcomes are deterministic (by assumption), but no prediction is exact. When weighed across all possible outcomes, even a relatively simple system with only about 60 spins becomes impossible to predict, which is functionally equivalent to not knowing the future, and therefore provides a proof that free will exists for anyone who cares enough about predicting the future, i.e., sentient beings.

Hi Bob,

What is it about the consistent histories description of correlation that you find cringe-worthy? In the CH interpretation, all correlations arise from local interactions, and do not imply any non-local interactions.

Do you have a link to the studies which show there are more consistent histories than consistent histories that look classical? I was under the impression that, because the probability of a history is derived from the projection operators used to construct the history, all non-classical histories would have zero probability.

Here’s the thing: If you assume that QM is correct, then it may be true that the MWI is the “best/simplest” explanation.

But you have to assume that we really understand what is going on in QM, and I think it’s clear we do not. It seems much more likely from my perspective that we are missing something fundamental.

All this talk of Many Worlds, while it could be true, involves ideas gone too far, assuming too much, and it will be just plain unnecessary when we finally do understand what’s actually going on in QM.

Professor Carroll, I like your argument that MWI is the interpretation of QM with the least baggage, but I believe the ensemble interpretation has even less baggage, as it doesn’t assume the wavefunction ontologically represents any individual system, but instead an abstraction of that system. This seems more in line with mathematics as a description and not something fundamental in the universe; the latter position, I feel, carries more baggage. What are your reasons for preferring the Everett many-worlds interpretation to the ensemble interpretation?

I figure I’ll add this: (I did some searching and apparently Roger Penrose already made almost the exact same argument as I just did in my last post).

“You want a physical theory that describes the world that we see around us. … Many worlds quantum mechanics doesn’t do that. Either you accept it and try to make sense of it, which is what a lot of people do, or, like me, you say no—that’s beyond the limits of what quantum mechanics can tell us. Which is, surprisingly, a very uncommon position to take. My own view is that quantum mechanics is not exactly right, and I think there’s a lot of evidence for that.”

Daniel,

An interesting paper on the ensemble interpretation.

http://arxiv.org/abs/1308.5290

So I speak from total ignorance of the actual meaning of the Copenhagen Interpretation, but it seems there is at least one question I have never seen discussed. Suppose *two* observers. Assume an observation is the receipt of a photon from the superposition in question. Assume (of course) the photons land on the retinas of the observers in question at exactly the same time (and put them in the same relativistic frame of reference to get that out of the way). So:

* We are done. In which case, which observation ‘wins’?

* We are not done. Some Na/K channeling starts happening in each observer’s brain. At some point the brains make an ‘observation’ … I’ve no idea what that means, but I can certainly assert (in this case) that it happens at the same time. Now, what happens to the collapsing wave function? Do both brains agree on the same outcome? Is this a 50/50 thing? If so, why? (And where can I find a statement of this tucked inside Schrödinger’s equation?)

Colour me confused …

Whether you bring in a conscious observer or replace him/her by a machine, the very idea of an experiment, which depends ultimately on an arbitrary human judgment (whether to do this experiment today or not), resulting in split universes is metaphysical at best. It is amusing that this idea comes from Sean, who does not believe in religion or metaphysics. Why not be honest and say, like Feynman, that we do not understand quantum mechanics? Period! This is nothing but a copout. It should not be called an explanation. I would rather believe in a multiverse coming out of chaotic inflation than in an arbitrary number of universes brought about by human experimenters on an intrinsically probabilistic natural system.

No physical theory has proven itself to be correct – to think that QM is any different is perhaps foolish.

Compare with Newton’s gravity, where the theory is very useful and can still be used for predictions of satellite orbits, whereas it’s conceptually completely wrong, with its action at a distance and other problems.

Is MWT like satellite orbit prediction? In other words, can it still be true if QM is wrong? For many of the alternative theories, where collapse is a real phenomenon (connected to a non-linearity, etc.), the answer is no: MWT would die along with QM in this case.

Are there _any_ ways to replace QM with a deeper theory so that the MWT could remain?

Contrast the MWT with the distribution in results of an experiment – features like this would obviously survive any new theory, as the experimental record is clear.

It’s clear that the ‘phase space’ of replacement theories for QM that allow MWT is much smaller than that of theories that merely predict experimental outcomes.

Where are all the other universes (especially the ones in which women can’t keep their hands off me)? Are they outside our universe and hence are inaccessible to us?

Are new universes being created all the time or were they all created at the time of our Big Bang?

Thanks

Sean, after producing an excellent recent paper on fine tuning in the early universe, why have you endorsed the silliest theory ever created by the mind of man (the Everett multiple worlds theory, which is more properly called the “infinite duplication theory” or the “infinite excess baggage theory”)? I find it hilarious that putative rationalists such as you do debates arguing against life after death (on the basis of parsimony), and then you go and endorse a theory which is the worst violation of Occam’s Razor in the history of human thinking. You’ve jumped the shark, Sean. Which is a pity, because I was citing some of your recent work approvingly. No person who endorses the Everett multiple worlds theory has any business pretending to be more rational than the flakiest astrologer. Please explain at your next debate on the existence of God or the afterlife that you believe in something 1,000,000,000,000,000,000,000,000,000,000 times more extravagant (and vastly less verifiable) than either hypothesis.

http://www.futureandcosmos.blogspot.com/2013/08/you-are-only-you-no-evidence-for.html

My sincere best wishes, and please disassociate yourself from this mental illness that is the Everett theory. Is it any wonder why people reject science when physicists are endorsing this type of nonsense?

Sean, I am happy to believe many-worlds if all that is at stake is the definition of the word “exist”, which is likely ambiguous in this context anyway. But I am uncomfortable with what seems to me a description too closely tied to single-particle QM. For example, most descriptions I see talk about a split between a finite number of options following a measurement arbitrarily localized in space and time. Is there a way to tell the story of universes “splitting” for measurements of operators with a continuous spectrum, like most interesting operators in quantum field theory? Do we get a continuous infinity of universes, and if so is there some regularized version and some sense of cutoff independence?

Alanl,

I don’t think your description was complete enough for an answer, so I will assume the following.

A photon is on its way through an apparatus such that it will either strike the retina of one observer or the other (with a nonzero probability for either outcome). Its wavefunction can be expanded in the following complete basis:

{|photon strikes observer A> , |photon strikes observer B>}

According to the Copenhagen interpretation, the wavefunction of the photon is a tool that encodes the probability of the photon hitting either observer. Both observers will agree on the outcome of the experiment. The “collapse” of the wavefunction is not physical. It is instead an “update” as the observers measure the photon.
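The probability-update reading just described can be sketched numerically. A toy illustration (the equal-weight amplitudes below are an arbitrary choice, not from any experiment):

```python
import math

# Toy amplitudes in the basis
# {|photon strikes observer A>, |photon strikes observer B>};
# the equal-weight values are illustrative, not from any experiment.
amp_a = complex(1, 0) / math.sqrt(2)
amp_b = complex(0, 1) / math.sqrt(2)

# Born rule: the probability of each outcome is |amplitude|^2.
p_a = abs(amp_a) ** 2
p_b = abs(amp_b) ** 2
print(p_a, p_b)  # ~0.5 each; together they sum to ~1

# "Collapse" as an information update: once observer A registers the
# photon, further predictions condition on that outcome and renormalize.
amp_a_post = amp_a / abs(amp_a)
amp_b_post = 0j
print(abs(amp_a_post) ** 2, abs(amp_b_post) ** 2)  # ~1.0 and 0.0
```

On this reading the “collapse” step only changes the state used for prediction, which is why both observers agree afterwards.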

According to the Many-World interpretation, there is a universe where the photon strikes observer A, and there is a universe where the photon strikes observer B. The wavefunction does not merely represent a recipe for calculating probabilities. It instead is a more direct description of a reality that includes these two universes.

People do not like the Copenhagen interpretation because it does not include a description of the “actualization” of an experiment. Instead, quantum mechanics is seen only as “a matter of relations between phenomena and observations of their frequencies”[1]: a theory that gives a priori probabilities of experimental outcomes, but does not describe how all but one fade to zero.

I would not strictly agree with Sean’s claim that the MW interpretation is the most economical interpretation. I would instead say it is the most economical interpretation that also includes an ontology for actualisation (namely, all possibilities are real, and persist). But I am not convinced quantum mechanics owes us such an ontology. Furthermore, I am not sure it is wise to infer an ontology from our methods of calculation in a regime that is clearly alien to our classical sensibilities. The MWI offers a description of reality independent of experimental phenomena – an ensemble of deterministically evolving possibilities, each of which is as real as the others – but we have no reason to insist such a description of reality must exist.

[1] http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.64.339

Dear Sean, I don’t see how postulating a vast number of (hidden) worlds, with additional postulations on how the worlds split, is simpler and better than just postulating a single pilot wave as Bohm suggested.

Shodan says:

June 30, 2014 at 4:43 pm

Hi Bob,

What is it about the consistent histories description of correlation that you find cringe-worthy? In the CH interpretation, all correlations arise from local interactions, and do not imply any non-local interactions.

——————————————————————————

This is the part I find cringe-worthy:

Is quantum mechanics nonlocal?

This depends on what one means by “nonlocal.” Two separated quantum systems A and B can be in an entangled state that lacks any classical analog. However, it is better to think of this as a nonclassical rather than as a nonlocal state, since doing something to system A cannot have any influence on system B as long as the two are sufficiently far apart. In particular, quantum theory gives no support to the notion that the world is infested by mysterious long-range influences that propagate faster than the speed of light. Claims to the contrary are based upon inconsistent or inadequate formulations of quantum principles, typically with reference to measurements. (Also see measurements, Einstein-Podolsky-Rosen.)

http://quantum.phys.cmu.edu/CHS/quest.html#nonlocal

_________________________________

We all know you can’t use the measurement correlations of entangled particles to send information faster than the speed of light, but that doesn’t mean there isn’t a non-local process involved. That’s what Bell’s theorem showed, through the failure of the measurement record to satisfy the predicted inequality. The above just wishes the problem away.
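For concreteness, the size of the violation is easy to check numerically, assuming the standard singlet correlation E(a, b) = -cos(a - b) and the usual CHSH angle choices (a sketch of the textbook numbers, not a simulation of any particular experiment):

```python
import math

# Quantum correlation for spin measurements on a singlet pair at
# analyzer angles a and b (standard textbook result).
def E(a, b):
    return -math.cos(a - b)

# Any local hidden-variable model must satisfy the CHSH bound |S| <= 2.
# These angle choices (in radians) maximize the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2), exceeding the classical bound of 2
```

Whether that failure forces non-local processes, or only rules out local hidden variables, is exactly the disagreement in this thread.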

Shodan, that was a very good recommendation, it was a nice summary of the ensemble interpretation. Reading that and I think you’d get a good sense of why I prefer it over many worlds and why I think it’s the more philosophically “simple” interpretation.

I agree with you in your separate comment that I would not assign the wavefunction any fundamental or ontological meaning nor do I believe quantum mechanics on its own implies that we should.

I would also like to add as a separate point that many worlds does not imply that a branching mechanism occurs nor that one is necessary. There are simply many universes of potential outcomes, some that share history up until one such branching point. The relative number (technically measure, not number) of universes with the same outcome for a given event (assuming shared history up until that point) is equal to the probability of an observer being in that particular universe.

Shodan writes

Do you have a link to the studies which show there are more consistent histories than consistent histories that look classical? I was under the impression that, because the probability of a history is derived from the projection operators used to construct the history, all non-classical histories would have zero probability.

______________________________

Yes here’s the reference

Properties of Consistent Histories

Fay Dowker, Adrian Kent

(Submitted on 17 Sep 1994 (v1), last revised 30 Jan 1996 (this version, v4))

Here is a response to this.

Equivalent Sets of Histories and Multiple Quasiclassical Realms

Murray Gell-Mann, James B. Hartle (Santa Fe Institute, Los Alamos, and University of New Mexico)

(Submitted on 8 Apr 1994 (v1), last revised 5 May 1996 (this version, v3))

Bob,

I would emphatically object to the insistence that the interpretation involves some ambiguous non-local “process”. The CH interpretation explicitly rejects the attachment of any physical process to the reduction of the wavefunction. Bell’s theorem did not prove the existence of non-local processes. Instead, it proved that any realist interpretation of QM must involve non-local processes. (See section 8 of this paper: http://arxiv.org/pdf/1308.5290v2.pdf )

Daniel,

I completely agree. The “branching” of worlds is effectively an increase in Hamming distance that occurs via decoherence, a unitary process. No new mechanism is postulated.

Shodan says:

June 30, 2014 at 10:17 pm

Bob,

I would emphatically object to the insistence that the interpretation involves some ambiguous non-local “process”. The CH interpretation explicitly rejects the attachment of any physical process to the reduction of the wavefunction. Bell’s theorem did not prove the existence of non-local processes. Instead, it proved that any realist interpretation of QM must involve non-local processes. (See section 8 of this paper: http://arxiv.org/pdf/1308.5290v2.pdf )

____________________________

I think the failure of the correlated measurement record for the entangled particles to satisfy the inequality Bell devised compels us to accept that a measurement of one particle affects the measurement result of the other instantly. The whole premise of Bell’s inequality is the absence of non-local influences. Of course, since these measurement events are space-like separated, which particle affects the other is frame-dependent, but I don’t think this provides an escape from accepting a non-local effect at work in QM.

Bob,

I wouldn’t take action at a distance to be the necessary conclusion of Bell’s inequality, as such a violation of the inequality can be derived from experiments which take place only in unitary representations of SU(2), which as a group contains no intrinsic concept of locality.

Pingback: Measuring the ‘reality’ | The Great Vindications

Bob,

I would instead say the logic compels us to accept either-or. If we reject a realist interpretation of QM, then any obligation to suppose non-local interactions leaves with it.

Bell’s theorem is

“No physical theory of local hidden variables can ever produce all of the predictions of QM”[1]

which is very different from

“No local theory can ever produce all of the predictions of QM”

In fact, I have previously come across a Nature paper claiming the choice is even more restricted, and non-local realism must be rejected[2] (Though I have not thoroughly read it yet.)

[1]http://en.wikipedia.org/wiki/Bell%27s_theorem

[2]http://arxiv.org/abs/0704.2529

Shodan,

sorry I wasn’t too clear — I guess that is the nature of this beast, but you say:

“I don’t think your description was complete enough for an answer, so I will assume the following.

A photon is on its way through an apparatus such that it will either strike the retina of one observer or the other (with a nonzero probability for either outcome). ”

My point (question, whatever) is that it seems to me that all Copenhagen interpretations begin this way: ONE observer, one photon. But our putative S. Cat reflects many photons when the box is open, so certainly there must be cases where two photons (which somehow carry the notion of an observation) hit whatever they have to hit in two different observers (in the same frame of reference) to become an ‘observation’ at exactly the same time, albeit in two different observers. So, do they agree on the outcome? It would seem so in the standard interpretation, but I cannot fathom why this should be so (50/50 independently for each observer, surely). If one observer got the photon even slightly before the other, then I guess he takes the other’s superposition down with him, but in a spot-on dead-even tie (don’t know what that means, but anyway) either:

* They both agree (and then what happened to 50/50 independently ?), or

* They both have their own interpretations of the result, and in the 50% of the cases where they disagree the universe forks

I’m not sure I like either of these ..

The reason why I don’t believe in many worlds interpretation is that I don’t see it around me. The universe I observe is the universe where all quantum effects cancel out nicely. If that interpretation is true, there’s countless universes where 3D cinemas went bankrupt because polarizing glasses stopped working. Because photoreceptor cells in my eyes are measuring all those photons coming through them one after another. And despite the sheer size of that ‘true path’ universe set where things go on fine, I just refuse to accept the idea that somewhere out there is a copy of me unable to watch 3D movies because the left 3D glasses filter stays mostly dark all the time for no apparent reason, just to make our universe complete.

Dear Sean,

Thank you for your clear article on EQM. There are some things on which I would really like some more exposition:

“We wouldn’t think of our pre-measurement state (1) as describing two different worlds; it’s just one world, in which the particle is in a superposition. But (2) has two worlds in it. The difference is that we can imagine undoing the superposition in (1) by carefully manipulating the particle, but in (2) the difference between the two branches has diffused into the environment and is lost there forever.”

There seems to be some additional postulate (physical or at least philosophical) to the quantum formalism that you introduce here. In principle, if the superposition is diffused into the environment then, due to the unitary nature of quantum mechanics, the inverse time evolution should always be possible. The off-diagonal terms in the density matrix will never be truly zero, just for all practical purposes (decoherence), similar to discussions of irreversibility in statistical mechanics. In other words, according to quantum mechanics two universes should sometimes be able to “fuse” back into a superposition. Furthermore, there seems to be no rigorous point at which one can say a new branch universe has been created, if mathematically one can only say that on a local level the two possibilities for a subsystem of our universe become increasingly less correlated.

To me it seems that at some point, when superpositions become very non-local and extended throughout the environment, you suddenly postulate different universes! I think this does not make sense. You say that because of decoherence they become independent; however, this holds only for all practical purposes, and only on the local level does it seem this way. Surely you must agree that the complete superposition is still out there in this single universe?

Yours sincerely,
Jasper
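Jasper’s “never truly zero” point can be made concrete with a toy model: one system qubit whose two branches imprint on N environment qubits, where each pair of conditional environment states overlaps by cos θ. The numbers here are illustrative assumptions, not anything from the post:

```python
import math

# After a system qubit in (|0> + |1>)/sqrt(2) entangles with N
# environment qubits, its off-diagonal density-matrix element is
# (1/2) * (overlap per environment qubit)**N.
def coherence(n_env, theta):
    return 0.5 * math.cos(theta) ** n_env

for n in (1, 10, 100, 1000):
    print(n, coherence(n, 0.3))  # exponentially small, never exactly 0
```

So interference between branches is suppressed exponentially in environment size but never strictly forbidden, which is the “for all practical purposes” caveat.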

My apologies for the problems with emphasis in the reply above.

A question which really puzzles me about the Everett interpretation: can probabilities of superposed states be irrational, for example 1/π? If so, how will the universe split into a distinct number of sub-universes? Perhaps my premise is incorrect; not sure. Any comments much appreciated. Thanks.
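One way to see why irrational weights are unproblematic: in EQM the branch weight is a measure, not a count of discrete sub-universes, and any real number in [0, 1] works as a probability. A toy sanity check (nothing EQM-specific; the setup is just a Bernoulli simulation) that a weight of 1/π behaves like any other probability:

```python
import math
import random

p = 1 / math.pi  # an irrational branch weight, ~0.3183

random.seed(0)  # fixed seed for reproducibility
trials = 200_000
hits = sum(random.random() < p for _ in range(trials))
print(hits / trials)  # close to 1/pi, by the law of large numbers
```

No counting of universes is required for the frequencies to come out right.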

Shodan says:

June 30, 2014 at 10:51 pm

Bob,

I would instead say the logic compels us to accept either-or. If we reject a realist interpretation of QM, then any obligation to suppose non-local interactions leaves with it.

___________________

Well that’s the crux of it; this is my problem with instrumentalism. While I acknowledge the great value of the mathematical formalism developed in the Consistent (Decoherent) Histories interpretation, I think non-realism is a bad philosophy.

Sean Carroll, you sold me on EQM. I like the decision theory approach; it gives me flashbacks to Fermat’s Principle.

“(spin is up ; apparatus says “up”)

+ (spin is down ; apparatus says “down”). (2)

The plus sign here is crucial. This is not a state representing one alternative or the other, as in the textbook view; it’s a superposition of both possibilities. In this kind of state, the spin of the particle is entangled with the readout of the apparatus.”

I’m not sure I follow here. By “entangled” do you mean that the up or down indication on the apparatus is what determines or partially determines the spin?

From an ongoing discussion

My personal view is that there is a dynamical process when the measurement occurs that results in a particular outcome. It might involve a change in the SWE and/or non-reversibility. But I also wonder whether such a process, if it could be defined, would make QM deterministic and/or violate the prohibition of a local hidden variable. So it is possible that no such dynamic description exists. However, my ignorance of the measurement process does not restrict me from demonstrating holes in woo-woo theories such as MW, or from insisting that there is an obvious contradiction between countably infinite Rydberg states and the HP.

—————————————————————-

You do realize dynamic collapse theories involve an empirical question, not an interpretational one. If you don’t understand this you’re hardly alone; this seems to be a common confusion. Were there empirical evidence of a collapse, we wouldn’t be arguing any more. With regard to your woo-woo comment, there are only two grounds for rejecting many worlds. One, scientific models only have instrumental value, and one can never draw any ontological conclusion from them; or two, QM as presently understood is wrong. The first assertion is just bad philosophy in my opinion, and the second requires a new theory of QM for which there is currently no evidence.

Bob Zannelli

Sean I think you should steer clear of championing the MWI. You could be left looking like a chump peddling woo. Look at work by Aephraim Steinberg et al and Jeff Lundeen et al:

http://www.physics.utoronto.ca/~aephraim/

http://www.photonicquantum.info/

Hi Sean. I’m trying to resist the urge to point out that what you say about (so-called) hidden variable theories here is somewhat misleading and unfair.

(OK, I can’t actually resist, but I’ll bracket it in parentheses… As pointed out already by EPR, QM is already nonlocal without hidden variables, so it’s misleading to suggest, as you do, that non-locality is a price one pays for introducing hidden variables. And then it’s unfair to suggest that there is no evidence for the hidden variables. As I’m sure you understand, the “hidden variables” in the dB-B pilot-wave theory are the evolving positions of the particles that compose the world we see around us. To say that there is no evidence for the actual existence of those particles is … weird. The evidence is literally all around us all the time. Of course, other theories may attempt to explain what we see in different ways. So it’s not like just looking at a table proves conclusively that the Bohmian particles exist. But saying there is *no* evidence for them is really quite wrong. Probably it’s a result of taking the phrase “hidden variables” too literally/seriously. Bell was right when he described this terminology as “historical silliness”.)

But here’s the point I really wanted to raise. Basically the question is: what is the ontology of Everett’s theory? That is: what exactly does the theory say *exists*? To clarify what I mean, let’s start with what you say about classical mechanics: “Classical mechanics describes … a universe as a point in phase space.” That, I take it, is a kind of shorthand. What classical mechanics (of, for simplicity, let’s say, particles) describes a universe as is a collection of particles with positions and velocities. The particles live in a three-dimensional space (or a 3+1 spacetime). This is what the theory says the world is made of. The point is then that the state of this world can be *mathematically represented* as a point in phase space. But we shouldn’t confuse that abstract representation with the literal description of what, according to the theory, the world is like.

But then this focuses the question about what the world is made of according to Everettian QM. I take it that when you say things like that the state of the world is given by a wave function in Hilbert space, that is the same kind of shorthand that is involved in saying that, classically, the world is a point in phase space. So the question is: what is the literal, non-abstract way of saying what the world is like for Everettian QM, that corresponds to “particles moving around in 3D” for classical mechanics? That, to me, is the essential question that must be answered before one can make any progress on any of the other things: is what the theory says crazy or natural, should one be scandalized and refuse to accept it or instead embrace it, does Everett avoid the nonlocality that you complain afflicts hidden variable theories, etc.

If we refrain from any philosophical arguments, there is a mathematical argument why MWI is completely unnecessary. In QM we can combine two systems into a larger one by using the tensor product. In particular a quantum system can be considered with the measuring device and the environment as follows: |psi> otimes |measurement device + environment>

The tensor product is a commutative monoid, but can we upgrade it to a group? If only we had an inverse operation… If this were possible, then we could model the measurement process in a purely unitary way, using the group inverse operation, which would “collapse” part of the wavefunction:

|cat>\otimes |poison+environment> -> |dead cat>\otimes |poison released + new environment state>

But can we construct a group from a commutative monoid? YES WE CAN, if you have an additional ingredient: an equivalence relationship. In math this is called the Grothendieck group construction. Do we have an equivalence relationship in QM? Indeed we do, and it comes from a swap symmetry (http://arxiv.org/abs/1305.3594): what a quantum system can evolve unitarily over here can be undone by another unitary evolution of the environment over there. Welcome to the world of envariance and quantum Darwinism. Zurek approaches the problem using physical arguments, but there is a rigorous underlying mathematical framework behind the scenes.
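For readers unfamiliar with it, the Grothendieck construction turns any commutative monoid into a group by passing to pairs under an equivalence relation. A minimal sketch on the monoid (N, +), which yields the integers (nothing quantum-specific is claimed here):

```python
# Elements of the Grothendieck group of (N, +) are pairs (a, b),
# read as "a - b", with (a, b) ~ (c, d) iff a + d == b + c.

def canon(pair):
    """Canonical representative of a pair's equivalence class."""
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    # Componentwise monoid addition, then reduce to canonical form.
    return canon((p[0] + q[0], p[1] + q[1]))

def inverse(p):
    # Swapping the components negates the element.
    return canon((p[1], p[0]))

x = canon((3, 0))          # represents +3
print(add(x, inverse(x)))  # (0, 0): the identity, so inverses now exist
```

The same recipe (a commutative monoid plus an equivalence relation gives a group) is what the envariance argument above leans on.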

Just like early on in special relativity people talked about an “imaginary” time direction (remember ict?) because they ignored the underlying mathematical structure of the metric tensor, in QM people talk about collapse and MWI because they ignore the underlying Grothendieck group.

And by the way, in MWI there is also a pink elephant in the room: why does the split happen only after decoherence? (If it did not, MWI would have a basis-ambiguity problem.) It is much more natural to say that the split happens randomly regardless of decoherence. But what do you get in this case? This is no longer MWI, but GRW theory.

Bob, you’ve said that you think “non-realism” is a bad philosophy. That may be, but arguing that manipulation of mathematical symbols to match empirical results can inherently be realist is a leap I’m not willing to take. To me science is descriptive and never ontological. If you’re going to have a realist interpretation of the wavefunction, I personally would like to see some justification for doing so.

It’s funny, because in normal language we would never assume our semantics are the objects, that “horse” is the same thing as a horse; we understand the word represents the concept. Yet when dealing with physics we have no problem saying that x(t) is the position of the particle or that Ψ(x, t) is the ontological state of a system. From a language point of view, assuming our symbolic representation is the ontology of the system is, on the face of it, absolutely absurd. Unlike formal language, the syntax of physics lines up with empirical results and can even make predictions, and perhaps this throws us off, but it’s not a reason to assume realism of our particular representation.

(Sorry, the last sentence of my last comment was supposed to read “unlike informal language.”)

Dr. Carroll:

Does MWI imply our universe is not adiabatic? If the other worlds are here now, are they still part of our universe and able to come back, or are they forbidden from interfering again (conservation of energy)?

I am shocked to see Bohm and TIQM got 0% and Copenhagen was so high given the Afshar Experiment. Does the Afshar Experiment also falsify MWI?

I like Bohm – see “Photon diffraction and interference” (PDF). Problems with pure Bohm are where the pilot wave originates and how a single photon in an experiment can show interference patterns. I think TIQM holds the key to solving these issues.

Thanks Jesse Emspak.

Dustin Summy – we measure only particle interactions. Like the gravitational ether of GR. We measure only what matter does.

John Hodge

When the universe splits, how does it do this? I don’t see what law of forces allows you to simply take a bunch of stuff (i.e., the universe) and clone it instantly. If the universe is billions of light-years across and it clones instantly, doesn’t that violate the speed of light?

Am I right to assume that humans are like gods because we create universes wherever we look? Curious how many critiques of MW go unchallenged.

Imagine that some scientists, who deny teleology or the existence of anything spiritual, believe that every human being is the Creator of trillions of independent universes!

If the vacuum could be warmed by theory as easily as the blogs tend to heat up this one, we would have multiverses everywhere. Oops! We don’t, and Schrodinger’s cat died ages ago as I confirmed on a visit to Alpbach.

Another spark to add to this MOC (figure that one out, sportsfans). Everything, reality and beyond, should behave the same whether or not “intelligent” beings are there to “observe” the outcome of events. Humans themselves do not affect the universe’s laws. An ignorant ape may observe a two-slit experiment just as well as a physicist. The outcome is the same, whether or not there is an observer. Any simple scattering experiment of particle-on-particle would generate a continuum of worlds, and this happens without an observer.

Sean, it’s hard to improve on the basic critique of the decoherence idea that Leggett made in 1987, but here are some points:

1. The basic argument from the density matrix is circular. You place observed probabilities into the DM as a fait accompli. They are taken for granted from experimental evidence without showing how they would arise. This is not just about explaining why those particular statistics, but about the whole idea of presenting “outcomes” as localized alternatives along with the wave functions that lead to them (as also criticized by Penrose). Hence, comparing statistics under coherence and decoherence is pointless. It doesn’t show why an extended wave would remain as such under coherent conditions, but appear as “classical” hits when exposed to decoherence. You have already taken the “hits” (specific exclusive interaction with this or that detector, etc.) for granted, and then are just comparing different patterns of them. Indeed, we could ask: why do we observe a pattern of *hits* that displays coherence in such cases at all? Instead, the idea would have to be that coherent waves are somehow collapsed to produce a strike pattern corresponding to their orderly distribution, whereas incoherent waves produce a disorderly pattern of strikes when they are “collapsed.” Otherwise, it would just be orderly versus disorderly wave patterns, period.

2. Entanglement in this sense is just the consistency that if the wave ends up localized at X then it cannot be localized at Y, and so on. Well sure, but that consistency requirement is not a way to get both outcomes in any sense at all, nor does it explain why, for example, the total electric fields from the alternative trajectories of a “Schrödinger’s Coulomb” would not both be effectively present in space, when “interference” has nothing to do with detecting that. Indeed, the whole mistake of pretending that alternatives “wouldn’t interfere with each other” seems based on wrongly conflating the specific, optics-derived meaning of “interfere” (simply, that amplitudes add – well of course, orderly or not as may be…) with the common broad use of “interfere” as in, to affect in any way at all. And general “interference”, say of the phantom “other outcomes”, would not be about being able to “undo” the disordering of the waves describing them – why would it? – it’s a matter of actual physical interaction overall.

3. Despite the handwaving, MWI does violate conservation laws. A wave function normally describes a particle of given mass and charge etc., showing its momentum in space. For the whole of that mass etc. to be effectively found in more than one place is a completely different issue than e.g. having a doubly-peaked WF, which still “represents” one unit of mass etc. Indeed, the process “all told” multiply instantiates a particle’s entire mass, charge, etc., each time it could be localized in alternative places. That might as well be from a mysterious “measurement” anyway. Such multiplication is not authentic continued Schrödinger evolution.

4. A certain type of MZI with three beamsplitters may be able to recombine the data of initial amplitude differences supposedly lost to decoherence. See the name link.

Well that’s all for the time being.

3. Despite the handwaving, MWI does violate conservation laws. A wave function normally describes a particle of given mass and charge etc., showing its momentum in space. For the whole of that mass etc. to be effectively found in more than one place is a completely different issue than e.g. having a doubly-peaked WF, which still “represents” one unit of mass etc. Indeed, the process “all told” multiply instantiates a particle’s entire mass, charge, etc., each time it could be localized in alternative places. That might as well be from a mysterious “measurement” anyway. Such multiplication is not authentic continued Schrödinger evolution.

___________________

Conservation laws are about measurement. MWI doesn’t predict that energy conservation is violated during measurements, hence there is no energy-conservation issue. This is a red herring. It’s like arguing that an undetected particle can’t be multilocal because otherwise there would be a violation of conservation laws: charge, energy, momentum, and angular momentum. I don’t have a clue what you’re talking about otherwise, so I won’t offer any opinion.

@kashyap vasavada who said:

“Whether you bring in a conscious observer or replace him/her by a machine, the very idea of an experiment, which depends ultimately on an arbitrary human judgment (whether to do this experiment today or not) resulting in split universes, is metaphysical at best. ”

You’re assuming for no reason that Sean’s description implies that somehow *only* human-conceived experiments cause decoherence. That isn’t true; it happens constantly whenever quantum systems (which is all systems) become entangled. Conscious observers don’t need to be involved at any level. A human experiment is in this context just a way to describe a scenario that could be set up, and then describe what would happen. It’s as silly as if I were to say “If I drop this stone, it will accelerate toward the ground at 9.8 m/s^2”, and you took that as me implying that humans cause gravity.

Personally most of my mental objections to EQM go away if instead of thinking of it as a universe constantly multiplying into many universes (zomg, conservation of energy!), I think of it as one universe that *divides* into subsets that can no longer interact (same amount of state as before, no CoE problem). Which, as Sean explains so well, is really the right way to think of it. Anyone comfortable with Relativity has already accepted that there are subsets of the universe whose further evolution cannot possibly affect other parts. Why is a universe divided by regions of space-time more acceptable than a universe divided by self-consistent sub-sets of states of entangled systems?

Now while on the one hand I side with everyone who says that we just don’t understand the quantum world and if we ever do, maybe then we’ll have a new picture that better explains — I will not say “makes sense” because I doubt the next theory will respect our intuition any more than QM. On the other hand I don’t think that hypothetical future possibility justifies avoiding considering the problem now. One thing I really respect about EQM is it asking “Okay, but what if QM really is a correct description, what does that imply?” Rather than sweeping the issue under the rug like Copenhagen and adding time-irreversible operations to the theory for the sole purpose of keeping QM contained inside little isolated microscopic systems and ourselves nice and classical. Even though that’s exactly what I do whenever I think about QM outside of the context of discussions like this one. 😛

Totally out of my league here, but what do you (Sean) mean that the Universe “splits”? You mean an instant copy of our Universe is created somehow, separate from our own, whenever a quantum event occurs which could have two outcomes? A copy of our entire Universe?? What mechanism has been proposed for this? Is that even science? I mean presumably these instant copies, which must spring up zillions of times per second just in my backyard, are not “knowable” to us in this timeline, right? Can a theory for how this occurs or how it could be tested, even exist? Or is this not actually science but, well, something else?

How is a new copy of our (perhaps infinite) Universe created? Where is this Universe created? At what new coordinates in the multi-verse does it spring into existence? I thought Universes were created via Big Bangs. Is this a short-cut?

You say there are fewer assumptions needed for this interpretation, that it is already part of the equations. It seems to me that this interpretation requires many, many new answers and hence new assumptions.

@Travis Norsen, who said:

“does Everett avoid the nonlocality that you complain afflicts hidden variable theories, etc.”

I do not think that is Sean’s complaint about hidden-variable theories, as there are plenty of non-local QM interpretations, and he doesn’t give that as a critique of e.g. Copenhagen. His complaint is that this unspecified variable is being added to the theory at all. The original purpose of this addition was to preserve local realism in the face of quantum entanglement, but the experimental evidence suggests that this cannot be true, and so now the “reason” to add the variable is gone.

Everett is local, though, and maybe he counts that in its favor, but it doesn’t come across that way.

@Jens who said:

“Totally out of my league here, but what do you (Sean) mean that the Universe “splits”? You mean an instant copy of our Universe is created somehow, separate from our own, whenever a quantum event occurs which could have two outcomes? ”

No, he doesn’t mean that. And realizing that’s not what is meant is, I think, the biggest obstacle to accepting the theory (not accepting it as undoubtedly correct, but as a valid idea that could be correct).

Think of a particle in a superposition. Do you accept that we can have an electron that is in a superposition of the states “spin-up” and “spin-down” without having two electrons, one up and one down?

Now imagine that when your apparatus interacts with the electron in a way that determines spin, the superposition does not suddenly go away, but continues with its state now entangled with the apparatus. The electron-apparatus system is now in a superposition of “spin-up measure-up” and “spin-down measure-down”. Then you check the output of the apparatus, and now you are entangled with the system, and it exists as a superposition of “spin-down measure-down you-read-‘down’-off-the-screen” vs the same but “up”. You don’t see a superposition of states, because each state of ‘you’ is only compatible with a subset of the states of the rest of the system. And so on with the rest of the room, the earth, etc.

In all of this nothing was “copied”, nothing was “created”. It’s just that half the states *weren’t* destroyed; they became unavailable to you as you became entangled with the system. The universe didn’t multiply, it divided.
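The chain described above can be sketched as a toy calculation (a minimal numpy sketch; the 0.3/0.7 amplitudes and the CNOT-style interaction are illustrative choices, not anything from the thread):

```python
import numpy as np

# Electron in superposition a|up> + b|down>; apparatus starts in |ready>.
a, b = np.sqrt(0.3), np.sqrt(0.7)
electron = np.array([a, b])
apparatus = np.array([1.0, 0.0])

# Joint state before the interaction (tensor product).
joint = np.kron(electron, apparatus)

# "Measurement" as a unitary interaction: the apparatus flips iff the
# electron is down (a CNOT gate), giving a|up,ready> + b|down,saw-down>.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
after = U @ joint

# Nothing was copied: the total norm is still 1.
print(np.linalg.norm(after))

# Reduced state of the electron (trace out the apparatus): the off-diagonal
# interference terms are gone, which is why each branch looks definite.
rho = np.outer(after, after).reshape(2, 2, 2, 2)
rho_e = np.einsum('iaja->ij', rho)
print(np.round(rho_e, 3))  # diagonal: 0.3 and 0.7, zero off-diagonals
```

The point of the sketch is that the “split” is just entanglement plus the loss of the off-diagonal terms in the reduced state, with total probability unchanged.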

Bob, that is an ironically contradictory answer. If you say the conservation law doesn’t matter in this case since we can’t observe the other outcomes, then we’re not even supposed to believe in them either. It’s silly to hold a double standard: “I can believe the other worlds ‘really exist’ because I have an ideological bent to avoid weirdness/real randomness or whatever”, but “we don’t have to worry about any absurd *consequences* of the concept since we can’t observe them”! And likewise to have a theoretical basis for the continuation of the states, but to ignore the theoretical basis for total mass-energy etc.

Anyway, it’s not just a legalism about being able to “get away with” the violation: where is the theoretical justification for the extra mass-energy being predicted “to exist” even in that unobservable sense? I say, if the conservation law is made irrelevant, then the concept itself is irrelevant and vacuous as well.

CB, that is not an answer since it just takes “entanglement” for granted without trying to figure out what makes it tick and how it fits in to observations. Nor does it give any idea whatsoever why a continuation of say, two states should effectively produce statistics of e.g. 31:69 instead of the 50:50 from having “two” streams instead of one. Entanglement is just our post-facto finding of consistency in the outcome, that if the particle is found “here” it is not also found “there.” The entanglement for two connected particles is rather different, such that if one particle is found “up” then the *other one* will also be “up” etc. But that provides no “picture” of the process or explanatory framework about what happens when the presumptive wave is localized in one spot, and how all of that mass-energy could be multiply localized in many spots, which is not at all the same issue as consistency over the whole.

Note that a superposition can indeed be found as such: superposed H and V polarization of the same phase make for a diagonal polarization that passes a filter at that orientation with certainty (either state by itself has only a 50% chance of passing). Also, the simple idea that “continued states just keep going but are separate” ignores the pretense of the argument that decoherence (phase confusion) makes them non-interacting; I explained why that argument is fallacious, and there is no real reason why disorderliness of the waves should eliminate physical interaction, as opposed to eliminating “interference” in its precise, optics-derived sense.
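The polarization point is easy to check numerically (a small sketch using the standard state and projector conventions; the specific states are the ones named in the comment):

```python
import numpy as np

H = np.array([1.0, 0.0])        # horizontal polarization
V = np.array([0.0, 1.0])        # vertical polarization
D = (H + V) / np.sqrt(2)        # diagonal (45 degree) polarization
P = np.outer(D, D)              # projector for a diagonal filter

# Coherent superposition (H+V)/sqrt(2): passes the diagonal filter with certainty.
psi = (H + V) / np.sqrt(2)
p_super = psi @ P @ psi

# Pure H alone: only a 50% chance of passing.
p_H = H @ P @ H

# A 50/50 *mixture* of H and V: also 50%, unlike the coherent superposition.
rho_mix = 0.5 * (np.outer(H, H) + np.outer(V, V))
p_mix = np.trace(P @ rho_mix)

print(p_super, p_H, p_mix)
```

This is the sense in which a coherent superposition is physically distinguishable from a mixture: the diagonal filter tells them apart.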

CB says:

“No, he doesn’t mean that. And realizing that’s not what is meant is, I think, the biggest obstacle to accepting the theory (not accepting it as undoubtedly correct, but as a valid idea that could be correct).

Think of a particle in a superposition. Do you accept that we can have an electron that is in a superposition of the states “spin-up” and “spin-down” without having two electrons, one up and one down?

Now imagine that when your apparatus interacts with the electron in a way that determines spin, the superposition does not suddenly go away, but continues with its state now entangled with the apparatus. The electron-apparatus system is now in a superposition of “spin-up measure-up” and “spin-down measure down”. Then you check the output of the apparatus, and now you are entangled with the system and it exists as a superposition of “spin-down measure-down you-read-’down’-off-the-screen” vs the same but “up”. You don’t see a superposition of states, because each state of ‘you’ is only compatible with a subset of the states of the rest of the system. And so on with the rest of the room, the earth, etc.

In all of this nothing was “copied”, nothing was “created”. It’s just that half the states *weren’t* destroyed; they became unavailable to you as you became entangled with the system. The universe didn’t multiply, it divided.”

Thank you, CB, for your attempt at enlightening me. I’ve read your explanation 3 times now, but I must admit I still don’t understand. How could a state become “unavailable” to me unless it occurred in a separate Universe?

I agree with you, Neil, that you can’t say CoE doesn’t matter if you can’t observe its violation; unobservable subsets of the state should still be considered real.

What I do disagree with you about is whether Everett violates CoE. I think the problem stems from having internalized Copenhagen, and seeing a measurement of a quantum system as making the quantum superposition “go away”, so that now the electron has one definite momentum, and to get to a state where it also has a different momentum you need to clone the electron or otherwise break conservation laws. But that’s not what’s happening in this view. In this view, the electron no more has one definite momentum after measurement than it did before. The only thing that changed is that the distribution of possible momenta is entangled/correlated with the distribution of possible momentum measurements of whatever system interacts with the electron.

If an electron having a distribution of possible locations, momenta, etc prior to measurement does not violate conservation laws, then in Everett it doesn’t do so after measurement because measurement is not a special operation.

Note that it can’t be just “the same stuff divided into regions that don’t interact.” The WF of a particle reflects a certain amount of momentum, of mass-energy, in that wave packet. If I for any reason postulate, say, two such packets (due to alternate histories, one where the whole particle was seen to go one way, and another where the particle went the other way), then those two packets represent a total of “two” mass units rather than a distribution of one unit into e.g. a doubly-peaked wave packet. And so on.

I’m now 72 and that means there are a lot of “me’s” out there. One of them may have won a Nobel prize of some sort. What I want to know is why I’m not in that quantum slice. Why am “I” in this one and not the one where I’ve won a Nobel prize, and how do “I” get there?

@Jens

“How could a state become ‘unavailable’ to me unless it occurred in a separate Universe?”

A good question! Depends on what we mean by “universe”, in part. For instance, consider a black hole. Nothing that occurs inside the event horizon can possibly have any effect on you outside watching the black hole — this is what “event horizon” means. So would you say then that the inside of a black hole is another universe? Well, you *could*, and that’s fine. And I think this is the perspective from which Everett got its unfortunate name “Many Worlds” that at first glance implies the universe cloning whole copies of itself on every quantum interaction.

But another way to look at it is that if you look at the metric (mathematical description of the geometry) of all of space-time, including the black holes with their event horizons and singularities, regions of space expanding away from other regions faster than light, etc, you can call that whole thing a “universe” described by a single metric, but some sub-sets of that universe can’t communicate with each other.

Similarly, you *could* say the alternate outcome of quantum events represents another universe, because you can’t interact with it. Or, you could say that it’s all one universe described by one wavefunction, only sub-sets of states described by that wavefunction are inconsistent and so don’t interact.

@Neil:

Everett takes QM for granted, yes, and entanglement is a feature of it that was inferred before it was witnessed. Entanglement is when two particles interact in such a way that they must have a correlated state (e.g. exchanging momentum, so the sum of their delta-p must be 0), but have not interacted with the rest of the universe in such a way that their state must correlate with it. It’s no different than superposition of states in a one-particle system, just now it’s two particles. Or more. In Everett, all that changes is that entanglement isn’t magically zapped away by measurement. Superposition still exists. All the possibilities for the combined system are still present as a single wavefunction, which does not “collapse” to one. You see it as meaning there are now two wavefunctions describing two electrons, for double the energy, but that’s not it: there’s still one wavefunction describing all possibilities, so no CoE violation.
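The “still one wavefunction, so no CoE violation” claim can be illustrated with generic unitary evolution (a toy sketch; the random 4-level Hamiltonian and state stand in for any closed system-plus-apparatus):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random Hermitian "Hamiltonian" for a small system-plus-apparatus (4 levels).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Hmat = (A + A.conj().T) / 2

# Schroedinger evolution U = exp(-iHt), built from the eigendecomposition.
evals, evecs = np.linalg.eigh(Hmat)
U = evecs @ np.diag(np.exp(-1j * evals * 1.7)) @ evecs.conj().T

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
phi = U @ psi

# Total probability and the energy expectation are both conserved, no matter
# how entangled ("branched") the evolved state becomes.
norm_after = np.linalg.norm(phi)
E_before = (psi.conj() @ Hmat @ psi).real
E_after = (phi.conj() @ Hmat @ phi).real
print(norm_after, E_before, E_after)
```

Branching in this picture is just a change of how the single unit-norm state is decomposed, so there is no extra mass-energy to account for.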

The role/meaning of probability in Everett is a real problem, which Sean readily admits.

@CB: No, because if we accept MWI then the behavior appropriate to one entire particle happens “here” and also “there”, which is not really intelligible as the working meaning of superposition under normal terms – where it’s a scheme to describe the spreading out and chances of interaction of a single particle.

Plus, if you have the number of streams appropriate to the number of superposed states (however they’re broken down, which is another problem), then you have no basis for the Born probabilities. Sure, Sean et al. admit it’s a problem – nice candor, but it isn’t coming close to making the scheme credible. The scheme just doesn’t work, and you can even think about things like: what is the “E” field (not energy) in space from all those possible locations? What keeps that “separated”? Referring to “entanglement” is just hand-waving; it doesn’t explain anything, it’s just what we call the correlation of actual exclusive-type outcomes – there’s no “there” there, we don’t know what goes on underneath. Try to work on the double-peak versus two-particle-wave issue.

@CB:

“Everett is local, though, and maybe he counts that in its favor, but it doesn’t come across that way.”

Well, I’d like to hear first from Sean what he even takes the ontology/beables of the Everettian theory to be. I don’t think you can possibly say that it is local or nonlocal prior to settling that issue. What would it even *mean* to claim that the theory is “local”, if what the theory posits (as physically real) is (exclusively) a moving point on the unit sphere in Hilbert space, or a complex-valued field in some 3N-dimensional (where N is either really big or infinite) space? “Locality” is the idea that physical influences — in ordinary three-dimensional physical space — never propagate faster than the speed of light. If, according to your theory, there is no such three-dimensional physical space (and ipso facto no physical influences propagating around in it at all) it is at best misleading and empty to say the theory is “local”.

BTW, the fact that an incoherent (disorderly) superposition is (1) called a “mixture” because some people think it is “equivalent”, or should be, and (2) the statistics created *by measurements* applied to it are “classical”, doesn’t mean the incoherent superposition really is a mixture, or explain its behaving like a mixture. Again, we get localization “statistics” out of a coherent double-slit experiment too. An incoherent superposition is just that: both states in a disorderly distribution; and only when a measurement (or perhaps certain interactions) occurs does the state sometimes fall into one outcome, and other times into the other. The “mixture” is not intrinsically there due to decoherence; it is built up from varying patterns of measurement outcomes.
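The “disorderly phases reproduce mixture statistics” point can be made concrete (a toy sketch; the 50/50 split and the ensemble size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# An ensemble of *superpositions* (H + e^{i phi} V)/sqrt(2) with random,
# disorderly phases phi: an incoherent superposition, not literally a mixture.
n = 100_000
rho = np.zeros((2, 2), dtype=complex)
for phi in rng.uniform(0, 2 * np.pi, size=n):
    psi = (H + np.exp(1j * phi) * V) / np.sqrt(2)
    rho += np.outer(psi, psi.conj())
rho /= n

# Phase-averaging washes out the off-diagonal terms, so every measurement
# statistic matches the 50/50 mixture, even though each member of the
# ensemble is still a superposition.
print(np.round(rho.real, 2))
```

Which is exactly the distinction drawn above: identical statistics under measurement, without the ensemble “intrinsically being” a mixture state by state.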

@CB:

I agree that there are situations where no conscious observer is involved; e.g., quantum mechanical reactions were going on in stars long before any conscious being emerged. But that does not answer the main question about arbitrariness. To be more explicit, suppose a professor asks a graduate student to perform a QM experiment. Next morning, if the student gets up early and does the experiment, then the universe has a large number of extra copies, including copies of the poor graduate student, which were not around the previous day. If he sleeps late and does not do the experiment, then there are no more copies. This looks absurd and metaphysical at best. If you are saying that the copies were already made in the heavens before the experiment and the student is merely picking a particular branch, that is probably worse. If the various branches of the universe are only in our minds, that is also bad. Every other student has to get the same results. All the branches of the universe have to combine at the end of the experiment to give the probabilistic QM result! The Copenhagen interpretation, which talks about our limited knowledge of the QM system, is any day more sensible! How about an engineering student doing a classical experiment? My main point is that MWI just increases the number of universes without solving any problem; the QM calculations are exactly the same as before. MWI is completely arbitrarily cooked up to get out of the difficulty that QM has proved impossible to understand after some 90 years of debate! I am surprised that nobody brought up this point in the numerous debates which Sean has had with religious scholars and philosophers. I am sympathetic to metaphysics in a religious context, but not in a physics context.

It’s interesting to me that there is a whole cadre of cosmologists wondering how even ONE universe can be created, while Sean is creating – how many? – every second.

Travis Norsen’s paper on Bell is worth a read. See J.S. Bell’s Concept of Local Causality on arXiv:

“Many textbooks and commentators report that Bell’s theorem refutes the possibility (suggested especially by Einstein, Podolsky, and Rosen in 1935) of supplementing ordinary quantum theory with additional (“hidden”) variables that might restore determinism and/or some notion of an observer-independent reality. On this view, Bell’s theorem supports the orthodox Copenhagen interpretation. Bell’s own view of his theorem, however, was quite different…”

IMHO the many-worlds interpretation and the Copenhagen interpretation are unscientific woo peddled by quacks who tell lies, who castigate people like Joy Christian, and studiously ignore work by the likes of Aephraim Steinberg and Jeff Lundeen et al. Beware, Sean, because that unscientific woo is a dead man walking. It won’t last much longer. Don’t be left high and dry championing it.

Wow, 2 thumbs up!… and 4 thumbs down… but I’ll take what I can get. I know what I’m suggesting is not high-end physics, but surely it still makes sense to check our most basic rational assumptions, especially where a theory – e.g. time, and how it seems to be directional at a classical scale and non-directional at a quantum level – seems to end up with so many fractures of supposition, and alternative theories, at the higher levels.

for those thumbs up, and thumbs down, (and anyone else) i’ve put what i’m suggesting we might consider as related to the…

Time travel, wormhole, ‘billiard ball’ paradox, timelessly (re the Paul Davies New Scientist article), for example (hope it’s ok to post a link)

mm

John D, I have long been curious about such alternatives to any standard QM interpretation, ones that say there is neither a real problem nor a need for many “worlds.” You are right to be suspicious of CI, which doesn’t really get under the hood, and of MWI, which is fallacious in its framing and has a structure directly antithetical to the Born probabilities – which fans try to evade with too-clever-by-half handwaving. Yet I don’t see convincing explanations. Joy Christian’s point, AFAICT, is more about explaining entanglement without unusual non-local influences (and I’m still not persuaded, although it invokes some unfamiliar applications of math) than it is about the basic problem of, say, the half-silvered-mirror split photon and how it might end up in one detector or the other, but not both; etc. What is the answer to that basic problem, including the Born probabilities?

Can someone explain how EQM doesn’t violate mass/energy conservation? Each time the universe splits, you are presumably doubling everything… Sean says this is taken care of by “superposition of states”, but the quantum states (spin, polarization, etc.) are bits of information and do not have any energy or mass associated with them. For example: not only do you have to consider the two possible spin states, but also the electron that carries the information about the spin.

Also, there is the assumption that multiple states (worlds) exist at the same time (superposition). Doesn’t it make more sense that only one of the states really exists and we simply don’t know which one it is? And once we make the measurement, one of the possible states is bound to be found. And then we can say that all along, the spin was what we measured it to be… but we just didn’t know it for sure.

I find this old post from Chad Orzel very sensible & clarifying: http://scienceblogs.com/principles/2008/11/20/manyworlds-and-decoherence/

How does Orzel’s explanation fit here? Is it an example of the Copenhagen interpretation?

Wizard, good questions, but the spin can’t “have been that all along” – it’s hard to explain just why in brief, but if you check out some guides you will see. As for Orzel, well, he doesn’t overcome the basic flaws that I and others have noted.

Travis,

Without hidden variables, QM is only non-local in the same way classical mechanics (CM) is non-local, i.e., correlations exist between space-like separated regions. Matching socks are an example of non-local correlation. Sure, QM permits stronger correlations than CM, but it is still a local theory unless you introduce hidden variables.
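The “stronger correlations” can be quantified with the textbook CHSH numbers (standard singlet-state values, nothing specific to this thread):

```python
import numpy as np

# Singlet-state correlation for detector angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angle choices.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, -np.pi / 4

# Any local hidden-variable model obeys |S| <= 2; QM predicts 2*sqrt(2).
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # 2*sqrt(2), above the classical bound of 2
```

The quantum value exceeding 2 is exactly the sense in which quantum correlations are stronger than any matching-socks-style classical correlation.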

So this is why you like philosophy — so you can believe in supernatural worlds without any evidence.

Quantum mechanics has been confirmed many times in many ways, but there is not a shred of evidence for many worlds.

Sorry, I wasn’t asking if Orzel’s views are correct. I don’t have the expertise to have an opinion on that. I was just asking how his post fits into the various views discussed here — is it an example of the Copenhagen interpretation, the ensemble interpretation, or what?

JW, Orzel was outlining (not quite as a proselytizer; he seems laid back about which is true) this same sort of decoherence-driven splitting into many “worlds” consisting of continuing superpositions. Copenhagen basically says we just can’t represent what goes on between measurements, and the ensemble interpretation roughly says we can’t represent what goes on in single cases – “we can only talk about” collections of what are usually called “similarly prepared states.”

One alternative to MWI that imagines a specific, real sequence of events is de Broglie/Bohm mechanics, with particles at specific locations guided by special “pilot waves.” They claim it can replicate standard QM results, but actually a deterministic theory cannot in principle simulate genuine randomness: if you recreate the same initial conditions, the outcome must literally be *the same outcome* and not just the same “chances.” Also, I can’t see how dBB handles the random decay of apparently structureless, all-identical muons (nor MWI, for that matter – they aren’t even interacting with anything else, and instead of specific alternative paths, there’s a continuum of increasing chance to decay!)

Sure, Sean is cute. But I think that there are just too many universes!! He infers from the fact that a superposition is represented as one vector in a Hilbert space that it splits into two universes. But that the state is represented by one vector means there is only one thing there (even if on a product space).

You know, if Many-Worlds were true, you could have two universes splitting off at different times, i.e. when you do the experiment in universe 1 it takes 10 sec. to get the result |x-spin up>, but it takes 20 sec. in universe 2 to get the result |x-spin down>.

I don’t think the basis problem has been satisfactorily solved. Selected bases would, in some time frame, creep into other bases, and eventually a system would observe another system to be in a superposition.

In half the universes I agree with you, in the other half I disagree with you.

Shodan,

No, ordinary (textbook/orthodox) QM is not local. The measurement axioms (the “collapse postulate”) in particular violate locality; this was pointed out already in 1935 by Einstein, Podolsky, and Rosen. This is why it’s a standard talking point for Everettians that, by getting rid of the collapse postulate and retaining only unitary Schrödinger evolution, their theory is local. But as I have tried to suggest, it is not really even clear what that claim is supposed to mean when the theory doesn’t seem to postulate any local beables, i.e., any physically real “stuff” (like particles or fields or *something*) in ordinary physical space.

Hopefully Sean will find time to explain what he understands the Everettian ontology to be.

I’d like to point out that Everett himself never said that the universe as a whole splits. His claim is that the observer (state) splits or branches inside the universal wave function. This is not a trivial matter, since it does not mean that a whole new universe has to pop into existence at every quantum interaction. See: http://www.amazon.com/The-Many-Worlds-Hugh-Everett/dp/0199552274

Folks, sorry to say that indeed “the universe” effectively does have to split (not to be confused with how the enthusiasts try to spin it). Say I have a quantum experiment with photons, with a 34:66 (yes, deliberately not 1:2) chance of causing an apple either to stay put or to be ejected in a particular direction. Furthermore, our club election has been based on the outcome: Neil or Cyndi. The moment arrives, and “I” (LOL) see the apple sitting there. Everything from then on has to be consistent with that: people taking pictures of it even far away, talking about it, me reaching out and grabbing it, and so on (yes, I know, after that moment even more “universes” are created in turn for each little fluctuation, but this sets a minimum base for the mess). That keeps going into what people write about; the entire flow of history works from there: I am club President, Cyndi is not, and so on.

But the “superposition in which the apple was propelled away” is also still real, according to our mystical wizards. (So a theory designed to explain *what really happens*, which is what normal theories do, ends up predicting *something other than what really happens*, in any meaningful empirical sense.) In that “world” the apple is speeding away, Cyndi is elected President of our club, and an entirely different history must go along with that: the club newsletter, speeches to visitors, maybe Cyndi meets someone to marry she otherwise wouldn’t, etc.

BTW, what physical process actually keeps that “other apple” (yes, its entire original mass-energy, now also at other locations) from hitting stuff in “my world”? Linear evolution? Then how can we ever know of the other streams at all, before or after the “measurements”? That is, how do we get any evidence at all of superpositions, period? And what does coherence (orderliness) of the superposition have to do, intrinsically, with literal physical interaction?

Well, let’s be generous and suppose there were indeed two basic streams directly connected to the quantum event affecting the apple. Now we have that irritating 34:66 probability to deal with. The structure of continuing superpositions is inherently inimical to a frequentist representation of the outcomes according to Born probability, and the defenses of how MWI can recover it are regularly pilloried as circular arguments, and so it goes.

Folks, did you really think this stuff through?

In the light of MWI I’m left wondering if me wondering about all the possible branches of reality actually creates more branches of reality. Not that me thinking about quantum events actually makes them occur…..unless the branches are infinitely evolving from the moment they branch out from other branches thus sooner, at the present or later resulting in me evolving the ability to control atoms and particles according to the laws of physics by mere thought. It will probably require the aid of a device of some sort, but sooner, at the present or later that technology will evolve. At the very least I expect that me mentioning the possibility of such a device here greatly increases the chances for it to come about in a more solid state in the future.*

I guess theoretically there’s a chance I’m already there, but as a matter of certainty I’m only here.

*Somewhere Deepak Chopra is channeling his arse out. My sincerest apologies, guys.

I hate playing favorites when it comes to interpretations of quantum mechanics; I think they are all a lot of fun to think about. Let’s create an ethos space partitioned by the type of interpretation. If you were to measure my interpretation, then you would probably find the ensemble interpretation; however, that may change in the future.

An interesting notion is to take MW’s claim of locality seriously. If so, the entanglement (or “splitting,” if you will) due to a quantum event must propagate outwards at the speed of light or less (due to interactions with any mediums that get in the way, etc.). It follows that a photon, released at the time of the event, can move away before the entanglement’s propagation ever catches up with it. That photon would have to “represent” yet another universe and has escaped the splitting of the previous one, despite the event being part of its “universe’s” history (though we would know nothing about the outcome of that event).

Suppose, though, our little escapee photon manages to make it quite a ways away, evading the propagation of the splitting, but then hits a very, very thick medium of some kind and gets slowed for some years. The propagation of entanglement then “catches up” to the photon. But what history now becomes entangled with our escapee? During the time in which the initial entanglement propagated, many new histories were made.

I suppose the answer to that would be how MW deals with probability, or simply the history of the first photon that “catches up.” Seems intuitively messy to me though.

Not that I’m at all saying MW is out. I’m vigorously entertaining the idea. But it does lead me to weird thoughts such as this.

@Josh July 2, 2014 at 3:50 pm

I think of this interpretation along the lines of cell division. Imagine there is a world line with future and past light cones. Each event has a distinct set of possibilities at the present moment, between the two light cones. This would be analogous to a cell undergoing mitosis at each moment in time and budding off new universes with a shared history. If you deleted a chromosome from a cell and it was still able to divide then the daughter cells would inherit the genetic history of the parent cell. In your scenario the two universes would have the same history up until the point where the polarization of the photon is different or whatever observable we are measuring.

The entanglement would have happened in the shared history of both of the new daughter universes. Therefore there would not necessarily be any contradiction when the photon reaches the previously entangled partner. Do you think in the many worlds interpretation that at each moment in time a universe would be created with very low entropy and simultaneously many more universes would go immediately into a heat death?

Travis,

In your response, you tacitly assume a realist interpretation of quantum mechanics. The bread-and-butter interpretations of QM (Copenhagen and Ensemble) do not postulate any physical “collapse” process. If wavefunction collapse is not physical, then it cannot be a source of non-local properties.

See section 8 of this paper

http://arxiv.org/abs/1308.5290

If the wave function isn’t real, just tell me what your theory says is real. Then we can discuss whether the theory is local or not. Of course, Bell already proved that no matter what you say is real, the theory has to be nonlocal to agree with what’s been observed in experiments. So good luck with that.

Interpretation_MW, the phase velocity of a matter wave is faster than the speed of light, which may make pilot wave theory appear nonlocal even though it does not transfer any useful information; please see the following paper. It is the group velocity of the wave that transfers information. Is it possible that the phase velocities of two different matter waves could interfere and collapse a wavefunction before the arrival of the group wave?

A question. If an observer has a box containing another observer, who in turn has a box with a cat in it, does the collapse occur when the inner observer opens the cat’s box, only when both observers open their boxes, or only when the outermost box is opened? How do these nest?

I don’t think the intuition- and logic-bending aspects of quantum mechanics have anything to do with “additional universes,” whatever that could mean when rigorously defined. Indeed, it is preposterous for a physicist to postulate something like an “additional universe” when we cannot measure any observable associated with it! To understand quantum weirdness, you have to understand two things: (1) indistinguishability, and (2) the nature of non-measurable quantities.

So, let’s start with indistinguishability. You are doing an experiment with an electron; let’s name it Fred for concreteness. You can measure an observable like momentum. Fred’s momentum is X. Five minutes later, Fred’s momentum is Y. Right, but how do you know you have just measured the momentum of Fred rather than the momentum of another electron? Don’t puzzle about it; the answer is you cannot. All electrons are completely identical to each other. And this is not like Heisenberg uncertainty, where we can’t tell which is which: if you actually switch out Fred in the physical equations of nature with another electron, you get the same answer for energy, position, etc., as for Fred. All particles are indistinguishable; a photon generated at the dawn of the Big Bang is completely identical to one freshly minted at the LHC. If “god” somehow interchanged two electrons in the universe, nothing would change (save for a minus sign); for a boson, literally nothing would change: we could not tell the difference between the universe before or after god switched Fred with Tom, a rather roguish electron that has displeased god. So it’s not always that there are two universes that exist in a superposition and the cat is dead or alive; sometimes it just doesn’t matter whether the cat is alive or dead, since they are the same so far as physics is concerned. However, in many circumstances these superpositions of possible physical states are quite real; let’s have a look at that now.
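The “save for a minus sign” remark can be made concrete with a tiny numpy sketch (my own toy example, not from anyone in the thread): swapping the two particles in an antisymmetric (fermionic) two-particle state flips the overall sign of the state vector, while a symmetric (bosonic) state is completely unchanged.

```python
import numpy as np

# Two distinguishable single-particle states, |a> and |b>, as 2-vectors.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Swap operator on the 4-dim two-particle space: SWAP|xy> = |yx>.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# Fermions: the antisymmetric combination picks up an overall minus sign.
psi_fermi = (np.kron(a, b) - np.kron(b, a)) / np.sqrt(2)
# Bosons: the symmetric combination is untouched by the swap.
psi_bose = (np.kron(a, b) + np.kron(b, a)) / np.sqrt(2)

print(np.allclose(SWAP @ psi_fermi, -psi_fermi))  # True
print(np.allclose(SWAP @ psi_bose, psi_bose))     # True
```

Since an overall sign (a global phase) is unobservable, no measurement can distinguish the swapped universe from the original one, which is the point being made above.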

So, yes, for a given one-particle state the momentum measurement can return an infinite number of different values, each with a different probability. And this is not a measurement problem; the particle is ontologically in a state where it doesn’t have definite momentum. Some potentials, such as the finite square well, force particles to NEVER be in a state of definite momentum; they are forced into this bizarre limbo. However, I don’t think this means that there are different universes in which the particle is in different states. Things we do in our universe change all of these states! If I have Fred inside an infinite square well in my universe, I can set it up so Fred is in a superposition of an infinite number of different momenta. So if the multiverse thing is right, there is a universe for each possible momentum, in which Fred has that given momentum. But if I as an experimenter alter the width of the square well, I change all the possible momentum states. That means that from my universe I can causally affect other universes, which seems to contradict the point of a universe. If universes are causally connected, they are just part of the same universe. If nothing else, that breaks locality.

Much of the time, bizarre irreparable uncertainty about a certain quantity doesn’t mean there are infinite different universes in which the quantity takes a certain value. Take particle number, for example. Particle number is not fixed: special relativity combined with a little QM tells us new particles can pop up at any time if there is enough energy to convert to mass (on scales of order the Compton wavelength). Relativistic quantum theories focusing on one particle, as attempted by Dirac, produced untamable infinities of negative energies and other monstrosities. However, if we allow particle number to fluctuate, we get the valid quantum field theories of today. And the bonus is that particles minus antiparticles is still conserved, thanks to complex numbers and Noether’s theorem. So when we can’t measure momentum well, and since momentum is a concept defined only by empirical observations, we shouldn’t just say “oh well, invent universes.” We need to rethink what we MEAN by momentum, using Noether’s theorem for that. It’s not that position-momentum uncertainty is a big problem with measurability; it’s that the concept of spacetime needs modification. Einstein did it once, and the next Einstein will have to do it again for the Planck scale. Distances and times don’t really make sense down there.

The weirdness of QM isn’t something that needs to be explained away, it is a real and beautiful feature of our universe. Virtual particles and zero point energies are manifestations of quantum uncertainty, and they produce the Casimir effect (just google it). These uncertainties are not just oddities to explain away with unprovable theories, but produce real tangible effects here in our universe. It seems as though we are struggling to recover causality by inventing an infinite number of different timelines, each of which are causal. But if we can’t specify which is our universe (otherwise we know exactly how the measurement will come out), we still lose causality. Don’t worry too much about causality, locality, and all the nice logical features of physics seemingly lost to QM. Quantum field theory nicely preserves them, and in fact, all Lagrangians (and equations of nature) to date have preserved locality.

Another question I had: If the many worlds interpretation does take the wavefunction as ontological, how does MWI handle weak measurements, which allow formulation and computation of joint probability distributions of non-commuting observables?

Travis, you have a point. I tend to make a distinction between “real” as in exists, versus “realistic” as in able to be represented in the classical way like EM field theory. I don’t think reality is fully like the latter, it just isn’t “realistic”.

Entanglement: oddly, it seems harder, not easier, to get a handle on that in EQM. Say we’re talking about entangled photon pairs, where the measurements must be consistent, like H,H or V,V. But in MWI, both outcomes happen at both locations. So where is the correlation H:H and V:V? The math doesn’t describe the actual polarization of either photon by itself; it’s a mysterious heuristic way to say that if one shows one way, the other does too. There isn’t even a “realistic” way to describe the WF of either photon before detection (i.e., as a definite polarization, not to be confused with what happens in detection; normally, we *can* prepare a photon to have a specific polarization, which is how we can guarantee detection if we want to).

Dan, weak measurements are IMHO a good point. Also, what about Renninger negative-result experiments: one detector registers “no photon,” but there is remaining time for the wave function to continue toward another detector, etc. So now the *negative* measurement causes the WF to be “redistributed” rather than fully “collapsed” at a given location. How does that pan out in MWI? Furthermore, we are in the habit of taking detectors for granted as perfect. But what about reliability issues? If a detector registers “yes” falsely, etc., what sort of complicated situation does that amount to?

BTW, I don’t think the resolution of Schrödinger’s cat has anything to do with people *looking* at it, either. I think that muons, nuclei etc. really just decay at definite moments. Also, photons really are absorbed by specific atoms, and so on. This is an interactive notion, neither involving “observers” nor intrusion of unusual special collapse-inducing events as in some objective-collapse theories. That’s my intuition, it’s hard to rigorously justify.

Kai, I see your point. However, note a paradigmatic simple case: the photon split by a half-silvered mirror toward two detectors. If you imagine a “realistic” WF, it has to either go “poof” when either detector *legitimately* registers a hit, or “continue evolving” (which really isn’t tenable: now we have the full energy of the photon at two locations, instead of a WF describing the original amount as spread out over space). That really is “weirdness” at heart. Oh, I don’t want to see it “explained away” either; it should be accepted as the way our universe is, like it or not…

But Princess, about non-locality: I think it has to be an actual influence, even if it doesn’t transfer what we call useful information. That is so the other result matches the first one, which *can’t occur as a result of independent properties* (the whole point of Bell inequalities, correct me if wrong). If “not independent,” then something has to connect them, and of course it’s not just previously set real traits either, as in Bertlmann’s socks.

Schrödinger’s Cat, as traditionally described, is not a very good model.

In order to properly construct the Schrödinger’s Cat thought experiment, we need to imagine a box with a sleeping (or awake) cat and a rubber-band gun attached to the lid. In order to determine if the cat is awake or asleep, we have to open the lid, which triggers the rubber-band gun. Maybe it wakes the cat, maybe it misses. At this point, we can close the lid and open it again and will obtain the same result.

It is only when we put enough energy into the system to reset the rubber-band gun, and close the lid for long enough that the cat might fall asleep, that we can obtain a different result. Both measurement and reset of the system require energy, and energy changes the results.

What if probability waves really are what it’s all about? In order to measure the wave, we have to construct a scenario where only a limited set of choices is allowed, similar to looking at a single frame of a movie or taking a flash photograph of action. We are only viewing one moment. We are not freezing the probability curve, only our view. The probability curves do not disappear. There is no decoherence.

In this scenario, the future is only ever a set of possibilities, while the past is the only thing that can be determined. This may very well be why the arrow of time flows in one direction. Knowing the past, we can calculate our way toward present, but no matter what we know about position/velocity of a particle, we can only calculate probabilities of where it will go, what it will do in the future. The calculation only works one way.

I put this wonderful and fantastic theory into the “extraordinary claims” department. And we know what Carl Sagan has said about that. Will such evidence ever be produced in sufficient quality and quantity, or will it ultimately rest on faith?

What values satisfy the above equation?

What about in the following equation?

Instead of debating interpretations, should we instead be debating whether the i in the Schrödinger equation is needed?

Schrodinger’s Mistress,

The i is definitely needed. Without the i, there is no unitary evolution. You would get crazy probabilities, and no correct predictions of what we observe.
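For what it’s worth, the claim about the i is easy to check numerically. Here’s a small sketch of my own (assuming numpy and scipy are available): for a Hermitian H, exp(-iHt) is unitary and preserves total probability, while dropping the i gives a non-unitary map that doesn’t.

```python
import numpy as np
from scipy.linalg import expm

# A random Hermitian "Hamiltonian" for a hypothetical 3-level system.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (M + M.conj().T) / 2

psi = np.array([1.0, 0.0, 0.0], dtype=complex)  # normalized state

# With the i: U = exp(-iHt) is unitary, so total probability is conserved.
U = expm(-1j * H * 0.7)
print(np.allclose(U.conj().T @ U, np.eye(3)))            # True
print(np.isclose(np.linalg.norm(U @ psi), 1.0))          # True

# Without the i: exp(-Ht) is not unitary, so "probabilities" drift.
K = expm(-H * 0.7)
print(np.allclose(K.conj().T @ K, np.eye(3)))            # False
```

The unitarity of U is exactly the statement that the outcome probabilities always sum to one at every time, which is what goes wrong without the i.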

Travis,

You continue to assert an incorrect understanding of Bell’s theorem, which states that no local theory of hidden variables can reproduce the predictions of quantum mechanics. If no hidden variables are postulated, there is no reason to insist on any non-locality that isn’t already in classical mechanics. This is a very basic point.

What are real in quantum mechanics are experimental phenomena. Quantum mechanics tells us the probabilities of observing possible results of an experiment. We do so by expanding a state vector in the relevant basis and squaring the resulting coefficients.

Shodan, Travis is right. Bell’s theorem does indeed say that no local HVs can reproduce all of QM. But that doesn’t mean you can forget about there being non-local effects. The phenomena are correlated in a way that could not happen if they were independent. That means that *no local properties* can explain the strong correlations (like both detectors getting the same result despite arbitrary random shifting around of pair properties), which means that something non-local has to explain them (if you want explanations at all). If there were no non-local connections, then the entangled measurements just wouldn’t be able to show the strong correlations (they would be too statistically independent). Your second paragraph doesn’t account for entangled states; it is an outline of single-state statistics.

BTW, you are right about the “i” – I note the irony, of referring to unitary evolution and “probabilities” together, regarding its purpose.

Bell’s Theorem definitely doesn’t require non-local effects unconditionally. The outcomes of measuring both particles are not independent in entangled states; they’re correlated. For an example of an interpretation that is local but lacks hidden variables, consider this classical thought experiment as a frame: if we put one red ball and one blue ball in a basket, mixed it up, then had two people each grab one ball without looking and walk light-years apart, we wouldn’t argue that something non-local has to occur so that when one sees his/her ball is red, the other sees his/hers is blue.

This isn’t exactly analogous, as I could have looked at the ball at any time along the way, including while I grabbed it; my ignorance isn’t physical in origin but rather lazy. However, in an EPR-type experiment, you can measure the spin at any distance from the emission source after the original interaction. The correlation occurs during the particle-creation step, and the spin is completely unaffected until measurement. There’s no reason to believe that the distance at which we choose to measure the spin matters, so we’re left to conclude that the correlation occurred locally when the particle pair was created.

It’s not that one particle’s seemingly independent state is correlated with another’s, not at all, it’s that there’s one overall state in which the spins of the two particles along the same axis are always correlated. In the former case we’re asking what determines, once we measure particle 1’s spin, how particle 2 will correlate as it has to. Any such solution will seem non-local. In the latter we realize the correlation between spin values, regardless of what they are in a given experiment, is fundamental to the state itself and its initial formation, it’s just a matter of sampling it.

Daniel, that is not at all how entangled correlations work. You are describing the classical situation called “Bertlmann’s socks,” which works by initial correlation. But the strong correlations can’t involve that, since they must create correlations for arbitrarily oriented detector pairs. That can’t be handled by prior real properties or LHVs. You need to read up on entanglement and understand that. I am not an expert on this myself, but I at least try to cover the bases, so to speak.

“But the strong correlations can’t involve that, since they must create correlations for arbitrarily oriented detector pairs.”

That’s the argument. But then again, the argument is also that the probability distribution we get from one detector must appear to be independent of the orientation of, or even the existence of, the other detector. If this weren’t the case, faster-than-light communication would be trivial. Bell’s derivation for local hidden variables depends on the assumption that the orientation of one detector does not affect the measurement at the other. I’m still trying to find a good explanation of how one can say that it both affects it and gives the same probability distribution (any help would be appreciated).
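Since you asked for help, here is a toy numerical check of exactly that point (my own sketch, using numpy, for the spin-singlet case): the joint distribution of outcomes depends on the relative detector angle, but each detector’s marginal distribution stays 50:50 no matter how the other detector is oriented, which is why the correlations cannot be used to signal.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(theta, sign):
    # Projector onto spin ±1 along a direction at angle theta in the x-z plane.
    n_dot_sigma = np.cos(theta) * sz + np.sin(theta) * sx
    return (I2 + sign * n_dot_sigma) / 2

# Singlet state (|01> - |10>)/sqrt(2).
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def joint_prob(a, b, theta1, theta2):
    # Probability of outcomes (a, b) with detector angles (theta1, theta2).
    P = np.kron(proj(theta1, a), proj(theta2, b))
    return np.real(psi.conj() @ P @ psi)

# Detector 1's marginal is 1/2 no matter how detector 2 is oriented:
theta1 = 0.3
for theta2 in (0.0, 1.0, 2.5):
    marginal = (joint_prob(+1, +1, theta1, theta2)
                + joint_prob(+1, -1, theta1, theta2))
    print(round(marginal, 10))  # 0.5 each time

# ...yet the *joint* distribution does depend on the relative angle:
print(round(joint_prob(+1, +1, 0.0, 0.0), 10))    # 0.0 (perfect anti-correlation)
print(round(joint_prob(+1, +1, 0.0, np.pi), 10))  # 0.5
```

So nothing you can see locally at one detector changes with the other detector’s setting; only the correlations, which require comparing both records, carry the angle dependence.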

It’s pretty sad to see the way you act as if MWI is “the truth” and mock people for not “accepting it.” In reality it is you who refuses to look at what nature and science are telling us (so far). The preferred-basis problem has not been solved *at all*, as is pointed out by several commenters here. Theirs are also the posts with clearly the most upvotes.

This is a trend I’ve witnessed on your blog over and over. Every time someone mentions the basis and Born-rule problems, you refuse to address them, because you have no answer.

Instead you continue insisting that you have found the truth, that your sci-fi interpretation is the REAL science, and that people who disagree are “in denial.”

Why can’t you just accept that currently there is no interpretation or theory that gives us an answer, and that the only thing we know for sure is that it’s none of the current interpretations in their current form, INCLUDING EQM.

Neil,

Both you and Travis are wrong. By insisting that the strong correlations unique to quantum mechanics must imply non-local effects, you are tacitly assuming these correlated observables are described by classical degrees of freedom. This is just as bad as assuming hidden variables. Instead, the fact that some observables don’t commute means correlations emerging from local interactions can be stronger than in classical physics. You are simply assuming reality has some fundamental classical component, and then inferring spooky action as a result. In actuality, quantum entanglement is just as natural in the context of the laws of quantum mechanics as the correlation between Bertlmann’s socks.
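To put a number on “stronger than in classical physics”: here’s a short numpy sketch (mine, not anyone’s official code) computing the CHSH combination for the spin singlet. Any local hidden-variable model is bounded by 2, while the quantum prediction reaches 2√2 at suitably chosen detector angles.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    # Spin observable (eigenvalues ±1) along angle theta in the x-z plane.
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2).
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(t1, t2):
    # Correlation <A(t1) B(t2)> in the singlet; analytically -cos(t1 - t2).
    return np.real(psi.conj() @ np.kron(spin(t1), spin(t2)) @ psi)

# Standard CHSH angle choices:
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the local hidden-variable bound of 2
```

The point is that |S| ≤ 2 is a theorem for any assignment of pre-existing ±1 values (classical degrees of freedom), so the quantum value of 2√2 is exactly the “extra” correlation that non-commuting observables make possible.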

Neil, if you believe I am mistaken, feel free to clear up such issues in detail. I’d prefer leading with an argument rather than with a comment on credentials.

That said, the correlation is in the state of the particles. When the detectors interact with the particles, their pointer states will in turn correlate with the state of the particles; that’s what a measurement is (ideally). With both detectors along the same axis, the correlation of the original state will be preserved. However, as you rotate the detectors’ orientations relative to one another, this correlation with the initial state decreases. When they are perpendicular, all information is lost, as a z +1/2 state could register as y +1/2 or y -1/2 with equal probabilities. This is a geometric effect. The correlation between detectors is due to the correlation of spin operators with each other: each pair of distinct operators from the orthonormal basis of spin operators has no correlation (as a spin z +1/2 state could be correlated with y +1/2 or y -1/2 with equal probability), while each operator has perfect correlation with itself. Any spin operators that are not perpendicular to each other have some intrinsic correlation.

The correlation between the detectors is at best reflecting the original state’s correlation. Off-angle correlations are a result of the embedding of this state in the geometry of the SU(2) spin group. There are no other sources of correlations here, so I will ask you to qualify what you mean by “strong” correlations if they are not initial correlations or the result of joint probability distributions of non-commuting operators (which is a geometric contribution).

I’m unclear on what MWI says about actions or experiments that can have multiple (more than 2) outcomes, or that have probabilities other than 50:50. For example, consider an experiment whose outcomes follow a bell-shaped statistical curve, or the collapse of the wave function of an electron, which has a non-zero probability of appearing anywhere in the universe. What does MWI say will happen?

Neil Bates: re your question, the brief answer IMHO is that the wavefunction is real and that detection involves something akin to the optical Fourier transform. You’re using one extended entity, an electron, to detect another, the photon. Detection at one slit transforms the photon into a pointlike entity at the slit, so it goes through one slit only. Everything else builds on top of that.

Briefly, re strong correlations: Bell and others worked through the math showing that local properties just can’t produce the kind of results that we get. That is math, and you can find the argument all over; you don’t need me for that. It’s your job to show that a long-standing and in-good-standing orthodox proof is “wrong” after all these years. Sure, I shouldn’t be dogmatic, but I am not convinced by alternative claims any more than by alternative claims about relativity or there being an aether or whatnot. As for credentials, I merely noted that I am *not* an expert but try to stay abreast of what the theory is saying.

Re: Bell’s theorem, people who think that Bell only showed that hidden-variable theories have to be non-local should try actually reading Bell. He’s quite clear about this, especially in his later papers, where he specifically addresses this long-standing misconception. “La Nouvelle Cuisine” (reprinted in the 2nd edition of Speakable and Unspeakable) is particularly accessible and clear. A careful, systematic review can be found here:

http://www.scholarpedia.org/article/Bell%27s_theorem

Travis, such an analysis misses the main thrust of why locality alone reaches the contradiction. The big problem is that the assumption that the orientation of the detectors does not affect the measurement is not warranted, because values depend on the set of commuting operators with which the system is characterized. This is the contextuality of the Kochen-Specker theorem. In most frameworks, locality implies noncontextuality, and you get Bell’s theorem. However, there are entire sets of interpretations of quantum mechanics that avoid the noncontextuality of the KS theorem and its connection to locality, like some modal interpretations: http://plato.stanford.edu/entries/qm-modal/

Daniel, I certainly do not agree that modal interpretations provide counter-examples to Bell’s claim that nonlocality is required for empirical adequacy, if that’s what you meant to suggest. (Of course, one can explain the correlations locally by violating the “no conspiracies” assumption, but that’s very hard to take seriously.) But this isn’t really the place to discuss that.

I still want to understand better what the ontology of Everettian QM is, according to our host.

PS, Daniel, I just read through some earlier comments including yours about the red/blue ball. You really need to read Bell’s papers (e.g., “La Nouvelle Cuisine”). It’s clear that you don’t understand what he claimed and on what grounds. You present — as if it refutes his conclusions — something that is a part of his ground-clearing setup.

Shodan, this forum supports LaTeX; please provide a detailed proof of why we need an i for unitary evolution, because if you cannot derive it, then you do not understand it. This forum exemplifies much of the drama in academia. Instead of helping people make logical sense of the theory, it quickly turns into a pissing contest over who is smarter. No one seems to love this subject; instead, everyone wants to pretend that they know what they are doing. Shodan, we cannot teach you if you don’t think that you have anything to learn. I don’t listen to what Sean Carroll says because I believe he is a great quantum physicist; I am more interested in how he thinks about general relativity and cosmology.

Travis, I mean only to point out that interpretations exist that conserve a notion of locality and are compatible with QM. Whether you take them seriously or not is immaterial provided their consistency. I don’t personally subscribe to them either, the interpretation I subscribe to would conclude that quantum is nonlocal from bell’s theorem. I accept this is not the only interpretation. If you have an explicit objection, I’m really willing to read it and discuss it. Telling me I don’t understand and providing no further clarification beyond that doesn’t really help either of us out.

The red/blue ball example was not meant to be literally analogous, but rather a toy example to illustrate the role of “distance” in the measurement. Like most analogies, though, it seems to have distracted from my point! What you’re calling “non-locality” would hold irrespective of a Poincaré or Galilei geometry. This “non-locality” can be derived whether there’s a speed of light or not; it can be derived even in the absence of an embedding geometry for these particles.

This “contradiction” is derived solely in SU(2) (the spin group of the Galilei group), which has no concept of locality that could be violated. It holds for any non-commutative Lie algebra, though; we just focus on the spin groups.

Now that I think of it, I have actually never seen a derivation of Bell’s theorem in SL(2, C), which is the spin group of the Poincaré group and thus the correct group in which to carry out this analysis. Since it contains SU(2)xSU(2) as a subgroup, and is in any case a rotation/spin group, I would expect the theorem to hold given the non-commuting basis of rotation operators, but there might be some differences. Do you know of any references that discuss this?

Anyway, since this result is a general feature of the non-commutative algebra of quantum mechanics completely tangential to embedding geometry, it is not hard to imagine that there are interpretations that conserve a sense of locality while still being consistent with this result.

‘The pilot-wave dynamics of walking droplets’

https://www.youtube.com/watch?v=nmC0ygr08tE

What waves in a double slit experiment is the dark matter.

‘What If There’s a Way to Explain Quantum Physics Without the Probabilistic Weirdness?’

http://www.smithsonianmag.com/smart-news/what-if-theres-way-explain-quantum-physics-without-all-probabilistic-weirdness-180951914/#JEoZGUo23dbMGJly.16

“Known as “pilot wave theory” this line of thinking goes that, rather than electrons and other things being both quasi-particles and quasi-waves, the electron is a discrete particle that is being carried along by a separate wave. What this wave is made of no one knows.”

‘Redefining Dark Matter – Wave Instead Of Particle’

http://www.science20.com/news_articles/redefining_dark_matter_wave_instead_of_particle-139771

“Tom Broadhurst, an Ikerbasque researcher at the University of the Basque Country (UPV/EHU), explains that, “guided by the initial simulations of the formation of galaxies in this context, we have reinterpreted cold dark matter as a Bose-Einstein condensate”. So, “the ultra-light bosons forming the condensate share the same quantum wave function, so disturbance patterns are formed on astronomic scales in the form of large-scale waves”.”

“This opens up the possibility that dark matter could be regarded as a very cold quantum fluid”

Travis,

Actually, it is clear that you are the one who does not understand quantum mechanics. As has been explained before, you are insisting that non-locality is essential because you are assuming classical degrees of freedom a priori and then attempting to construct an explanation for the strong correlations in terms of those degrees of freedom. Instead, you have to understand that in quantum mechanics, unlike classical mechanics, observables have a very different logical structure: they don’t commute. This is a fundamental feature of QM and does not need to be derived from some classical framework. In fact, the opposite is the case: classical degrees of freedom are an approximation, a limit of the rules of QM. In no way, shape, or form does the orientation of one detector affect the results measured by the second detector.

Once again, I point you to the very clear explanation of such correlations: Section 7 of this paper: http://arxiv.org/pdf/1308.5290v2.pdf
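As a toy illustration of the non-commutativity point (my own sketch, not taken from that paper), here the Pauli matrices stand in for spin observables:

```python
import numpy as np

# Pauli matrices standing in for spin observables (hbar = 1, factors of 1/2 dropped)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [sx, sy] = sx.sy - sy.sx is nonzero: these observables
# admit no simultaneous assignment of definite numerical values.
comm = sx @ sy - sy @ sx
assert np.allclose(comm, 2j * sz)  # [sx, sy] = 2i sz
```

The nonzero commutator is exactly the structural feature that has no analogue among ordinary commuting numbers.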

Shodan brings up a good point. Bell’s theorem can be derived as a consequence of the Kochen-Specker (KS) theorem, and one of the necessary assumptions of the KS theorem is that a value function can be defined on operators in a linear manner. This assumption, whether stated or not, is a kind of hidden-variable hypothesis. It is equivalent to assuming, in Bell’s theorem, that a joint probability distribution can be classically defined as if the two experiments were independent. Clearly the non-commutativity of the operators involved does not allow a reduction to a classically valued representation of their action.
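To make the “classical joint distribution” point concrete, one can brute-force the CHSH setup; this is my own sketch (not from any of the linked papers), checking every deterministic local value assignment:

```python
import itertools

# CHSH: each side has two settings. A local deterministic model assigns a
# fixed outcome +/-1 to every setting, independent of the other side's choice.
best = 0
for a0, a1, b0, b1 in itertools.product([1, -1], repeat=4):
    S = a0*b0 + a0*b1 + a1*b0 - a1*b1
    best = max(best, abs(S))

print(best)  # 2: every classical value assignment obeys |S| <= 2
```

Any theory whose statistics are a mixture over such assignments (i.e., any classical joint distribution) therefore satisfies |S| ≤ 2, which is the inequality in question.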

The infinite-dimensional Hilbert space is splitting into two separate universes; I don’t have long, please help me!

Shodan, I read sections 7 and 8 of your (?) paper. There is no explanation of the correlations there, just some confused and misleading talking points that in no way confront Bell’s actual argument. Read Bell.

Like many others, I still can’t get my head around the concept of the ‘observer’ as used here. Surely a ‘macroscopic’ object is simply a collection of microscopic objects, so I can see no clear distinction between the ‘apparatus’ (as the term is used here) and what it is observing.

How ‘big’ does ‘macroscopic’ have to be?

If the apparatus can be a video camera, then presumably it can be some molecular machine, in principle.

So could the apparatus be smaller than the superposition it is measuring?

For example could the superposition be something like a buckyball and being measured by an apparatus that is a molecular machine that is smaller than a buckyball?

Robin,

In terms of de Broglie wave mechanics and double solution theory ‘observation’ is another term for detection.

In de Broglie wave mechanics and double solution theory the particle is guided by an associated physical wave.

The more strongly the particle is detected, the more it loses its cohesion with its associated wave, the less it is guided by that wave, and the more it simply continues on the trajectory it was traveling.

Bell’s theorem is, of course, a rigorous deductive proof of his conclusion, given his premises. And his premises make perfect sense in a local realist theory. A lot of advocates of hidden variables, and various anti-quantum crackpots, may insist that Bell’s theorem casts a much wider net than this, but they are wrong. Bell himself was wrong on this, to be sure.

Most importantly, the notion of locality (or local causality) that Bell defines only makes sense in the first place if you assume that reality consists in some way of ordinary, commuting numbers that are localized to spacetime points. (e.g. the lambda in his definition of locality.) They may be unknown numbers, characterized by some probability distribution (hence the integral over lambda in Bell’s definition of locality), but they must be present. However, quantum mechanics repudiates this claim about reality in the first place, so Bell’s definition of locality makes no sense as a definition within QM itself.

There is, to be sure, a standard definition of locality in quantum mechanics. Operators rather than numbers are localized to spacetime points, so locality is defined by operators at spacelike separation commuting. This definition makes sense because causal, dynamical laws are encoded via commutation relations; e.g. [A, H] = i dA/dt. And relativistic quantum field theory does in fact obey this; it is perfectly local. (It is moreover covariant under Lorentz transformations, which is really *the* hurdle a theory needs to jump to satisfy special relativity.)
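The tensor-product version of that commutativity is easy to check numerically. The following is only a toy analogy of mine to the field-theory statement, with Pauli matrices standing in for observables attached to two separated systems:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Observables belonging to two separated systems act on different
# tensor factors of the joint Hilbert space.
A = np.kron(sx, I2)   # observable "here"
B = np.kron(I2, sy)   # observable "there"

# They commute, even though sx and sy themselves do not.
assert np.allclose(A @ B, B @ A)
```

This is the sense of locality at work in the microcausality condition: operators attached to spacelike-separated regions commute, whatever their algebra looks like within a single region.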

Quantum mechanics is Bell-nonlocal, to be sure; it fails to satisfy Bell’s notion of locality. But Bell’s notion makes no sense for QM, so this really doesn’t matter, either for QM to be conceptually sensible or for it to be compatible with special relativity. However, Bell’s notion of locality makes perfect sense for a theory of hidden variables, so the failure of any such theory that reproduces QM’s predictions to be Bell-local is actually an indictment of the hidden-variable theory. This is why physicists say Bell’s theorem rules out local hidden-variable theories, whatever Bell’s own opinion on the matter was.

Whatthestuff, could you say more about “double solution theory”? Is that a particular species similar to dB-B, or is it a name for the general category of such theories (particle and wave both present)? Note that everyone still debates all this; there is no clear consensus, or even a clear background framing to start from.

Shintaro: Sure, the meaningfulness of Bell’s theorem depends on the beables and on what kind of background framing you have. I think of it as a conditional proof: *if* you want to recover a “realist” QM and imagine local properties in order to explain the strong correlations, then you will fail. Sure, you can just blow it all off, say it’s mysterious, and decline to represent anything at all. Bell’s theorem is a conditional proof of limitations.

Re: Alanl @ June 30, 2014 at 7:41 pm

According to my layman’s understanding of Dr. Carroll’s post, the “splitting” into different worlds takes place such that there is consistency within each of the splits. There cannot be a single, non-superposed world in which observer A sees the cat dead and observer B sees the cat alive. Why? Because everything in each split-world is entangled (in a QM sense), or can be entangled, and there is no way to entangle (i.e., exchange information) between two inconsistent states. Your thought experiment is basically the same as Bell’s experiment (in which measurements on two entangled particles are always correlated, no matter the distance between them or which particle is measured first).

Re: conservation laws.

I cannot see how there can be any problem with conservation laws, as these can only be measured within each entangled world/universe, such as the one in which we all find ourselves having this discussion. In other words, if the multiverse had a conservation law such that energy was divided between universes at the time of a split, we would never have observed conservation of energy in our mutual (version of the) universe, and we would never have formulated the law to begin with. (Some of you might then argue that the version of MWI which applied to such a multiverse was wrong because it violated conservation of energy.)

Niels Bates,

de Broglie-Bohm theory is incorrectly named, as de Broglie disagreed with it; it should be referred to as Bohmian mechanics. In Bohmian mechanics the wave function is considered to be physically real. In de Broglie’s wave mechanics and double solution theory, there is the physical wave which guides the particle, and there is the wave function, which is a mathematical construct only and is used to determine the probabilistic results of experiments.

Everyone has a bias

Shintaro, You say that Bell’s notion of locality “makes no sense for QM”. I don’t even really know what you think you mean by that: surely Bell’s formulation “makes sense” (independent of any particular candidate theory) and it is easy to see that QM violates it. But let’s leave that aside. The fundamental point here is that, as Bell pointed out very clearly, the very issue of “locality” is about what a theory says is going on physically: “It is in terms of local beables that we can hope to formulate some notion of local causality.” A theory that says *nothing* about what’s going on physically (but is instead, say, exclusively about what “observers” will “experience”) is outside the domain of the locality/nonlocality issue. Similarly, a theory whose ontology is at best vague and muddled is not ready for prime time: you simply can’t tell whether a theory says that physical goings-on respect relativity’s supposed prohibition on superluminal causation, until it is crystal clear what a theory says the physical goings-on *are*.

If you want to claim that (ordinary) QM is local, you thus need to say clearly and precisely what it says is going on physically, i.e., what the local beables of the theory are, what the ontology is. So lay it out. Tell us what ordinary QM says exists, exactly, and what laws govern the existents’ behavior. Then (and only then) will it be possible to judge whether the theory is local or nonlocal.

I strongly suspect that you’ll balk, claiming that the very idea of specifying the ontology clearly is equivalent to endorsing hidden variables or is in some other way counter to the proper quantum spirit. In that event, we will just agree to disagree. But anybody should be able to see that as long as you refuse to say clearly what your theory says is actually happening physically in the world, your claim that the theory is “local” is, at best, hot air.

Everyone might have a bias, however, there are more and less correct explanations as to what occurs physically in nature. There is also the desire to relate general relativity and quantum mechanics.

‘Redefining Dark Matter – Wave Instead Of Particle’

http://www.science20.com/news_articles/redefining_dark_matter_wave_instead_of_particle-139771

“Tom Broadhurst, an Ikerbasque researcher at the University of the Basque Country (UPV/EHU), explains that, “guided by the initial simulations of the formation of galaxies in this context, we have reinterpreted cold dark matter as a Bose-Einstein condensate”. So, “the ultra-light bosons forming the condensate share the same quantum wave function, so disturbance patterns are formed on astronomic scales in the form of large-scale waves”.”

“This opens up the possibility that dark matter could be regarded as a very cold quantum fluid”

In de Broglie wave mechanics and double solution theory what waves is the very cold quantum dark matter fluid.

What ripples when galaxy clusters collide is what waves in a double slit experiment; the dark matter.

Einstein’s gravitational wave is de Broglie’s wave of wave-particle duality; both are waves in the dark matter.

Dark matter displaced by the particles of matter which exist in it and move through it relates general relativity and quantum mechanics.

I know the debate is the interpretation of the wavefunction, but this is a smart group who might be able to guide me on a different problem. The redshift and acceleration of galaxies away from each other is evidence of a force. However, of the four fundamental forces only two of them, the gravitational and electromagnetic force, have a range large enough to perturb galaxies. The electromagnetic force is the only force that we know of that has an infinite range and is also repulsive in nature. Have we done enough work to falsify an electromagnetic explanation? I expect down votes for this post because of course it is an inflationary dark energy from that explosion of space time and matter. But just for fun what if it is the electromagnetic force? How would you set up the problem?

@Random dx/dt tangent: The problem with using electromagnetism as a repulsive force between galaxies is that matter as a whole is electrically neutral; otherwise we would have been killed long ago by electrical shocks! All stable atoms and electromagnetic waves are neutral. The positive and negative charges have to be separated before any electrical device works.

Travis. I can certainly agree to disagree with someone who thinks that quantum mechanics, which has been successfully describing atomic, nuclear, particle, solid-state, and condensed-matter physics for nearly a hundred years, is “not ready for prime time.” Whether you like it or not, there is a notion of locality in quantum physics that does not require any assumptions about local beables (or “ontology” or “metaphysics” or other such nonsense) at all, and it works splendidly. Physics has no need to be held back by outdated pictures of reality, and the word “physically” certainly does not have to mean what Bell wants it to mean. (The way you use the word “physically,” just as the way you and Bell use the word “local,” is deliberately aimed to contradict quantum mechanics. Essentially, you’re begging the question in your dismissal of QM, so your argument is completely vacuous.)

JimV, I answered that defense about conservation of energy in an earlier comment. Sure, if you think that entire new universes peel off, then we could imagine that conservation laws only apply within each one, and so on. But remember, the usual depiction of MWI is that these are not really other “universes” anyway (proponents like to complain that’s an unfair caricature of their idea, and Everett did not call it MWI himself, right?). Rather, the wave function continues to evolve; it’s really all in the same universe. There is a supposed way to keep these branches from effectively interacting, which is IMHO fallacious (roughly, it wrongly equates optical-type “interference” with broadly causal “interference,” and the pivotal density matrix is confusingly pre-loaded with measurement-based probabilities). So even if the interpretation is valid, it all still happens in “the universe,” and all the same laws are supposed to apply.

But this really isn’t continued Schrödinger evolution anyway. Just imagine a beta particle (an electron) emitted towards many atoms or ions that could capture it. Each has a chance of grabbing it, so in MWI “all of them do.” But now we have a complete new electron orbital around each of multiple atoms. Multiple widely separated orbitals are not a legitimate subsequent state of evolution for a single electron!

No, either the measurement is just as mysterious as ever, since it must create entire new multiple effective localizations of charge, mass-energy, etc., or the description fails to measure up (so to speak) to what continued Schrödinger evolution should imply: the spreading of the interactibility of only one unit of mass, charge, etc. over a wide area, until it can be localized later by some kind of interaction/measurement.

Travis,

Simply asserting that sections 7 and 8 are “confused and misleading talking points” is an empty assertion. As I have pointed out more than once already, when you say the correlations are not explained without non-local interactions, what you are really saying is that without non-local interactions, the correlations in QM cannot be derived from a classical framework. But you have not given a reason that compels us to adopt the classical framework as the more fundamental one. If QM is the more fundamental framework, then you simply have correlations arising from local interactions that conform to different commutation relations.

Shintaro, As I expected then. For the record, I don’t reject or even dislike ordinary QM. I’m just honest about what it is — namely, an algorithm for predicting experimental statistics which (like, say, Ptolemy’s model of the solar system) has been very successful and is undoubtedly a great achievement. At best, though, its account of what’s going on physically at the micro-level is (to use Bell’s phrase) “unprofessionally vague and ambiguous”. Your attitude seems to be that this is a good thing, and that the very desire to provide a clear physical explanation of things is outmoded, “metaphysical”, etc. If that’s right, then I guess the thing to say is that we have a philosophical disagreement: I think that it is the proper aim of physical theories to give realistic physical explanations, whereas, well, you don’t. So be it. But what I don’t understand is why it’s so important to you to be able to claim that your theory is “local”. The very notion (at least, the one at issue here) is “metaphysical” in precisely the sense that you seem to despise.

Of course, you want to switch to a different issue and talk about “locality” in the sense of local commutativity and/or no signalling. It’s uncontroversial that QM is “local” in this sense. But so is, for example, Bohmian Mechanics. Would you say that Bohmian Mechanics is therefore (like ordinary QM) local and fully compatible with relativity?

Shodan, You’ll have to explain what you mean by “classical”. I think you are trying to suggest that, by demanding a “classical framework”, I am stubbornly and irrationally insisting on a refuted, empirically inadequate physical theory (as in “classical mechanics”). But that’s just not true. I agree that classical mechanics does not provide the correct description of the micro-level, and I certainly do not restrict the scope of theories I consider to those including “F=ma”. On the other hand, what you apparently actually *mean* by “classical” in your polemical remarks is something like: the expectation that physical theories should provide clear physical explanations. But that is not something to be embarrassed about. Probably, for you, these are the same issue and you won’t really understand my point. But if you somehow became convinced at some point that, *because* F=ma doesn’t provide a correct description of electrons, it is therefore impossible in principle to explain their behavior in any clear and comprehensible way, and we must therefore give up on even trying… well, you got swindled.

Travis, when Shodan, Shintaro, or I speak of “classical,” we’re not referring to Newtonian mechanics; we’re referring to the probability theory. Integral to Bell’s theorem is the assignment of a classical joint probability distribution. The “classical” assumption hidden in this assignment is that one can give operators a numerical evaluation, and that evaluation then forms the basis of the probability analysis that follows. This is the assumption of non-contextuality, and as I’ve repeatedly insisted, I suggest you familiarize yourself with Kochen-Specker, which is sufficient to derive Bell’s theorem in the specific case of spin. Bell’s theorem implicitly replaces the algebra of operators with the algebra of numbers in the probability analysis; this is the reason the inequality does not hold.

Daniel et al, I’m perfectly familiar with contextuality, K-S, etc. But all of this is simply irrelevant to the question of whether locality is consistent with empirical data. Locality alone implies a Bell inequality. So we know that locality is false. Now many, many other things are also true: locality implies (in certain situations) non-contextuality, locality implies determinism, the existence of joint distributions implies a Bell inequality, and on and on. But none of these other things could possibly refute the simple demonstration that locality -> Bell inequality. In my (rather extensive) experience with people who seem to share your views, the argument basically goes like this: “Bell says locality implies the inequality; but aha! — we can also derive the inequality from some other premises; so therefore there is no need to reject locality in the face of the empirical data, but we can instead reject some of those other premises.” But this is just a juvenile logical fallacy. Locality implies Bell’s inequality; experiment shows Bell’s inequality is false; so locality is false. Perhaps there are other interesting conclusions to draw as well, but doing so cannot possibly undermine something that is already established.
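For concreteness, the quantum prediction that the experiments confirm is easy to compute. Here is a toy numerical sketch of mine, using the standard singlet-state correlation E(a, b) = -cos(a - b) and the usual CHSH-optimal settings:

```python
import math

# Singlet-state correlation for spin measurements along directions a, b (angles)
def E(a, b):
    return -math.cos(a - b)

# CHSH-optimal measurement angles
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, -math.pi / 4

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local bound of 2
```

Since every Bell-local theory keeps |S| at or below 2, the observed value of about 2.828 is precisely the empirical refutation of locality in Bell’s sense.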

But this isn’t really the place to discuss this in detail. My views on Bell and nonlocality are readily available in my various papers, including the “scholarpedia.org” article on Bell’s theorem that I linked to above and which contains some discussion of these exact points. Anybody who is interested is encouraged to look at those and (as I said before) to read Bell’s own papers, which are a model of both accessibility and clarity.

I’m still hoping — perhaps in vain at this point — that we can return to the original topic and hear from Sean C about his views on the ontology of Everettism. Sean??!??

Travis, that is a rather simplistic reduction of the axioms of a formal argument to informal language. Locality, as it is assumed in defining the probability distributions, implies non-contextuality. Assuming locality of the operators, and not of the elements sampled from the spectra of the operators, will not lead to the formulation of Bell’s inequality. You have yet to show a proof in which locality of the operators alone is enough to derive Bell’s inequality. I assure you that you cannot do it without assuming the non-contextuality of the detector arrangements and the resulting probability distributions.

I think the sea of comments is rather polluted, and if you raised your questions earlier on, I don’t see how repeating them will raise the likelihood of Sean addressing them. I too would like to know his answer.

I’ll read whatever else you guys want to say here, but I’ve said my piece and will leave it at that.

Re: Neil Bates @ July 5, 2014 at 9:46 am

“JimV, I answered that defense about CoE in an earlier comment. Sure, if you think that entire new universes peel off then we could imagine that conservation laws only apply in each one and so what etc. But remember, the alleged depiction of MWI is that these are not really other “universes” anyway …”

Thanks for the reply. Yes, that is what I think, based on the post here, and the terms “many worlds” and “multiverse”. That is, there is one wave function which evolves over time, but it causes the universe we perceive to bifurcate continuously. Or rather, that this interpretation is “correct” in the sense that it incorporates all known data in a way of thinking, or model, which makes sense to those who use it and therefore gives them a useful physical intuition about QM. (Whether the universes are “real” or not is not meaningful since we cannot interact from one to another.)

If some people mean something else by the MWI, I don’t know what that is, but wonder why they chose to use the terms “MW” and “multiverse”.

Problems with electrons choosing different orbits also seem to be non-problems to me with this interpretation, since, again, the different versions cannot interact. And as Dr. Carroll said, the problem stems from the superposition principle of QM itself; as he goes on to say, the MWI (as I understand it) eliminates that conceptual problem rather than causing it. One can have superposition and understand it too (possibly incorrectly, of course).

If the particle is *always* detected entering, traveling through, and exiting a single slit in a double-slit experiment, then why wouldn’t you be able to understand that the particle travels through a single slit even when you don’t detect it?

‘Redefining Dark Matter – Wave Instead Of Particle’

http://www.science20.com/news_articles/redefining_dark_matter_wave_instead_of_particle-139771

“This opens up the possibility that dark matter could be regarded as a very cold quantum fluid”

Particles of matter travel through a single slit even when you don’t detect them.

It is the associated wave in the dark matter which passes through both.

Sean, I take issue with your equation 2. You have the apparatus, a macro system, in a superposition of states. But isn’t this what Schrödinger’s thought experiment tells us is impossible? If you don’t believe the cat is simultaneously alive *and* dead when placed in the enclosure with the radioactive source attached, then you cannot believe in macro superpositions. But equation 2, which represents a macro superposition, is critically important to your MWI. So I have to conclude that your argument for the MWI is fatally flawed. The only fallback position I see is to assert that macro superpositions exist but decohere quickly. How quickly? Probably far more quickly than the time it takes to write equation 2. I am interested in your response to my comment.

Wow! I didn’t know about the pilot wave theory. That Wired article is fantastic.

Could this be the nature of that wave? http://onlyspacetime.com. All the math can be skipped; it’s easy to understand.

TrustworthyWitness,

It just keeps getting better once you realize a moving particle has an associated physical wave.

For example, take wave function collapse. What this actually refers to is the cohesion between the physical particle and its associated physical wave.

The more strongly the particle is detected, the more it loses its cohesion with its associated wave, the less it is guided by that wave, and the more it continues on the trajectory it was traveling.

For an analogy, consider a double slit experiment performed with a boat. The boat travels through a single slit and the bow wave passes through both. As the bow wave exits both slits it alters the direction the boat travels as it exits a single slit.

Now, in order to detect the boat, buoys are placed at the exits to the slits. As the boat exits a single slit it gets knocked around by the buoys, loses its cohesion with its bow wave and continues on the trajectory it was traveling.

What is referred to as wave function collapse is now intuitively understood to be a particle losing its cohesion with its associated physical wave.

The question then is, what waves?

Dark matter is now understood to fill what would otherwise be considered to be empty space. It is also now understood to be a very cold quantum fluid that waves.

http://www.science20.com/news_articles/redefining_dark_matter_wave_instead_of_particle-139771

“This opens up the possibility that dark matter could be regarded as a very cold quantum fluid”

In a double slit experiment it is the dark matter that waves. Particles of matter move through and displace the dark matter; analogous to the bow wave of a boat.

Now is the really cool part. Dark matter displaced by the particles of matter which exist in it and move through it relates general relativity and quantum mechanics.

‘Hubble Finds Ghostly Ring of Dark Matter’

http://www.nasa.gov/mission_pages/hubble/news/dark_matter_ring_feature.html

“Astronomers using NASA’s Hubble Space Telescope got a first-hand view of how dark matter behaves during a titanic collision between two galaxy clusters. The wreck created a ripple of dark matter, which is somewhat similar to a ripple formed in a pond when a rock hits the water.”

The ‘pond’ consists of dark matter. The galaxy clusters are moving through and displacing the dark matter, analogous to the bow waves of two boats which pass by each other closely. The bow waves interact and create a ripple in the water. The ripple created when galaxy clusters collide is a dark matter displacement wave.

What ripples when galaxy clusters collide is what waves in a double slit experiment; the dark matter.

Folks, it’s a cute idea that dark matter (I’ll call it “DM”; be careful, since it can also mean “density matrix,” an important mathematical representation in QM) could be the “carrier” of the waves causing interference. However, there are various problems. One is that the density of DM varies from point to point (it’s not the dark energy, which is apparently intrinsic to space-time itself). It has been mapped, and we know this from its gravitational effects. But quantum experiments are supposed to illustrate a fundamental character of matter that should not turn out differently depending on just where your lab is.

Another is that photons, as part of the EM field, already have their own field, which in the classical limit is what does the diffracting and interfering. To add DM waves or pilot waves on top of that is a great kludge and confusion. If we imagine that photons really are little BB-like pellets, how would a radiating atom emit them in a particular direction, when the oscillation of its orbitals is supposed to spread out in all directions (well, in a shaped distribution, kind of like microphone sensitivity contours)? Indeed, there isn’t a clear explanation of what causes the pilot waves to, e.g., be emitted from a nucleus along with a decay particle. But at least Bohmian mechanics is easier to try to make consistent with the actual statistics than MWI, because in MWI the structure of the superpositions is inherently inimical to Born probabilities.

Folks, the Milky Way’s halo is not a clump of stuff traveling along with the Milky Way.

The Milky Way is moving through and displacing the dark matter. This is why the Milky Way’s halo is lopsided.

‘The Milky Way’s dark matter halo appears to be lopsided’

http://arxiv.org/abs/0903.3802

“The emerging picture of the asymmetric dark matter halo is supported by the \Lambda CDM halos formed in the cosmological N-body simulation.”

The Milky Way’s dark matter halo is lopsided due to the matter in the Milky Way moving through and displacing the dark matter.

This is the same physical phenomenon which is occurring in a double slit experiment.

@kashyap vasavada July 5, 2014 at 9:22 am

That is a very astute observation; thank you for your reply and for your insight on this tough problem. Would you be able to help me a little more? What do you suppose the velocity (speed) of an electron (beta) particle is relative to that of the proton during beta decay? Also, have we considered the other electromagnetic possibilities? If the galaxy, stars, and matter falling into black holes are spinning, would that create an overall magnetic field for a galaxy? How would you go about falsifying this approach?

I have read a little about neutrons and they do not seem to interact with the electromagnetic field. Could neutrons be a source of dark matter? Are there other types of particles emitted during beta decay that also do not interact with the electromagnetic field? Would you be able to explain conservation of charge to me?

Neil, thisisthestuff or anyone else:

I have been reading about this so-called “wavefunction” (or is the correct spelling “wave function”?). I’m not sure I understand how it works. What exactly is the debate? From what I have seen, we have some ideal physical setup and there is a superposition (that means addition, right?) of multiple values of this wave thingy that satisfies the Schrodinger equation. We may have one, two or perhaps even an endless number of these funky-looking solutions to something called a differential equation.

Do we call each a state and is it also called an eigenstate? What is the observable that the equations above represent? Do we call the value of what we are observing an eigenvalue? If this wavefunction collapses then is it in only one of three possible states with the value associated to that particular state?

The argument I am presenting is that what is considered to be wavefunction collapse is the loss of cohesion between the particle and its associated physical wave.

The stronger you detect the particle, the more you destroy its cohesion with its associated wave; the less the particle is guided by its associated wave, the more the particle continues on the trajectory it was traveling.

The great divide here is between instrumentalists and realists. For an instrumentalist, physical models have no truth value, only utilitarian value for prediction. I call this the anti-science position, I think with some justification. Science is not engineering, where this kind of attitude would make sense. Science is not just about utility, in my opinion; it’s about understanding nature, even if it’s true that our scientific models are always destined to be very imperfect pictures of reality.

Deutsch sums it up nicely in the quote below. Whether or not there really are other worlds (there are most certainly other histories, if our cosmological models are correct), Everett got it right in terms of the relationship between the quantum and classical worlds. Bohr and company produced a lot of silly rubbish which inhibited real science with regard to understanding this relationship for over 30 years. Bohr is not the great hero of physics; he is the physics version of Lysenko, his disciples often using similar tactics to uphold the Copenhagen dogma, short of the Gulag. (See “The Many Worlds of Hugh Everett III” by Peter Byrne.)

____________________________________

“Everett was before his time, not in the sense that his theory was not timely: everybody should have adopted it in 1957, but they did not. Above all, the refusal to accept Everett is a retreat from scientific explanation. Throughout the 20th century a great deal of harm was done in both physics and philosophy by the abdication of the original purpose of these fields: to explain the world. We got irretrievably bogged down in formalism and things were regarded as progress which are not explanatory, and the vacuum was filled by mysticism and religion and every kind of rubbish. Everett is important because he stood out against it, albeit unsuccessfully; but theories do not die and his theory will become the prevailing theory. With modifications.”

_____________________________

End Quote

@ Random dx/dt Tangent:

I commend you for your desire to understand modern physics. But there is a lot of technical stuff you will have to understand. I would suggest starting from a simple book on modern physics from a public library. If there is a nearby campus where physics is taught, that will also help. I will try to answer some of the questions, but extensive discussion would need a blog by itself!

(1) In beta decay, nucleus (i) -> nucleus (f) + electron + antineutrino, the velocity of the electron varies because it is a three-body process involving energy and momentum conservation. It can be very high, close to the velocity of light, depending on the situation.

(2) For repulsion between galaxies, people have considered all sorts of electromagnetic processes. There are magnetic fields in galaxies, but they do not help with repulsion. Dark energy is a kind of last resort!

(3) Neutrons do not have a net charge. They do behave like tiny magnets, though. People are doing extensive experiments with them. They are definitely not part of dark matter.

(4) Charge conservation, and why charges come in integral multiples of the electron charge, is a great fundamental problem. There are some models based on grand unified theories, but they are not complete yet.

Hope this will help.
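To put a rough number on point (1) above, here is a minimal sketch (my own illustration, not from the thread; the 1 MeV kinetic energy is just a typical beta-decay scale, and 0.511 MeV is the electron rest energy):

```python
import math

def beta_speed_fraction(kinetic_energy_mev, rest_mass_mev=0.511):
    """Speed of an electron as a fraction of c, from special
    relativity: gamma = 1 + KE/(m c^2), v/c = sqrt(1 - 1/gamma^2)."""
    gamma = 1.0 + kinetic_energy_mev / rest_mass_mev
    return math.sqrt(1.0 - 1.0 / gamma**2)

# A beta electron carrying ~1 MeV is already highly relativistic:
print(round(beta_speed_fraction(1.0), 3))   # -> 0.941
```

So even modest beta energies put the electron within a few percent of the speed of light, consistent with kashyap's "close to velocity of light" remark.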

@Hoarse Whisperer:

I will try. But to understand it really, you will have to pick up a book on quantum theory, or at least modern physics. There could be a system (say an atom) with the three separate energy levels you mention. Each of them would be an eigenstate, i.e. a solution of the Schrodinger equation with that particular energy. You can prepare a state in the lab which is a superposition of the three states with arbitrary coefficients. According to the Born rule, the absolute square of each coefficient gives the probability of finding the energy value to be E(1), E(2) or E(3). You will never measure the energy to be anything other than those three. After the measurement, the system has that particular energy and it is no longer a superposition. This is the so-called collapse of the wave function. Before measurement it obeys the Schrodinger equation. In spite of 90 years’ debate, nobody understands what happens during the measurement. Sean is convinced that there are three universes, or at least three branches of our universe, in which each case happens. I am not convinced. Actually I am in good company: even as great a physicist as Weinberg does not like any current interpretation. I understand he is looking for an alternative. Hope this brief summary helps.
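The Born-rule bookkeeping just described can be written out in a few lines (a sketch; the complex coefficients are made up purely for illustration):

```python
import numpy as np

# Hypothetical superposition |psi> = c1|E1> + c2|E2> + c3|E3>
# with arbitrary (made-up) complex coefficients.
c = np.array([1.0, 1.0j, -1.0 + 1.0j])
c = c / np.linalg.norm(c)            # normalize the state

born_probabilities = np.abs(c)**2    # Born rule: P(E_i) = |c_i|^2
print(born_probabilities)            # -> 0.25, 0.25, 0.5
print(born_probabilities.sum())      # probabilities sum to 1
```

Only E(1), E(2) or E(3) can ever be measured; the coefficients just set how often each appears over many repetitions.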

Bob, you can make normative statements about how science ought to be as much as you like, but if only its utility is logically/empirically justified and not its ability to produce a fundamental, ontological understanding of nature, I think we ought only operate under the assumption it is useful and not that it’s “true.” You could say I’m an empiricist like that.

Daniel Kerr says:

July 7, 2014 at 8:47 am

Bob, you can make normative statements about how science ought to be as much as you like, but if only its utility is logically/empirically justified and not its ability to produce a fundamental, ontological understanding of nature, I think we ought only operate under the assumption it is useful and not that it’s “true.” You could say I’m an empiricist like that.

__________________

I would never expect science to give any absolute ontological understanding, and all our knowledge, to be reliable, must of course be based on empirical inquiry and any logically supported consequences that result. But this doesn’t mean it makes sense to take the position that our successful models have nothing to say about ontology. Bohr took physics down a very unproductive path with regard to the relationship between the quantum and classical worlds; it took over 30 years to recover from the incoherent viewpoint of Copenhagen.

Bob: I don’t think the point (“the point”, even if Bohr et al made some overly strong generalizations about it) of CI was to treat all models as merely heuristic. Rather, the issue was the specific difficulty of finding a reasonable model or “picture” of things in the case of quantum phenomena. It really is difficult to reconcile what we really observe (which of course includes the specific probabilities, not just the bare fact of exclusive outcomes) with the wave function which is postulated as an explanatory bridge between actual observations. Remember that we can’t really detect the various alleged amplitudes of a single WF in space, as if we were verifying Maxwell’s equations. We think that “must be” the way it is in order to explain results, but that already works back from the need to reconcile extension-based interference based on ensemble detections with the finding of “the particle” (or photon …) as detected at a given spot later. They did the best they could; trying to picture the situation just doesn’t work out well, for familiar reasons. That is the key, not an a priori attitude about philosophy of science. You talk about recovering from their “incoherent viewpoint”, but the purported MWI alternative doesn’t even comport well with the most basic task of predicting the squared-amplitude probabilities we find. Note that the “structure” of the WF superpositions and distributions is inherently inconsistent with amplitude dependency. Various preposterous kludges are proposed, such as “thicknesses” and various other widely criticized dodges of the fundamental measure problem here. Nothing wrong with *trying*, it’s just a lousy attempt – and no more productive than CI, is it? (No particular alternative predictions.) It comes down to: the universe just doesn’t want to “play ball”, you might say, however much certain people think it just “ought to” … (why should it?) – as noted in his own words by Richard Feynman.

I agree the Copenhagen interpretation has done damage to physics understanding in general. Talk to any physics undergraduate or graduate student who does not work with quantum physics and the damage is self-evident. But I don’t think the damage originated in the Copenhagen view on the ontology of QM, but rather in their intuitive compromises to make a seemingly non-ontological theory connect with our intuitive notion of physical ontology.

I agree that we should treat our models as if they’re true, as if they’re ontological. It makes sense to treat the Higgs boson as a real object rather than an abstract concept organizing disjointed sensory data. But once we start talking about interpretations of quantum mechanics that imply a specific ontology and are unconstrained by empirical results, then I think it’s a more dangerous game to play. I’d rather assume the theory is completely non-ontological and choose the interpretation with the simplest ontology rather than make any normative statements on what that ontology should be (with the exception that it ought to be physically consistent, obviously).

I think the ensemble interpretation is the minimum ontology interpretation with some consistent histories-type approach connecting it to classical intuition. This is rejected by some because the wavefunction itself is not considered to be ontological, but given its clear probabilistic properties, I think that’s acceptable.

Response to Neil Bates

______________

I think you’re ignoring all the progress actually made in understanding the classical-to-quantum transition, once it wasn’t career suicide to question the holy writ from Denmark. I really wish everyone interested would read Peter Byrne’s book “The Many Worlds of Hugh Everett III”; I have quoted some of it in this forum. Here’s a good quote.

______________________

From “The Many Worlds of Hugh Everett III” by Peter Byrne

Hartle and Gell-Mann credited Everett with suggesting how to apply quantum mechanics to cosmology. They considered their “decohering sets of histories” theory an extension of his work. Using Feynman path integrals, they painted a picture of the initial conditions of the universe when it was quantum mechanical. Their method treats the Everett “worlds” as histories, giving definite meaning to Everett branches. They assign probability weights to possible histories of the universe and, importantly, include observers in the wave function.

Hartle declines to state whether or not he considers the branching histories outside the one we experience to be physically real, or purely computational. And he says that “predictions and tests of the theory are not affected by whether or not you take one view or the other.”

Everett, of course, settled for describing all of the branches as “equally real,” which, given that our branch is real, would mean that all branches are real.

Conference participant Jonathan J. Halliwell of MIT later wrote an article in Scientific American, “Quantum Cosmology and the Creation of Universes.” He explained that cosmologists owe Everett a debt for opening the door to a completely quantum universe. The magazine ran photographs of the most important figures in the history of quantum cosmology: Schrodinger, Gamow, Wheeler, DeWitt, Hawking, and Hugh Everett III, a student of Wheeler in the 1950s at Princeton who solved the observer-observed problem with his many-worlds interpretation.

End Quote

Daniel Kerr says:

Bob, you can make normative statements about how science ought to be as much as you like, but if only its utility is logically/empirically justified and not its ability to produce a fundamental, ontological understanding of nature, I think we ought only operate under the assumption it is useful and not that it’s “true.” You could say I’m an empiricist like that.

______________

Utility is fine for engineers, but I don’t find it sufficient for scientists. Of course we mustn’t be too naive about how close our models get to objective reality, but if utility were really the only criterion, a lot of science wouldn’t be done.

Bob, that apparent “progress” about the transition really isn’t. I and others have gone over why the decoherence argument is fallacious (again, briefly: it comes from wrongly equating optical-type “interference” with broadly causal “interference,” and the pivotal density matrix is confusingly pre-loaded with measurement-based probabilities). And if there is no alternative prediction (other than the implied *wrong* prediction of other than the actual Born statistics), what’s the point? Try reading up on the various critiques. The cosmology issue has no clear grounding; it’s all speculative as far as the connection to various interpretations goes. Finally, saying “Everett … solved the observer-observed problem with his many worlds interpretation” is unwarranted hagiography about a deservedly controversial claim.

Bob, I believe we should strive for it, and a good theory should aim to achieve it, but we shouldn’t treat our interpretations as if they’ve reached such a status unless they do. If I didn’t believe this was a good goal I wouldn’t be concerned with interpretations now, would I? Haha, so obviously I agree with you in spirit.

As for many worlds, treating each world as ontologically “real” doesn’t make any sense to me when we have the formalism of modal logic to categorize such worlds. However, I’m still unclear whether many-worlds proponents argue: 1) each universe in the multiverse has a defined state which contributes to the overall multiverse superposition, or 2) each universe in the multiverse samples dynamical values from the distribution described by the multiverse wavefunction. The two arguments are really quite different, as the former has the preferred basis problem while the latter does not. The latter is quite a different theory, though, as the states would have to be emergent from the dynamical values that index each universe.

Sean Carroll says: “If the particle can be in a superposition of two states, then so can the apparatus.”

Why?

If something is in a superposition of two possibilities, then so can be something else? Where is the logic in that? Maybe, sometimes, indeed, depending. But not in general. Moreover, we are talking here about things of completely different natures: the particle is being measured, and it’s measured by an “apparatus”.

One is Quantum, the particle. The other is classical, the “apparatus”. In other words, Sean insists that Quantum = Classical.

Sorry, Sean. Science comes from the ability of scindere, to cut in two, to make distinctions.

An interesting aside: the philosopher Heidegger, maybe inspired by some of the less wise, contemporaneous statements of Bohr and his followers, insisted that the distinction between “subject” and “object” be eradicated. Unsurprisingly, he soon became a major figure of Nazism, where he was able to apply further lack of distinctions.

It seems to me that confusing the subject, the “apparatus”, and the object, the particle, is a particular case of the same mistake.

Neil Bates writes

Bob, that apparent “progress” about the transition really isn’t. I and others have gone over why the decoherence argument is fallacious (again, briefly: from wrongly equating optical-type “interference” with broadly causal “interference,” and the pivotal density matrix is confusingly pre-loaded with measurement-based probabilities.)

_____________

Sorry but once the claim is made that the empirically supported theory of Decoherence is fallacious, there’s really nothing left to discuss.

Bob, it’s not that decoherence doesn’t happen or that it isn’t sufficient to explain how classical probabilities result from quantum probabilities, it’s that it isn’t necessary. See this paper for example: http://link.springer.com/article/10.1007/s10701-008-9242-0#page-1

@kashyap vasavada

July 7, 2014 at 4:52 am

1) How confident are you that we have eliminated an electromagnetic source as a possible mechanism for repulsion between galaxies? We do have radio telescopes, but have we analyzed the entire electromagnetic spectrum in fine detail?

2) I have read about pulsars; however, is it possible to observe a single neutron in space? Would you be able to explain angular resolution to me?

3) Why is everyone chasing this grand unified theory? I understand that previous forces have been unified in the past, but I do not drive to my local college campus physics library using my rear view mirror.

4) Why is there more matter than antimatter? If there is an equal chance between matter and antimatter then why would one type exist instead of a universe of only radiation and no matter or antimatter?

Many worlds may be right… but anyone who subscribes to the Copenhagen interpretation needs to think about the moment of conception. No one is present, there is no observer; the egg and the sperm are so tiny, there are millions of sperm and only one gets inside the egg, and from then on things unfold without an observer present, and it has been that way since the dawn of time, long before any of today’s technology. So where is the wave function collapse happening, and how?

The point being, no one has any clue that something has happened for quite some time… days, weeks! And whatever happened inside the womb developed through cell division, one step at a time, in an extremely complicated process from specific instructions… Things are on autopilot, except for nourishment! So somehow I really doubt that any quantum mechanics interpretation other than MW can allow for this, and even MW could face some legitimate questions in this situation! This despite the fact that DNA itself is now known to be rooted in complex quantum effects in the double helix!

I have one question, which has already been asked: how do mass and energy conservation laws fit into this? Where does the mass come from each time a universe splits? Is anybody willing to give a short idiot-level answer to this, please?

What if a subsystem of the universe makes a phase transition? There ain’t no superposition of phases (see superselection rules, etc.)

And what if any observational act is related to a phase transition? (There is even a brain model based on this idea.)

Bob Zanelli, you wrote:

“Sorry but once the claim is made that the empirically supported theory of Decoherence is fallacious, there’s really nothing left to discuss.”

Yes, there is: whether you knew what the claim was in the first place. I think you are confusing whether decoherence exists, and the relational effect it has on outcomes per se, with the argument of the decoherence interpretation, which purports to explain that, and why, decoherence leads to exclusive outcomes and states that can’t interact at all with each other. But the argument does not accomplish the latter task; that is what I meant (which should have been clear from both context and details). Advice: if you think someone means to deny actual established facts, consider that you misunderstand them instead. (The exception being MWI supporters who claim there are states existing that we can’t possibly “find”.)

@Random dx/dt tangent: As a retired physics professor I would of course enjoy explaining whatever physics I know to inquisitive people. But there is only a limited amount one can explain without equations and without a chalkboard. So the best recourse for you may be the nearby campus.

The answer to your question about why there is more matter than antimatter is largely unknown, although there are models based on CP violation. In fact, if someone can figure this out, there is a guaranteed Nobel prize waiting for him/her.

“Answer to your question about why there is more matter than antimatter is largely unknown, although there are models based on CP violation. In fact if someone can figure this out, there is a guaranteed Nobel prize waiting for him/her.”

Our universe is a larger version of a black hole polar jet.

Matter is moving outward and away from the Universal jet emission point.

There is directionality to matter in the universe which refutes the big bang and is evidence of the universal jet.

There is a spin about a preferred axis which refutes the big bang and is evidence of the universal jet.

Dark energy is dark matter continuously emitted into the universal jet.

It’s not the big bang it’s the big ongoing.

Everett told his son to throw him out in the garbage when he dies. At least we know that is true.

But what would also be true is that there is an infinite number of Carrolls that, at this very moment, would look at the same world we live in and then come to think the multiverse is absurd, making this Carroll’s opinion pointless. If that’s the price you need to pay to support your worldview, I say get out the hefty bags.

The MWI makes sense in many ways, especially given the notion of a pointer basis. But it seems like the notion of probability is not entirely well-defined either. Basically, you need to have some kind of measure on the space of universes, and that seems to be quantum++ (if you will). I am willing to buy quantum++, but it would seem to require an extra assumption with regard to measurements. Once that happens, it is not clear why MWI is better than, say, assuming that the universe is fundamentally probabilistic and that these probabilities are naturally complex. Then wave function collapse is just the process by which a “35% chance of a spin-up photon” becomes “that photon was either spin up or spin down.”

To people that actually know about this: is there a natural way to interpret probabilities without additional assumptions in MWI?

Cmt, others can speak for themselves, but here is my take on your question: the proper Born statistics surely do not come naturally out of MWI. It’s not a neutral “mystery” for it. The structure of continuing superpositions is inherently inimical to frequentist Born probabilities. The “structure” is the same regardless of the amplitudes, but the amplitudes squared are the actual basis for probabilities.

Well, you can claim that wrong stats aren’t the “necessary” outcomes of that “interpretation” (and, with those implications, it really counts as a “theory” after all ….), but they are the straightforward suggested outcome in advance of twiddling with contrivances. Some of the latter are clunky and baffling, like pretending that the more frequent stream is “thicker” or “more real”, whatever that means. If MWI needs all that trouble and still can’t make a decent case for the most fundamental trait of quantum statistics, then it should be considered suspect until a clear and convincing case can be made that it does really predict BPs.

BTW I think I had a cookie glitch, accounting for the alternative avacon (that may be a silly Neilogism; avatar-icon, but thought I’d try it out. Note, it is also the name of a company, a very interesting one.)
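The amplitude-versus-branch-count point above can be made concrete with a toy simulation (my own sketch, not anyone's actual model; the real amplitudes 0.6 and 0.8 are made up): naive branch counting gives 50/50 no matter what the amplitudes are, while the observed frequencies track the squared amplitudes.

```python
import random

# Toy two-outcome measurement: |psi> = a|up> + b|down>,
# with made-up real amplitudes satisfying a^2 + b^2 = 1.
a, b = 0.6, 0.8

# Naive branch counting: one "up" branch, one "down" branch,
# hence 1/2 each, independent of the amplitudes.
branch_count_prob_up = 1 / 2

# Born-rule frequencies from repeated sampling:
random.seed(0)
n = 100_000
up_hits = sum(random.random() < a**2 for _ in range(n))

print(branch_count_prob_up)   # 0.5 no matter what a and b are
print(up_hits / n)            # close to a**2 = 0.36
```

The gap between the two numbers is exactly the "measure problem" being argued over: any Everettian account has to explain why observed frequencies follow the second line rather than the first.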

Dr. Carroll,

I’ve read and much enjoyed your “From Eternity to Here” and I found myself agreeing (in a presumptuous layperson’s way) with your conclusions about Many-Worlds. But after reading this post, and today observing a tree in a fairly strong wind with the many leaves twisting this way and that, I wonder if it is necessary for a new world to be created for every alternate possible twist of the many leaves? If so, then the number of worlds must be infinite, and infinite the number of particles, since every “occurrence” could be considered as “observed”.

Here’s another complication: “the wave function” we are able to project the evolution of, is itself based on a previous measurement/collapse/preparation event. So then, what is the entire “original” WF? If we consider a new sub-WF for each apparent “observation”, do we then have “sub-sets” of associated WFs, and doesn’t that spoil both the idea of the superfluity of “measurements” as well as the simplicity of a big set of superpositions? (And, you can break down a wave into many different choices of components anyway, compare vectors.) I’m sure someone has offered answers, what and how good are they?

Neil, I think all this tells you is that you cannot necessarily exactly infer the quantum state that is “prepared” in a past big bang event. It makes sense, if you only have the projected/filtered data, you cannot infer the larger data set from which it originates.

I’m not sure how important it is that you need to know that wavefunction unless you’re interested in replicating the universe’s conception down to the quantum precision of its “setup,” if that’s even a sensible concept.

Dan, being able to know that wave function per se is not my essential point. (And as you know, we can’t easily “know a WF” anyway, given the practical measurement problems, aside from “what’s really going on.”) What I mean is: the situations that we consider to be generative of “new” WFs are themselves the result (or can be) of previous quantum experiments. So, consider “the” current WF, about which we conjecture: “will the different components of the superposition continue to evolve separately, despite a measurement ‘seeming’ to select out one of them?” (Never mind that the argument for why lack of coherence would actually keep them causally separated wrongly conflates having an evidentiary pattern of amplitudes, versus not, with “interaction” in the general sense.) Well, that WF should itself have already been generated as one of the possible *wave functions* that could be produced under different circumstances. So then, again: do we have a sort of sub-grouping of superposition components? It would be a sort of infinite “nesting” of superpositions of superpositions, wouldn’t it? How does that work? Indeed, is this intelligible and integrable into quantum theory?

(PS, hint: I’m not looking to innocently “find out” things for the sake of knowledge; I am trying to catch the MWI in problems and to imply it’s not such an elegant idea, after all.)

Neil, ah, I see, you’re tackling how MWI would handle constructing the initial universal wavefunction, which must exist whether measurable by inference or not.

In terms of the structure implied by such an analysis: since any future state would have to be a time evolution of some initial state accounted for in the initial state representation, you would represent this future state as an infinite product of projection operators. The problem is picking the basis of the projection operators, so I think this is the preferred basis problem manifesting in yet another way. The infinity of the nesting is not the problem, since a convergent product of mappings can be defined for a set of projection operators, but the convergence depends on this basis. One could backwards-solve which kinds of bases would be acceptable; I wonder if such a study has been carried out in the context of the preferred basis problem. My intuition says that the only acceptable basis for such operators should belong to the subset of the Hilbert space that is L^p integrable.

We will start with the Schrodinger equation (for a free particle in one dimension):

$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2}$

The wavefunction that satisfies the above equation is the plane wave:

$\psi(x,t) = A e^{i(kx - \omega t)}$, with $\omega = \frac{\hbar k^2}{2m}$

Analyze the LHS of the equation, which is a first derivative with respect to time:

$\frac{\partial \psi}{\partial t} = -i\omega\, \psi$

Now remove the imaginary number from the Schrodinger equation and the wavefunction to provide a real diffusion equation:

$\hbar \frac{\partial \phi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \phi}{\partial x^2}$, with $\phi(x,t) = A e^{kx - \omega t}$

Again take the derivative on the LHS of the diffusion equation:

$\frac{\partial \phi}{\partial t} = -\omega\, \phi$

There would be no debate about the interpretation if the Schrodinger equation and the wavefunction did not have imaginary numbers. We would also not be able to deny that time is real and an observable, just like energy, momentum and spin. The core nature of the universe is random. When we measure an observable, the equation must give us a real number, and those numbers are described by probability theory. The results of the equation are identical with or without the imaginary number. Those who favor the Copenhagen interpretation cannot argue that it becomes real only when measured and the imaginary numbers vanish by the Born rule. It is sad that everyone accepts statistical mechanics but no one will accept calling quantum mechanics a diffusion equation with an associated diffusion function.
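The manipulation described above can be checked symbolically; a sketch assuming the free-particle Schrodinger equation, a plane-wave solution, and the dispersion relation omega = hbar*k^2/(2m) (these standard textbook forms are my fill-ins, since the comment's equations did not reproduce here):

```python
import sympy as sp

x, t, k, w, hbar, m = sp.symbols('x t k omega hbar m', positive=True)
psi = sp.exp(sp.I * (k * x - w * t))   # complex plane-wave solution

# Free-particle Schrodinger equation: i*hbar*psi_t = -(hbar^2/2m)*psi_xx
lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -(hbar**2 / (2 * m)) * sp.diff(psi, x, 2)

# The two sides agree exactly when omega = hbar*k^2/(2m):
residual = sp.simplify((lhs - rhs).subs(w, hbar * k**2 / (2 * m)))
print(residual)   # -> 0

# Dropping the i: the real exponential exp(k*x - omega*t) decays in
# time rather than oscillating, which is diffusion-like behavior.
phi = sp.exp(k * x - w * t)
print(sp.simplify(sp.diff(phi, t) + w * phi))   # -> 0, i.e. phi_t = -omega*phi
```

The check shows what the commenter is gesturing at: with the i, the first time derivative produces oscillation; without it, the same algebra produces exponential decay, the hallmark of a diffusion equation.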

@Boltzmann: What you are suggesting, a Schrodinger equation with real functions (without i), has been tried numerous times in the last 100 years or so. It does not work! Quantum mechanics is a subtle interplay of complex functions to explain wave-particle duality. Phases of the complex functions play a very crucial role in the interference phenomenon, which is the central issue in quantum mechanics. It is difficult to explain all this in a comment. But you may want to go through a book on quantum mechanics and try whether it works with real functions. I can assure you it will not.
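kashyap's point about phases can be seen in a two-path toy calculation (my own sketch, not anyone's actual model): with complex amplitudes the total probability picks up a cross term that oscillates with the relative phase, which is exactly the interference pattern, while simply adding real probabilities gives a flat result.

```python
import numpy as np

delta = np.linspace(0, 2 * np.pi, 5)   # relative phase between the paths
a1 = np.exp(1j * 0.0)                  # amplitude via path 1
a2 = np.exp(1j * delta)                # amplitude via path 2, phase-shifted

# Quantum: add amplitudes, then square. The cross term 2*cos(delta)
# makes the probability swing between 0 and 4.
p_quantum = np.abs(a1 + a2)**2

# Classical: add probabilities. No phase dependence at all.
p_classical = np.abs(a1)**2 + np.abs(a2)**2

print(np.round(p_quantum, 3))    # -> [4. 2. 0. 2. 4.]
print(p_classical)               # flat: 2 for every phase
```

The oscillating row is the fringe pattern; discard the phases (keep only real magnitudes) and it disappears, which is the sense in which the complex structure is doing real work.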

Physical theories should not have an assumption as their base foundation.

In MWI it is assumed the particle does not travel through a single slit when it is not detected.

There is physical evidence that the particle always travels through a single slit: it is always detected traveling through a single slit.

There is physical evidence dark matter waves. It ripples when galaxy clusters collide.

de Broglie wave mechanics and double solution theory is supported by the physical evidence the particle always travels through a single slit and the physical evidence the dark matter waves.

In de Broglie wave mechanics and double solution theory the particle travels a well defined path through a single slit and the associated wave in the dark matter passes through both.

@kashyap vasavada: Is that worth a Nobel prize too? If only there were some way to do interference with real numbers. Oh well, maybe we will figure it out after another six years. I shouldn't have called myself Boltzmann either, because appealing to authority is a fallacious trick in an argument; my apologies. Thank you for reminding me to read books. I like books, and I will keep my opinions and derivations to myself.

@Bored: Sorry, I did not mean to belittle your effort in any sense. In science, everyone should have the right to put forward his or her ideas. The only requirement is that one should be aware of what has been tried before. All I wanted to point out was that such things have been tried before and have not worked, and that complex algebra plays an important role in quantum mechanics. As a matter of fact, some very prominent physicists such as Weinberg and 't Hooft may be looking for alternatives to quantum mechanics.

@kashyap vasavada: What a bummer that it has been tried before. In your opinion, how do we order the complex numbers to provide the real numbers needed to make sense of observables? Do we take the inner product of two orthogonal wavefunctions and then take the square root? Isn't that just squaring and then taking the square root of imaginary numbers? I can write it out in LaTeX if you would like. I deserve to be belittled because I am trying to pick a fight. I respect Weinberg and 't Hooft, but they shouldn't be the only ones who get to have all the fun. What is your approach to doing quantum mechanics with real numbers?

@kashyap vasavada: I was hoping that you would notice that a true wave equation is second order with respect to time. This allows us to take the derivative of an exponential complex wavefunction twice to cancel out the i's, and in that case we do not have to consider the ontological nature of imaginary numbers. Combinatorics and probability theory are also able to create functions which make valid predictions about real observables using real numbers; they also tell us why the observables have the probability values and expectation values that they do. When we use a diffusion equation, which is first order with respect to time, the probability amplitudes may be complex because we insist on using a complex exponential diffusion function. But then we do a lot of unnecessary work to map the observables and their probability amplitudes to the real numbers. We also do not need imaginary numbers to explain wave-particle duality, because that is not the fundamental problem. The measurement problem is why we measure a particle once and find it in a particular state, and then, if a second measurement is performed, it is either in the same state or has the possibility of evolving into a different state. A solution to the measurement problem is an entropy argument, which we are able to derive once we accept the postulates of the ensemble interpretation and that time is a real observable.

I have read about a science respectability checklist. However, my philosophical crisis is that publishing appears to be more of a sadomasochistic initiation ritual than a valid way of doing science. Yet to get paid to do science, mathematics, and philosophy, I must publish. My salary or status within the community will then be correlated with the number of references my publications receive. If this post gets at least 3 likes and/or 3 dislikes, that should not change your analysis of my arguments. I ask annoying questions (the Socratic method) to determine what people in a forum understand and what they don't. I troll because once people think I am a boorish rube, they are more likely to expose their own logical fallacies. Thank you for engaging in a dialogue.

I thought there are many labs now researching dynamical collapse, and that they can slow it down and even reverse it a little. What do you mean it hasn’t been witnessed?

Anyway, MWI does not follow trivially from basic quantum mechanics. It includes the extra assumption that the state consists entirely of a tree structure of paths through state space, which evolve indefinitely with effectively classical behavior and negligible interference with the rest of the state — using only deterministic linear dynamics.

@Bored, Dat is Jammer and Geen Probleem:

I responded to the original question because I did find it interesting, although it has been agreed upon for several decades that complex algebra is the correct way to do quantum mechanics and its extension, quantum field theory. Although wave equations are second order, one starts with the Schrodinger eq. (first order in time) and the Dirac eq. (first order in both time and space). I accept this conclusion, but if you want to proceed with real functions, then surely it is your prerogative. As for me, I am more interested in the debates about interpretations. Surely there are problems with quantum mechanics, such as quantum gravity, but they have nothing to do with the fact that we are using complex numbers. If you are interested, the importance of complex algebra is explained in a semi-popular book by the famous mathematical physicist Penrose, "Shadows of the Mind" (Ch. 5). According to my understanding, in optics and electricity and magnetism (also electrical engineering) complex numbers are just nice tricks to make the algebra easier. But in quantum mechanics they play an essential role.

Geen Probleem, this harping on complex numbers is just silly. You could represent the whole algebra of unitary quantum mechanics in a real-valued spinor representation which is isomorphic to the Hilbert space over the field of complex numbers. The complex numbers do not belong to the theory; they merely represent the operations in it. Observables are not related to them. It is most convenient to have a complex representation, but you don't have to. Good luck talking to everybody else using a needlessly complicated representation just to avoid "i."

When I try to conceptualize MWI, I find it easier to think of an infinity of superpositions rather than an infinity of universes. The “doubling event” portrayed in most cartoon representations of MWI seems to be a big hang up for a lot of folks, both expert and amateur (I’m definitely in the latter category). When does it happen exactly? Is energy/mass really doubling? Etc. Is there anything specifically wrong with thinking about this approach as many superpositions rather than many universes?

Of the “four versions of QM” represented in the earlier post, it seems to me that MWI and QBism have the most in common. The “collapse” and “pilot wave” models both predict something not already in the equation, which is testable at least in principle. Can the same be said for MWI versus QBism? They both purport to take QM at face value, so I guess the answer might be no. Funny that Sean appears to dislike QBism more than the other two – is that because it’s the best contender to MWI? If it’s just personal preference, it’s hard for me to say whether I prefer that “everything that can exist does exist” (MWI) or “everything is just relational” (QBism). But those two things seem like they might be the same thing. Infinity of universes versus infinity of relationships. What’s the difference?

@Collin237: Are you saying that if we can slow down a collapse and reverse it, then we would be altering the evolution of wavefunctions in other universes? All joking aside, many have brought up an energy argument against the universe diverging at each quantum event. Would creating a new universe require creating additional energy? I would think it would take a lot of energy to copy all the information inside each universe. Would that mean rethinking the laws of thermodynamics if we accept the many-worlds interpretation? I understand the appeal of the many-worlds interpretation because it is a way to keep the wavefunction continuously evolving without ever collapsing. However, are we able to falsify this interpretation?

@kashyap vasavada: I know using complex algebra in quantum physics has been agreed upon by everyone except for a few holdouts. Is there more than one way to solve a physics problem or only one exact way? The wisdom of the crowd can be a great shortcut to quickly find an answer or an interesting physics forum. Determining if that answer is valid can be difficult. If you are able to do quantum mechanics with real numbers then does that change the way you think about nature? I know that my thought process changes when I solve a problem in a different way, but that is my anecdotal experience.

Is the essential role of complex numbers to provide the time evolution of the wavefunction between measurements and the interference of wavefunctions? Calling a diffusion equation a wave equation must be incredibly confusing for physics undergrads too. The Copenhagen interpretation tells us that the wavefunction is not real until it is measured. Does that mean the wavefunction is not real because it is just a statistical ensemble of possible states or that the wavefunction becomes real once the imaginary numbers are mapped to the real numbers? I know this is semantics but I would appreciate the clarification.

I assume the wavefunction describes the probability of an outcome and that between measurements a particle may be switching states randomly. I also assume that in the act of measuring the particle it may or may not diffuse into a different state afterwards. Is the purpose of a waveguide to ensure that the energy/power of a wave does not diffuse into the environment?

Let’s assume you are unfortunate enough to have me as physics student. You assign your quantum physics class a weekly homework problem set. However, you are too busy writing grant proposals to grade the homeworks. You decide to distribute the quantum homeworks back to the class to be graded. You know that I am a crafty physics cheater and you also want to make sure that no one in the class receives their own homework to grade. What is the probability that no student grades their own homework?
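For what it's worth, that homework-shuffling puzzle is the classic derangement problem, and the answer approaches 1/e as the class grows. A quick sketch (the function names are mine, purely for illustration):

```python
from math import factorial

def derangements(n):
    # Exact count of permutations with no fixed point, via the recurrence
    # D(n) = (n - 1) * (D(n - 1) + D(n - 2)), with D(0) = 1, D(1) = 0.
    a, b = 1, 0  # D(0), D(1)
    for k in range(2, n + 1):
        a, b = b, (k - 1) * (a + b)
    return b if n >= 1 else a

def prob_no_self_grade(n):
    # Probability that a uniformly random redistribution leaves
    # no student grading their own homework
    return derangements(n) / factorial(n)

for n in (3, 5, 10, 20):
    print(n, prob_no_self_grade(n))   # approaches 1/e ~ 0.367879
```

Already at n = 10 the probability agrees with 1/e to about seven decimal places, so the class size barely matters.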

Quantum gravity is a tough problem. Knowing the temperature or change in velocity of a single particle must be difficult to do experimentally especially with an uncertain velocity observable. The acceleration vectors of identical particles may interfere in weird ways or produce a unique acceleration vector field . What was the difference between special and general relativity?

@Daniel Kerr: I agree that the complex numbers do not belong to the theory; however, they can be useful. I believe I saw a post earlier from you on Lie algebras, and "real valued spinor representation which is isomorphic to the Hilbert space over the field of complex numbers" would make a great band name. The concept of an infinite-dimensional Hilbert space represented by a finite set of basis vectors derived from the boundary conditions in our ideal experimental setup is a difficult concept for most to grasp.

Jargon, I wasn't arguing against the use of the complex numbers, but rather that they are the natural language to express quantum mechanics in. But they are just that, a language. They represent the physics; the numbers themselves don't have any physical bearing. And yes, lol, that would be a great band name, completely agreed. It is interesting that it's so hard to grasp this kind of mathematical structure. The consequences of non-commutativity seem to be confusing in general.

Well, AFAICT no one has yet mentioned one of the weirdest possible consequences of MWI: "quantum immortality" (or quantum [frustrated] suicide; see Quantum Suicide and Immortality). Instead of actually ridding us of the "nuisance" of Schrödinger's cat, MWI arguably just makes sure he never dies. Continued superpositions justify a weird sort of "immortality" for those components of a conscious being destined to survive even the rarest of escapes from near-certain death. So, sit in front of a quantum machine gun with a 99.99% chance of firing a real bullet into your brain each second. Well, "you" are going to be one of "the minds" that survive, so you keep on hearing "click, click, click …" (Only versions who aren't shot can pass judgment; compare to anthropic reasoning, etc.) At least until some inevitable (and miserable) inner decay sets in. Maybe. So a concept meant to save the world from needing minds to make things happen ends up perhaps making sure nothing can stop those minds from continuing to happen…

@SpinMeDown

As I understand it, extensive quantities such as energy density are formulated in quantum mechanics not as a ratio E/V (which would be impossible, since theoretically the volume of a particle is infinite) but as a bilinear (psi*) E (psi). In a unitary dynamic it loses no generality to take the norm, int (psi*)(psi) d^3x, to be constantly 1. However, a measurement produces two or more outcomes whose norms, by that definition, add to 1.
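A toy numerical version of that bilinear may help. This is a sketch with a made-up two-level "energy" operator; the numbers are hypothetical, chosen only to show that (psi*) E (psi) comes out real when E is Hermitian and the norm is 1:

```python
import numpy as np

# Hypothetical Hermitian "energy" operator on a two-level toy system
E = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# Normalized state: psi^dagger psi = 1
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

norm = np.vdot(psi, psi).real          # np.vdot conjugates its first argument
expectation = np.vdot(psi, E @ psi)    # the bilinear (psi*) E (psi)

print(norm)                            # 1, up to rounding
print(expectation.real)                # ~1.5, real because E is Hermitian
print(abs(expectation.imag) < 1e-12)
```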

In the Copenhagen Interpretation, psi is your knowledge, and you update your knowledge when you see which outcome happens, so the norm would be scaled up to 1 again.

In a Collapse Dynamics Interpretation, whichever outcome happens would take up the norm from the others, and the norm would build up to 1.

In the Many Worlds Interpretation, no such adjustment would occur. The energy before the measurement would be divvied up into the various outcomes, in proportion to probability. So the energy of what you consider to be the universe is decreasing, draining away into the rest of the multiverse.

I think this would have to mean that your universe is becoming more coarse-grained and losing information, so there wouldn’t necessarily be any information copied. However, there would be another problem with thermodynamics. The fuzziness of molecular motion, allowed by other interpretations (except Bohm), could not occur during observation, because a molecule’s paths would be separated from each other. So every material substance would become more like an ideal gas the more closely it’s observed!

That brings up another point — the Uncertainty Principle. In MWI it would hold only in aggregate throughout the multiverse. Each individual universe would have much less uncertainty, because the decisions made by observers would select among the possible position and momentum values.

I suppose there’s a way to compensate for these anomalies, to set an ad-hoc to catch an ad-hoc 😉 But to be fair, just about everything we know about physics comes rather close to a falsification.

@Serious-Gerlach: It seems that you and some other readers may believe that the problem of interpretation will disappear when you use real functions. I seriously doubt that is the case. The basic problem of quantum mechanics is wave-particle duality, which is an experimental fact not present in classical physics. No matter what mathematical language you use, you are not going to get around that fact. As a matter of fact, some pragmatic physicists think that no interpretation is necessary!! Such a fantastic agreement of theory with experiment is unparalleled in any branch of science, and they say that is all one should demand (not that I agree with this viewpoint). BTW, people are not calling the diffusion equation a wave equation. What makes the Schrodinger equation different from a diffusion equation is precisely that "i"! You might speculate that it is a kind of diffusion in some imaginary Hilbert space!! I should also mention that Schrodinger himself tried an equation with second-order derivatives in time and space. That did not give the correct spectrum for the hydrogen atom. That is why he switched to first order in time, which gave the correct spectrum of the hydrogen atom and was consistent with wave-particle duality in the non-relativistic case. The equation with second-order derivatives in space and time came back as the Klein-Gordon equation in quantum field theory. The relativistic version of the Schrodinger equation, with both time and space derivatives of first order, is the Dirac equation, still complex. But if you feel that the problem is the "i" in the Schrodinger equation, then surely try the alternatives. As they say, the proof of the pudding is in the eating!! I would caution that the present formalism has been so successful in predicting the results of experiments in physics and chemistry that you are facing a very tall order.

I would also like to add to kashyap's point that the Schrodinger equation is NOT the wave equation in the sense of partial differential equations. The wave equation has no dispersion: the phase velocity is equal to the group velocity. This is not true for quantum mechanics; for wave functions to encode the state's momentum and also retain Galilei invariance, the waves must be dispersive. Without the i, the probability of a stationary state would steadily decrease in time and the continuity equation for probability would not hold. So you're stuck with this dispersive, "i"-containing diffusion equation if you want the Born rule to hold.
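That dispersion claim is easy to check numerically. A sketch in units where hbar = m = 1 (the k-grid is an arbitrary choice of mine):

```python
import numpy as np

k = np.linspace(0.5, 5.0, 201)     # wavenumber grid (arbitrary range)

# Free-particle Schrodinger dispersion: omega = k^2 / 2 (hbar = m = 1)
omega_s = k ** 2 / 2
v_phase_s = omega_s / k                  # = k/2
v_group_s = np.gradient(omega_s, k)      # numerical d(omega)/dk, = k in the interior

# True wave equation (e.g. light in vacuum, c = 1): omega = k
omega_w = k
v_phase_w = omega_w / k                  # constant
v_group_w = np.gradient(omega_w, k)      # constant, equal to v_phase_w

# Schrodinger waves are dispersive: group velocity is twice the phase velocity
print(np.allclose(v_group_s[1:-1], 2 * v_phase_s[1:-1]))   # True
print(np.allclose(v_group_w, v_phase_w))                   # True
```

The endpoints are excluded in the first check only because `np.gradient` uses one-sided differences there.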

@Daniel Kerr: Good point; thanks for bringing it out. As for the debate about real vs. complex algebra, I would still maintain that it is not just mathematical convenience (as in the case of electricity and magnetism) but something deeply necessary to bring out wave-particle duality. But otherwise we agree pretty much.

Kashyap, the algebra representing quantum mechanics makes no reference to numbers or number fields at all. They are just representations of Lie algebras; complex numbers are simply the most natural way to encode this algebra, but the numeric field used doesn't have any intrinsic connection to it. As I said, you could replace C with the set of 2×2 matrices over R, so that each vector component is such a matrix. There are plenty of ways to keep it real while still staying true to the algebra.

@Daniel Kerr: I see your point now. I think we may be interpreting these readers' question about using a real wave function in different ways. The way I interpreted it was that they wanted to have the Schrodinger equation without "i" (to make it look like a diffusion eq.) and still keep psi as a one-component real object, with no complex algebra. That will not work for sure. What you are suggesting is to replace psi by a two-component spinor, with the second component replacing the usual imaginary part. "i" also must be replaced by a real 2×2 matrix, and multiplication should be appropriately defined to make i^2 = -1. Everything will be two-dimensional. It will be clumsy, but it will work!! Is this a correct and simple way of presenting your argument in a physicist's way? I agree that just using representation theory, without using vectors, you can obtain the same result. But that may be a mathematician's way of looking at quantum mechanics! I think you will agree that complex algebra is still the easier way of doing quantum mechanics.

Kashyap, I agree that using the diffusion equation (without i) will not work at all, for the reasons you and I brought up before. And yes, exactly, a 2-spinor to represent the complex algebra is exactly what I meant. It's sloppy, but it eliminates i from the theory; Schrodinger's equation then becomes a system of coupled PDEs. I completely agree that the complex algebra is the better way. For those who dote on this and think "i" is somehow an abomination in physical theory, there are other options which highlight how it's an element of the language of quantum mechanics and not of the theory. Observables being real and our operators being Hermitian obviously guarantee that, but for those still unconvinced, this is an explicit way of showing it.
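To make that explicit in a few lines, here is a minimal sketch of the real representation, encoding a + bi as the 2×2 real matrix [[a, -b], [b, a]] (my own toy demo, not anyone's official construction):

```python
import numpy as np

def as_matrix(z):
    # Encode the complex number a + bi as the real matrix [[a, -b], [b, a]]
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

J = as_matrix(1j)      # the stand-in for "i": [[0, -1], [1, 0]]

# J squared is minus the identity, mirroring i^2 = -1, with no "i" in sight
print(J @ J)           # [[-1, 0], [0, -1]]

# Complex multiplication becomes ordinary real matrix multiplication
z, w = 2 + 3j, -1 + 4j
print(np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w)))   # True
```

The same substitution applied componentwise to psi gives exactly the coupled real PDE system described above.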

@Daniel Kerr: I’m certain the forum is not ready for a discussion about a complete set of commuting observables.

@Charlie: Is it really an infinite superposition, or is the wavefunction just a grand canonical ensemble of a finite number of possible states? I understand why Sean likes the MWI. I agree with Sean that the Copenhagen interpretation is ugly, and I respect Everett for challenging Bohr. I just wish Everett had challenged the Danish physicist in Copenhagen right in front of Kierkegaard's deteriorating statue. A philosophy accepting dual states at the same time is difficult to reconcile with formal logic. I can understand that if we accept the postulate that angular momentum is conserved, then entanglement makes sense as long as no useful information is transferred faster than the speed of light. If information were transferred faster than the speed of light, that would violate the postulates of relativity. If something is true and false at the same time, then how do we falsify a theory? With a complete set of commuting observables there may be only a few relationships, but it may still lead to a complicated system if there are a large number of objects. I think a mechanical clockwork universe is boring, and a universe that likes to gamble is more exciting.

@Collin237: I find it odd that quantum mechanics was created to study a thermodynamics problem. The ultraviolet catastrophe was solved when Planck quantized energy. Classical electromagnetic wave theory predicted that the E/V of a blackbody would go to infinity. By switching from integrals to sums we get a finite solution for the energy density. I'm not sure whether the volume of a particle is infinite; all I know is that photons have more energy and momentum when their quantized wavelength is smaller.
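That integrals-to-sums point can be illustrated numerically. In dimensionless units x = h*nu/(k*T), Planck's spectral density x^3/(e^x - 1) integrates to the finite value pi^4/15, while the classical x^2 grows without bound as the cutoff rises (the grid and cutoff below are arbitrary choices of mine):

```python
import numpy as np

x = np.linspace(1e-3, 50.0, 200_000)   # dimensionless frequency, arbitrary cutoff
dx = x[1] - x[0]

planck = x ** 3 / (np.exp(x) - 1)      # quantized modes: integrable
classical = x ** 2                     # Rayleigh-Jeans equipartition: not integrable

total_planck = np.sum(planck) * dx         # ~ pi^4 / 15 ~ 6.494: finite
total_classical = np.sum(classical) * dx   # ~ cutoff^3 / 3, grows with the cutoff

print(total_planck)
print(total_classical)     # the ultraviolet catastrophe in one number
```

Raise the cutoff and the Planck total barely moves, while the classical total keeps climbing without limit.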

We normalize the wavefunction so that the probabilities sum to one. We do this because something must happen: the particle must be measured in some definite state; we just don't know which state it will be in when we measure it. Saying that we have incomplete knowledge of the system, or that we lose information, is almost the right answer, except that something strange happens when there is an interaction with something else. When the wavefunction collapses to a definite state by measurement, the particle will stay in that state as long as we keep adding energy or prevent energy from escaping. Will the wavefunction start to spread out again when I stop measuring it? If I take an identical particle represented by the same wavefunction and collapse it to a different state, will it behave in the same way around a different value?

There is something funky going on with measurement, and the Copenhagen interpretation acknowledges that the wavefunction does collapse after a measurement. What bothers me is that we are told the system is in every energy state at once, which again makes me question whether the laws of thermodynamics are being taken seriously.

If we are measuring observables that commute, then there is no uncertainty in our measurements other than the experimental uncertainty. From what I understand, that means the other possibilities in our set of observables occur in different universes according to the MWI. If you are trying to catch me in a logical trap, then I give up; you win. If you're into pilot wave theory, try not to get too crazy with the Fourier analysis. I suppose you can use the group velocity to turn waves into particles, but that seems like a lengthy derivation. It is probably easier to picture the wave as a Gaussian probability distribution for photons.

@kashyap vasavada: Call me crazy but I think you’re picking up on what I’m putting down.

Perhaps I'm wrong, but doesn't the MWI essentially posit that the universe fissions into an essentially infinite number of slightly altered copies of itself every Planck unit of time? All this to explain measurements? Seems a bit much (irony intended) when there are many other alternatives. Borges would be proud.

@Daniel Kerr : From your comments, you sound like a good mathematician. I know a former colleague who is not only a good physicist, but also a good mathematician. Horia Petrache has published the following paper on hypercomplex numbers and group theory. You may enjoy reading that.

http://www.mdpi.com/2073-8994/6/3/578.

Cheers.

Kashyap, that’s an interesting paper, the block matrix representation of complex numbers in the Klein Group section was the construction I was referring to for quantum mechanics in a previous comment. The paper describes a coset construction and certainly outlines a good algorithm for determining other such number systems. It would be interesting to start with symmetry arguments for how an overall group should look and derive that these number systems are indeed those symmetry groups.

Complex numbers being the field for the unitary representations of the Galilei/Lorentz group is usually argued from continuous symmetry requirements in time and space. In terms of the numbers, it would be interesting to see how this continuity requirement explicitly necessitates the group structure the complex numbers satisfy. I suppose this could be done by requiring every matrix representation of the time evolution operator to have every n-th root of it well defined. I suppose that algebra necessitates the group structure of the n+2 coset construction outlined in your colleague's paper.

@Daniel Kerr: Thanks. It is a nice pedagogical paper. He was saying that mathematicians probably know it, but for physicists, even some theoretical physicists, it may be new. I will forward your comments to him. BTW, if you have some specific ideas, please feel free to send them to him at the e-mail address given in the paper. He will appreciate it. His actual work is in experimental biophysics, but he is surprisingly good at math!

Kashyap, that’s funny as I do experimental biophysics as well. Theory is just what I think about for fun, it seems much harder to be paid to do it though!

Your friend is right that it’s new to physicists mostly, the level of group theory he used is usually covered in the first 4 weeks of an introductory group theory course. I value group theory a lot myself as I see physics as applied group theory. I personally believe that good interpretations of quantum mechanics will be based in a group theoretic language.

Ultimately the problem of the correct interpretation of quantum mechanics comes down to the justification for using a unitary representation of Lie groups. Adjoint/coadjoint representations (Lagrangian/Hamiltonian) are far more natural for groups. Doing this for the Galilei or Poincare groups gets you classical mechanics or special relativity, respectively. Only by imposing that representations be unitary do you get quantum mechanics.
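A minimal numerical sketch of the unitarity point, assuming only that the generator is Hermitian (the 2×2 matrix below is made up, purely for demonstration): any Hermitian H exponentiates to a unitary time-evolution operator U = exp(-iHt), which preserves the norm of every state.

```python
import numpy as np

# Hypothetical Hermitian generator ("Hamiltonian") on a two-level system
H = np.array([[1.0, 0.3 - 0.2j],
              [0.3 + 0.2j, -0.5]])
t = 0.7

# Build U = exp(-iHt) from the eigendecomposition (valid because H is Hermitian)
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

# U is unitary: U^dagger U = I, so probabilities are conserved
print(np.allclose(U.conj().T @ U, np.eye(2)))                 # True
psi = np.array([0.6, 0.8j])                                   # an arbitrary unit vector
print(abs(np.vdot(U @ psi, U @ psi).real - 1.0) < 1e-12)      # norm preserved
```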

This is an excellent article. Two notes:

(1) It is important to distinguish between "multiverse" and "many-worlds"; most people get the two concepts confused.

(2) The statement that "The formalism predicts that there are many worlds, so we choose to accept that. This means that the part of reality we experience is an indescribably thin slice of the entire picture, but so be it. Our job as scientists is to formulate the best possible description of the world as it is, not to force the world to bend to our pre-conceptions." is your opinion, but not every scientist agrees. Some think the job of a scientist is to formulate (understand) the best possible description of the world we live in, our reality.

A not inappropriate quote from Robin Williams: "Reality. What a concept."

@Richard J. Gaylord

“This means that the part of reality we experience is an indescribably thin slice of the entire picture.”

I have no problem being in an indescribably thin slice of the picture (a small part of the many worlds). What I find irrational is asking me to move from one slice to another as I go on collecting data in a quantum experiment! This is completely arbitrary, since it is up to me to stop the experiment any time I wish!

Pingback: Why Probability in Quantum Mechanics is Given by the Wave Function Squared | Sean Carroll

Sean, Nice to meet you!

I have been following you recently and have watched all your available presentations on YouTube. Let me say that you are a very inspiring speaker. I feel comfortable with practically everything I have heard from you, except this one about MW. It seems to me that this full belief in MW on your part is some kind of over-rational twist. As far as I can understand, you are only saying that the possibility of MW existence is right there in the matter itself, with no other assumption. So basically you are saying (following your line of thought in other lectures) that there is no reason NOT to believe that MW COULD exist. That is far from stating that MW exists. Am I correct, Sean?