One of the most profound and mysterious principles in all of physics is the Born Rule, named after Max Born. In quantum mechanics, particles don’t have definite classical properties like “position” or “momentum”; rather, there is a wave function that assigns a (complex) number, called the “amplitude,” to each possible measurement outcome. The Born Rule is then very simple: it says that the probability of obtaining any possible measurement outcome is equal to the square of the absolute value of the corresponding amplitude. (The wave function is just the set of all the amplitudes.)

**Born Rule:** P(a) = |ψ(a)|^{2}
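As a toy illustration (the amplitudes below are invented), computing Born-Rule probabilities from a set of complex amplitudes takes only a couple of lines of Python:

```python
# Born Rule in miniature: probabilities are the squared magnitudes of
# the complex amplitudes. These example amplitudes are made up for
# illustration; note one is negative-imaginary, which is perfectly fine.
amplitudes = {"up": 0.6, "down": -0.8j}

probabilities = {outcome: abs(amp) ** 2 for outcome, amp in amplitudes.items()}

print(probabilities)                 # roughly {'up': 0.36, 'down': 0.64}
print(sum(probabilities.values()))   # close to 1 for a normalized wave function
```

Squaring the absolute value is what turns negative or imaginary amplitudes into sensible (nonnegative, normalized) probabilities.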

The Born Rule is certainly correct, as far as all of our experimental efforts have been able to discern. But why? Born himself kind of stumbled onto his Rule. Here is an excerpt from his 1926 paper:

That’s right. Born’s paper was rejected at first, and when it was later accepted by another journal, he didn’t even get the Born Rule right. At first he said the probability was equal to the amplitude, and only in an added footnote did he correct it to being the amplitude squared. And a good thing, too, since amplitudes can be negative or even imaginary!

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

- Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
- Wave functions evolve in time according to the Schrödinger equation.
- The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
- The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
- After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).
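The textbook postulates can be sketched in code for a single spin measured in the up/down basis. This is a hypothetical, minimal model, not any particular library's API; the eigenvalues ±1 and the state are arbitrary choices.

```python
import random

# Postulates 3-5 in miniature for one qubit: measurement returns an
# eigenvalue, with Born-Rule probability, and collapses the state.
def measure(state):
    """Return (eigenvalue, collapsed_state) per the textbook postulates."""
    alpha, beta = state
    p_up = abs(alpha) ** 2                   # postulate 4: Born Rule
    if random.random() < p_up:
        return +1, (1.0, 0.0)                # postulate 5: collapse to "up"
    return -1, (0.0, 1.0)                    # collapse to "down"

state = (3 / 5, 4j / 5)                      # |alpha|^2 = 0.36, |beta|^2 = 0.64
random.seed(0)
outcomes = [measure(state)[0] for _ in range(10000)]
print(outcomes.count(+1) / len(outcomes))    # should land close to 0.36
```

Note how postulates 3 through 5 live entirely inside `measure`, bolted onto the smooth dynamics. That bolting-on is the ungainliness at issue.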

It’s an ungainly mess, we all agree. You see that the Born Rule is simply postulated right there, as #4. Perhaps we can do better.

Of course we can do better, since “textbook quantum mechanics” is an embarrassment. There are other formulations, and you know that my own favorite is Everettian (“Many-Worlds”) quantum mechanics. (I’m sorry I was too busy to contribute to the active comment thread on that post. On the other hand, a vanishingly small percentage of the 200+ comments actually addressed the point of the article, which was that the potential for many worlds is automatically there in the wave function no matter what formulation you favor. Everett simply takes them seriously, while alternatives need to go to extra efforts to erase them. As Ted Bunn argues, Everett is just “quantum mechanics,” while collapse formulations should be called “disappearing-worlds interpretations.”)

Like the textbook formulation, Everettian quantum mechanics also comes with a list of postulates. Here it is:

- Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
- Wave functions evolve in time according to the Schrödinger equation.

That’s it! Quite a bit simpler — and the two postulates are exactly the same as the first two of the textbook approach. Everett, in other words, is claiming that all the weird stuff about “measurement” and “wave function collapse” in the conventional way of thinking about quantum mechanics isn’t something we need to add on; it comes out automatically from the formalism.
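In code, the Everettian postulates amount to nothing but a state vector and unitary evolution. Here is a sketch (the Hamiltonian H = σ<sub>x</sub> and the time step are arbitrary choices for illustration):

```python
import math

# Everett's two postulates in miniature: a state vector in Hilbert
# space, plus Schrodinger evolution, and nothing else. For H = sigma_x,
# the evolution operator is U(t) = exp(-iHt) = cos(t) I - i sin(t) sigma_x.
def evolve(state, t):
    a, b = state
    c, s = math.cos(t), math.sin(t)
    return (c * a - 1j * s * b, -1j * s * a + c * b)

state = (1.0 + 0j, 0j)                  # start in "up"
for _ in range(5):
    state = evolve(state, 0.3)          # smooth, deterministic evolution

norm = abs(state[0]) ** 2 + abs(state[1]) ** 2
print(norm)                             # unitarity keeps this equal to 1
```

There is no `measure` function anywhere: whatever "measurement" means has to emerge from this dynamics alone.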

The trickiest thing to extract from the formalism is the Born Rule. That’s what Charles (“Chip”) Sebens and I tackled in our recent paper:

Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics

Charles T. Sebens, Sean M. Carroll

A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we give new reasons why that would be inadvisable. Applying lessons from this analysis, we demonstrate (using arguments similar to those in Zurek’s envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In particular, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.

Chip is a graduate student in the philosophy department at Michigan, which is great because this work lies squarely at the boundary of physics and philosophy. (I guess it is possible.) The paper itself leans more toward the philosophical side of things; if you are a physicist who just wants the equations, we have a shorter conference proceeding.

Before explaining what we did, let me first say a bit about why there’s a puzzle at all. Let’s think about the wave function for a spin, a spin-measuring apparatus, and an environment (the rest of the world). It might initially take the form

(α[up] + β[down] ; apparatus says “ready” ; environment_{0}). (1)
This might look a little cryptic if you’re not used to it, but it’s not too hard to grasp the gist. The first slot refers to the spin. It is in a superposition of “up” and “down.” The Greek letters α and β are the amplitudes that specify the wave function for those two possibilities. The second slot refers to the apparatus just sitting there in its ready state, and the third slot likewise refers to the environment. By the Born Rule, when we make a measurement the probability of seeing spin-up is |α|^{2}, while the probability for seeing spin-down is |β|^{2}.

In Everettian quantum mechanics (EQM), wave functions never collapse. The one we’ve written will smoothly evolve into something that looks like this:

α([up] ; apparatus says “up” ; environment_{1})
+ β([down] ; apparatus says “down” ; environment_{2}). (2)

This is an extremely simplified situation, of course, but it is meant to convey the basic appearance of two separate “worlds.” The wave function has split into branches that don’t ever talk to each other, because the two environment states are different and will stay that way. A state like this simply arises from normal Schrödinger evolution from the state we started with.
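The evolution from state (1) to state (2) can be sketched as code. This is a toy model (the particular α and β are arbitrary, and the environment labels are invented), representing basis states as (spin, apparatus, environment) tuples:

```python
import math

# Toy model of the branching from state (1) to state (2): a dict from
# (spin, apparatus, environment) basis labels to amplitudes.
alpha, beta = 1 / math.sqrt(3), math.sqrt(2 / 3)

state1 = {("up", "ready", "env0"): alpha,
          ("down", "ready", "env0"): beta}

def premeasure(state):
    # the apparatus unitarily records the spin in each component
    return {(spin, spin, env): amp for (spin, app, env), amp in state.items()}

def decohere(state):
    # the environment becomes correlated with the apparatus, after which
    # the two components have orthogonal environments and cannot interfere
    return {(spin, app, "env1" if app == "up" else "env2"): amp
            for (spin, app, env), amp in state.items()}

state2 = decohere(premeasure(state1))
print(state2)   # two branches, orthogonal environments, amplitudes intact
```

Nothing stochastic happens anywhere in this evolution: the amplitudes α and β simply ride along into the two branches.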

So here is the problem. After the splitting from (1) to (2), the wave function coefficients α and β just kind of go along for the ride. If you find yourself in the branch where the spin is up, your coefficient is α, but so what? How do you know what kind of coefficient is sitting outside the branch you are living on? All you know is that there was one branch and now there are two. If anything, shouldn’t we declare them to be equally likely (so-called “branch-counting”)? For that matter, in what sense are there probabilities *at all*? There was nothing stochastic or random about any of this process; the entire evolution was perfectly deterministic. It’s not right to say “Before the measurement, I didn’t know which branch I was going to end up on.” You know precisely that one copy of your future self will appear on *each* branch. Why in the world should we be talking about probabilities?

Note that the pressing question is not so much “Why is the probability given by the wave function squared, rather than the absolute value of the wave function, or the wave function to the fourth, or whatever?” as it is “Why is there a particular probability rule at all, since the theory is deterministic?” Indeed, once you accept that there should be some specific probability rule, it’s practically guaranteed to be the Born Rule. There is a result called Gleason’s Theorem, which says roughly that the Born Rule is the only consistent probability rule you can conceivably have that depends on the wave function alone. So the real question is not “Why squared?”, it’s “Whence probability?”

Of course, there are promising answers. Perhaps the most well-known is the approach developed by Deutsch and Wallace based on decision theory. There, the approach to probability is essentially operational: given the setup of Everettian quantum mechanics, how should a rational person behave, in terms of making bets and predicting experimental outcomes, etc.? They show that there is one unique answer, which is given by the Born Rule. In other words, the question “Whence probability?” is sidestepped by arguing that reasonable people in an Everettian universe will act *as if* there are probabilities that obey the Born Rule. Which may be good enough.

But it might not convince everyone, so there are alternatives. One of my favorites is Wojciech Zurek’s approach based on “envariance.” Rather than using words like “decision theory” and “rationality” that make physicists nervous, Zurek claims that the underlying symmetries of quantum mechanics pick out the Born Rule uniquely. It’s very pretty, and I encourage anyone who knows a little QM to have a look at Zurek’s paper. But it is subject to the criticism that it doesn’t really teach us anything that we didn’t already know from Gleason’s theorem. That is, Zurek gives us more reason to think that the Born Rule is uniquely preferred by quantum mechanics, but it doesn’t really help with the deeper question of why we should think of EQM as a theory of probabilities at all.

Here is where Chip and I try to contribute something. We use the idea of “self-locating uncertainty,” which has been much discussed in the philosophical literature, and has been applied to quantum mechanics by Lev Vaidman. Self-locating uncertainty occurs when you know that there are multiple observers in the universe who find themselves in exactly the same conditions that you are in right now — but you don’t know which one of these observers you are. That can happen in “big universe” cosmology, where it leads to the measure problem. But it automatically happens in EQM, whether you like it or not.

Think of observing the spin of a particle, as in our example above. The steps are:

- Everything is in its starting state, before the measurement.
- The apparatus interacts with the system to be observed and becomes entangled. (“Pre-measurement.”)
- The apparatus becomes entangled with the environment, branching the wave function. (“Decoherence.”)
- The observer reads off the result of the measurement from the apparatus.

The point is that in between steps 3. and 4., the wave function of the universe has branched into two, but *the observer doesn’t yet know which branch they are on*. There are two copies of the observer that are in identical states, even though they’re part of different “worlds.” That’s the moment of self-locating uncertainty. Here it is in equations, although I don’t think it’s much help.

You might say “What if I am the apparatus myself?” That is, what if I observe the outcome directly, without any intermediating macroscopic equipment? Nice try, but no dice. That’s because decoherence happens incredibly quickly. Even if you take the extreme case where you look at the spin directly with your eyeball, the time it takes the state of your eye to decohere is about 10^{-21} seconds, whereas the timescales associated with the signal reaching your brain are measured in tens of milliseconds. Self-locating uncertainty is inevitable in Everettian quantum mechanics. In that sense, *probability* is inevitable, even though the theory is deterministic — in the phase of uncertainty, we need to assign probabilities to finding ourselves on different branches.
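Just to make the disparity vivid, here is that timescale comparison as arithmetic, using the figures quoted above:

```python
# Eyeball decoherence (~1e-21 s) versus neural signaling (tens of ms).
t_decoherence = 1e-21   # seconds, figure quoted in the text
t_perception = 20e-3    # seconds, i.e. 20 milliseconds

ratio = t_perception / t_decoherence
print(f"decoherence beats perception by a factor of ~{ratio:.0e}")
```

By the time any signal reaches your brain, the branching is some nineteen orders of magnitude in the past.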

So what do we do about it? As I mentioned, there’s been a lot of work on how to deal with self-locating uncertainty, i.e. how to apportion credences (degrees of belief) to different possible locations for yourself in a big universe. One influential paper is by Adam Elga, and comes with the charming title of “Defeating Dr. Evil With Self-Locating Belief.” (Philosophers have more fun with their titles than physicists do.) Elga argues for a principle of *Indifference*: if there are truly multiple copies of you in the world, you should assume equal likelihood for being any one of them. Crucially, Elga doesn’t simply assert *Indifference*; he actually derives it, under a simple set of assumptions that would seem to be the kind of minimal principles of reasoning any rational person should be ready to use.

But there is a problem! Naïvely, applying *Indifference* to quantum mechanics just leads to branch-counting — if you assign equal probability to every possible appearance of equivalent observers, and there are two branches, each branch should get equal probability. But that’s a disaster; it says we should simply ignore the amplitudes entirely, rather than using the Born Rule. This bit of tension has caused some concern among philosophers who worry about such things.

Resolving this tension is perhaps the most useful thing Chip and I do in our paper. Rather than naïvely applying *Indifference* to quantum mechanics, we go back to the “simple assumptions” and try to derive it from scratch. We were able to pinpoint one hidden assumption that seems quite innocent, but actually does all the heavy lifting when it comes to quantum mechanics. We call it the “Epistemic Separability Principle,” or *ESP* for short. Here is the informal version (see the paper for the careful, pedantic formulations):

ESP: The credence one should assign to being any one of several observers having identical experiences is independent of features of the environment that aren’t affecting the observers.

That is, the probabilities you assign to things happening in your lab, whatever they may be, should be exactly the same if we tweak the universe just a bit by moving around some rocks on a planet orbiting a star in the Andromeda galaxy. *ESP* simply asserts that our knowledge is separable: how we talk about what happens here is independent of what is happening far away. (Our system here can still be *entangled* with some system far away; under unitary evolution, changing that far-away system doesn’t change the entanglement.)
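The parenthetical claim can be checked directly in a toy model (the entangled state and the choice of Hadamard as the far-away unitary are invented for illustration): act on the environment factor alone, and the local reduced density matrix is untouched.

```python
import math

# Entangled pair sqrt(1/3)|00> + sqrt(2/3)|11>, basis order 00,01,10,11.
psi = [math.sqrt(1 / 3), 0.0, 0.0, math.sqrt(2 / 3)]

def rho_local(psi):
    """Reduced density matrix of the first qubit (trace out the second)."""
    return [[psi[2 * i] * psi[2 * j].conjugate()
             + psi[2 * i + 1] * psi[2 * j + 1].conjugate()
             for j in range(2)] for i in range(2)]

def act_on_environment(U, psi):
    """Apply a 2x2 unitary U to the second (far-away) qubit only."""
    out = [0j] * 4
    for i in range(2):
        a, b = psi[2 * i], psi[2 * i + 1]
        out[2 * i] = U[0][0] * a + U[0][1] * b
        out[2 * i + 1] = U[1][0] * a + U[1][1] * b
    return out

h = 1 / math.sqrt(2)
hadamard = [[h, h], [h, -h]]

before = rho_local(psi)
after = rho_local(act_on_environment(hadamard, psi))
# "before" and "after" agree entry by entry: tweaking the far-away
# system changes nothing about local probabilities
```

This is exactly the sense in which credences assigned here can be separable from goings-on in Andromeda, entanglement notwithstanding.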

The *ESP* is quite a mild assumption, and to me it seems like a necessary part of being able to think of the universe as consisting of separate pieces. If you can’t assign credences locally without knowing about the state of the whole universe, there’s no real sense in which the rest of the world is really separate from you. It is certainly implicitly used by Elga (he assumes that credences are unchanged by some hidden person tossing a coin).

With this assumption in hand, we are able to demonstrate that *Indifference* does not apply to branching quantum worlds in a straightforward way. Indeed, we show that you should assign equal credences to two different branches *if and only if* the amplitudes for each branch are precisely equal! That’s because the proof of *Indifference* relies on shifting around different parts of the state of the universe and demanding that the answers to local questions not be altered; it turns out that this only works in quantum mechanics if the amplitudes are equal, which is certainly consistent with the Born Rule.

See the papers for the actual argument — it’s straightforward but a little tedious. The basic idea is that you set up a situation in which more than one quantum object is measured at the same time, and you ask what happens when you consider different objects to be “the system you will look at” versus “part of the environment.” If you want there to be a consistent way of assigning credences in all cases, you are led inevitably to equal probabilities when (and only when) the amplitudes are equal.
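The special role of equal amplitudes can be seen in a few lines (a sketch in the spirit of Zurek's envariance, with invented labels): a swap of the two branches on the system side can be undone by a swap on the environment side if and only if the amplitudes are equal.

```python
import math

def amps(alpha, beta):
    """State alpha |up, e1> + beta |down, e2> as a dict of amplitudes."""
    return {("up", "e1"): alpha, ("down", "e2"): beta}

def swap_system(state):
    flip = {"up": "down", "down": "up"}
    return {(flip[s], e): a for (s, e), a in state.items()}

def swap_environment(state):
    flip = {"e1": "e2", "e2": "e1"}
    return {(s, flip[e]): a for (s, e), a in state.items()}

equal = amps(1 / math.sqrt(2), 1 / math.sqrt(2))
unequal = amps(1 / math.sqrt(3), math.sqrt(2 / 3))

# undoing a system swap with an environment swap works only for equal amplitudes
print(swap_environment(swap_system(equal)) == equal)      # True
print(swap_environment(swap_system(unequal)) == unequal)  # False
```

Since a change purely to the environment can't alter local credences, the two equal-amplitude branches must get equal credence; the unequal case enjoys no such symmetry.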

What if the amplitudes for the two branches are not equal? Here we can borrow some math from Zurek. (Indeed, our argument can be thought of as a love child of Vaidman and Zurek, with Elga as midwife.) In his envariance paper, Zurek shows how to start with a case of unequal amplitudes and reduce it to the case of many more branches with equal amplitudes. The number of these pseudo-branches you need is proportional to — wait for it — the square of the amplitude. Thus, you get out the full Born Rule, simply by demanding that we assign credences in situations of self-locating uncertainty in a way that is consistent with *ESP*.
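The fine-graining move can be sketched numerically (the weights below are illustrative): a branch whose Born weight |amplitude|² is a rational p/q gets split, using extra environment states, into p sub-branches of equal amplitude √(1/q), and counting equal pieces recovers the Born Rule.

```python
import math
from fractions import Fraction

weights = [Fraction(1, 3), Fraction(2, 3)]         # |alpha|^2, |beta|^2

q = math.lcm(*(w.denominator for w in weights))    # total equal sub-branches
pieces = [w.numerator * (q // w.denominator) for w in weights]
sub_amplitude = math.sqrt(1 / q)                   # the same for every piece

print(pieces)   # [1, 2]: indifference over 3 equal pieces gives 1/3 and 2/3
```

The number of pieces each branch receives is proportional to its amplitude squared, which is the whole point.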

We like this derivation in part because it treats probabilities as epistemic (statements about our knowledge of the world), not merely operational. Quantum probabilities are really credences — statements about the best degree of belief we can assign in conditions of uncertainty — rather than statements about truly stochastic dynamics or frequencies in the limit of an infinite number of outcomes. But these degrees of belief aren’t completely subjective in the conventional sense, either; there is a uniquely rational choice for how to assign them.

Working on this project has increased my own personal credence in the correctness of the Everett approach to quantum mechanics from “pretty high” to “extremely high indeed.” There are still puzzles to be worked out, no doubt, especially around the issues of exactly how and when branching happens, and how branching structures are best defined. (I’m off to a workshop next month to think about precisely these questions.) But these seem like relatively tractable technical challenges to me, rather than looming deal-breakers. EQM is an incredibly simple theory that (I can now argue in good faith) makes sense and fits the data. Now it’s just a matter of convincing the rest of the world!

One problem I have with all of these attempts to get the born rule (the problem applies equally to your approach and to the Deutsch-Wallace approach) is that they all go like this.

1. Assume decoherence gets you branches in some preferred basis.

2. Give an argument that the Born rule applied to the amplitudes of these branches yields something worthy of the name ‘probability.’

The problem is that these steps happen in the reverse order that one would like them to happen. Look at step one. Decoherence arguments involve steps

1.a) showing that as the system+detector gets entangled with the environment, the reduced density matrix of this entangled pair evolves such that all the off-diagonal elements get very close to zero,

and

1.b) reasoning that therefore, each diagonal element corresponds to an emergent causally inert “branch.”

But step 1.b is fishy insofar as it happens before step 2. Who cares if the little numbers on the off-diagonals are very close to zero, until I know what their physical interpretation is? Not all very small numbers in physics can be interpreted as standing in front of unimportant things. Now, if we could accomplish step 2, then we could discard the off-diagonal elements, because we know that very small _probabilities_ are unimportant. But the cart has been put before the horse. We can’t conclude that the “branches” are real and causally inert and have independent “observers” in them _until_ we have a physical interpretation of the off-diagonal elements being small. But all of these Everettian moves do 1.b first, and only afterwards do 2.


“With this assumption in hand, we are able to demonstrate that Indifference does not apply to branching quantum worlds in a straightforward way”

– This is where I lose the thread of the argument. This is the key problem with MWI and I would appreciate an intuitive explanation. It’s like you’ve just proved that 2+2=5 but the key step is “straightforward but technical”


Eric, I’m not sure I follow the worry. The fact that the off-diagonal elements are small tells us that the different branches don’t interfere with each other in terms of their future evolution. I.e., I could evolve a branch forward in time, and the result is completely independent of the existence of the other branches. That doesn’t seem to rely directly on any probability interpretation, but maybe I’m missing something.
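In a toy model (with invented amplitudes and an adjustable overlap between environment states), the suppression of interference looks like this:

```python
import math

# System amplitudes alpha, beta entangled with environment states whose
# (real) overlap is eps. The reduced density matrix's off-diagonal entry
# is alpha * conj(beta) * eps: it shrinks as the environments decohere.
def reduced_density(alpha, beta, eps):
    """rho for alpha|0>|E0> + beta|1>|E1>, with real overlap <E0|E1> = eps."""
    return [[abs(alpha) ** 2, alpha * beta.conjugate() * eps],
            [beta * alpha.conjugate() * eps, abs(beta) ** 2]]

alpha = beta = 1 / math.sqrt(2)
for eps in (1.0, 0.1, 1e-6):
    print(eps, reduced_density(alpha, beta, eps)[0][1])
```

The future evolution of either diagonal block is insensitive to the other up to corrections of order eps, whatever interpretation one eventually attaches to those corrections.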


Rationalist– Have a look at the paper. Sometimes arguments just have to be technical.


Sean,

Even if the off-diagonal elements are small, they are nonzero, so technically the branches still interfere, correct? How does this happen, and how can this be measured?


The argument you give here shows how, given a particular wave function, the Born Rule gives the right credences for observing various outcomes. But how do you know the wave function in the first place?

In the real world, we know wave functions by observing relative frequencies of outcomes. For example, if you tell me that the device in your lab produces electrons with the (spin) wave function 1/sqrt(2) |up> + 1/sqrt(2) |down>, and I ask you how you know that, you’re not going to show me a mathematical derivation of what credence you should assign to up vs. down; you’re going to show me data from the test runs you made of the device, that recorded equal numbers of up electrons and down electrons.

But it seems to me that, if the MWI is true, we can’t draw that conclusion from the test data, because if the MWI is true, *any* wave function with nonzero amplitude for both |up> and |down> will produce a “world” in which equal numbers of up and down electrons are observed. So I don’t see how your argument justifies assigning equal amplitudes to |up> and |down> based on such test data.


Stewart– In principle, yes. But the numbers are incredibly super-tiny — you’d be better off looking at a glass of cool water and waiting for it to spontaneously evolve into an ice cube in a glass of warm water.

Peter– That’s something else we discuss in the paper. We show that the ordinary rules for Bayesian inference and hypothesis-testing are perfectly well respected by EQM. Of course unlikely things will happen, but that’s not what one should expect. It’s a big multiverse, so someone is going to be unlucky and experience very low-probability series of events (just as they would in a big classical universe).
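As a minimal sketch of the kind of inference at issue (the hypotheses, prior, and simulated data here are all invented), Born-Rule likelihoods update credences over candidate wave functions in the ordinary Bayesian way:

```python
import random

# Two hypotheses about a device's wave function, distinguished by
# |alpha|^2, starting from a flat prior; update on simulated spin data.
posterior = {0.5: 0.5, 0.9: 0.5}       # |alpha|^2 -> credence

random.seed(1)
data = [random.random() < 0.5 for _ in range(200)]   # true |alpha|^2 = 0.5

for saw_up in data:
    posterior = {p: w * (p if saw_up else 1 - p) for p, w in posterior.items()}
    total = sum(posterior.values())
    posterior = {p: w / total for p, w in posterior.items()}

print(posterior)   # nearly all credence lands on the 0.5 hypothesis
```

Branches where the data are wildly unrepresentative exist, but they carry tiny Born weight, which is why this procedure is reliable for almost all of one's credence.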


Sean,

In principle, yes; so isn’t there something sort of wrong about that? Anything that can happen, in quantum mechanics, will happen. So since the off-diagonals are nonzero, how will this interference take place when it happens? Can this be measured?


I admit I looked at the paper just so I could see your solution to the quantum sleeping beauty problem. Looks great! Makes me feel like some philosophical dilemmas really do have answers.


Hi Sean,

OK, maybe that helps. But let me be clear about what you are saying. Suppose for simplicity that my system plus detector evolves into only two “branches”. You say, “I could evolve a branch forward in time, and the result is completely independent of the existence of the other branches.” I take it you really mean, as you say in response to Stewart, that the degree to which they are not completely independent is represented by numbers that are incredibly super tiny. But I still have no physical interpretation of those tiny effects. You say, “you’d be better off looking at a glass of cool water and waiting for it to spontaneously evolve into an ice cube in a glass of warm water,” but I don’t know how you can say what impact those small numbers have on what I am likely to see until you have interpreted them as relating to probabilities.


Sean,

The problem with the Everettian interpretation is that it assumes that QM is fundamentally correct. I think this is a fairly unsafe assumption and we have lots of (indirect) evidence to tell us that QM is incomplete.

Sure, if QM is complete as we know it, MWI is the simplest explanation. But it seems much more likely that QM is in fact not complete, and therefore any conclusions derived by assuming it is complete are meaningless.


Sean, you say that “Quantum probabilities are really credences … rather than statements about truly stochastic dynamics or frequencies in the limit of an infinite number of outcomes.” I don’t totally understand what you’re saying here — in the MWI, aren’t the probabilities both credences and frequencies? If I do a long sequence of approximately identical experiments, the quantum probabilities tell me the frequencies with which the different outcomes will be present in my branch of the wavefunction.


What about quantum recombination? Surely any physicality of all those copies is impossible if you intend to reconstitute a superposition in the same apparatus? Now you have a super-superposition of being in many worlds and not being in many worlds at the same time.


Sean,

Several remarks.

The first is that classical physics does indeed allow us to describe multiple worlds provided that we interpret classical probabilities according to something like David Lewis’s modal realism. When studying the evolution of classical probability distributions, all the states “are just there” in the formalism, so why not simply accept that they exist in reality, as one does in EQM?

My second remark is about axioms. All logical claims consist of premises (axioms), arguments that follow from those premises, and conclusions, and EQM is no different. Proponents often suggest that EQM doesn’t need as many axioms as the traditional interpretations. But the trouble with EQM is that although it seems at first like you don’t need very many axioms, the truth is that you do. Simply insisting that we don’t mess with the superposition rule isn’t enough. Quantum-information instrumentalism (say, QBism) doesn’t mess with superpositions either, and allows arbitrarily large systems to superpose. Declaring that we must interpret the elements of a superposition as physical parallel universes is therefore an affirmative, nontrivial axiom about the ontology of the theory, even if some people might regard it as an “obvious” axiom.

The pointer-variable argument also implicitly assumes axioms. We have to declare that something singles out a preferred basis (for the cat, this means that we need to single out the alive vs. dead basis, rather than, say, the alive+dead vs. alive-dead basis). You can keep adding additional systems and environments, but at some point you have to declare that once you’ve added enough, you can shout “stop!” and pick your preferred basis. And what is our criterion for picking that basis? That’s going to be another axiom! And if you pick locality (or something like it) as your preferred-basis-selection postulate, you have to contend with the fact that locality may not be a fundamental feature of reality once we figure out quantum gravity; in that case the EQM interpretation becomes sensitive to features of quantum gravity that we don’t yet know.

Finally, are you assuming that there’s some big universal wave function that evolves unitarily? Given all we know about eternal inflation, is this a reliable assumption anymore? Even if you’re willing to accept it, it represents another axiom to add on.

The problem with EQM is that this process of adding axioms keeps going on (your “epistemic separability principle,” for example, is another axiom, and far from an obvious one!), and even then we still have to contend with the serious trouble of trying to make sense of the concept of probability starting from deterministic assumptions, a serious philosophical problem on par with the is-ought problem of Hume.

So, to summarize, you can’t justifiably start by saying “Hey, I only need two axioms!” and then inserting additional axioms (some implicitly) as you proceed. At the end of the day, you’ll have as many axioms as (or more than), say, instrumentalism, but then you still have the weirdness of deriving probability from non-probability.


kashyap vasavada: Shorter – “Squirrel!”


OK, biologist here so take it easy on me.

I’m still having trouble with the cartoon representation of MWI where you have a film splitting into two films. Is this supposed to apply only in (simple) cases of binary events? I understand the value of focusing on simple examples (spin-up/down), but what is the cartoon representation for a continuous range of possibilities (electron position)? Does the film split into an infinity of films? (A film shmear?)

[Asked in previous post but too late for answer.] Am I allowed to think of MWI as many superpositions rather than many universes? When the cartoon-filmstrip splits, I imagine all mass/energy doubling. However, when I think of Schrödinger’s Cat, it never occurred to me that you had 10 lbs of cat (before box closed), then somehow 20 lbs of cat (during superposition), then 10 lbs again when I observe it. It’s always been called Schrödinger’s Cat (singular) rather than Schrödinger’s Cats (plural) even before collapse. So why now must we have many worlds rather than one world in superposition?

And I have to ask (even though the answer seems obvious): Are there more worlds today than there were yesterday?

“There are still puzzles to be worked out, no doubt, especially around the issues of exactly how and when branching happens, …” I thought that a major appeal of this approach is that nothing “happens”. We have continuous evolution rather than “collapses” or any other magic moments.


@Reader297:

Thank you. I am a complete and abject layperson whose skill is reading English, not grasping the mathematics of quantum mechanics. But grasping English alone can you get you a little ways with a message as clear and consistent and easily stated as the Everettian premise: The sophisticated mathematical construction called the wave function, which to date matches quantum observations perfectly, describes the physical superposition of macroworlds. Then I read something like this article — a series of ideas, formulations, qualifications, theories and axioms dedicated to untying knots that, golly, just weren’t there in the beginning when I was promised the breath of simplicity itself — and nothing is quite so plain as the fact that there is nothing at all obvious about the “Many Worlds” interpretation, and that, for all the “evidence” at hand, the physical reality, if any, represented by the wave function is as far from being glimpsed as it has ever been.


About to run away, so some selective responses–

Eric– I think there is a fair point here, and I’m not sure I’ve thought it through completely. My feeling would be that it’s correct to say (1) off-diagonal terms are small, so branches evolve almost-independently, therefore (2) we can assign probabilities to branches, and once we do that we can (3) ask about the probability of the off-diagonal terms growing large and witnessing interference between branches. At the very least it seems like a self-consistent story.

D– The probabilities are credences at each individual branching. Of course they can lead to frequencies if you do many individual trials of some kind of experiment.

Charlie– The detailed process of branching is a technical problem worthy of more study, no doubt. As you say, there aren’t really any problems with energy conservation, once you understand how it works in regular quantum mechanics. (If you like, the thing that is conserved is the energy times the amplitude squared.)

Like or Dislike: 10 6

Sean, so you said:

“(1) off-diagonal terms are small, so branches evolve almost-independently, therefore (2) we can assign probabilities to branches, and once we do that we can (3) ask about the probability of the off-diagonal terms growing large and witnessing interference between branches.”

So, “almost-independently” isn’t the same as “actually independently”, but let’s leave that aside for now. If the off-diagonal terms do grow large, then that invalidates the “off-diagonal terms are small” assumption, therefore the branches are definitely not independent, therefore you cannot assign probabilities. There’s nothing circular/inconsistent about this?

OK…so I probably don’t understand this too well. Heck, I never even read your paper.

Like or Dislike: 6 2

I agree completely that the MWI is the simplest form of QM, and dislike the “disappearing-worlds interpretations.”

The real question is not whether MWI is a better way of looking at QM. The real question is whether QM is correct. Every physical law found to date has either proved itself an approximation, or is waiting for its day. QM is exceedingly likely just another law waiting for its day to end.

So if you assume that QM is in some way wrong, will infinite-dimensional Hilbert spaces and perfect linearity remain? Because without those things MWI is a non-starter.

MWI is built upon the one part of QM that is weakest – the collapse.

Most of the alternative ‘explanations’ of QM have an obvious place where collapse occurs due to limited bandwidth (any non linearity). It will have to be experiment that proves QM wrong, as it is pretty firmly entrenched in the Physics Community.

Like or Dislike: 6 4


Sean,

“There are still puzzles to be worked out, no doubt, especially around the issues of exactly how and when branching happens, and how branching structures are best defined. […] But these seem like relatively tractable technical challenges to me, rather than looming deal-breakers.”

I would really like to see how the pointer basis problem can be considered a technical challenge, let alone a tractable one. At best, you’ll need an additional set of axioms in the theory, which should fix the choice of the basis. But the looming feeling is that the task of actually formulating these axioms is equivalent to resolving the measurement problem and the Schrodinger’s cat paradox. And that may prove to be much more difficult than a mere technical challenge — just remember that people like von Neumann tried, failed and gave up on that challenge — so it’s certainly not going to be easy.

Best,

Marko

Well-loved. Like or Dislike: 9 0


Sean, perhaps this concern is beside the point, but let me rephrase my discomfort with the discrete nature of the story the MWI tells. As you explained extremely well here, the same physical system can have multiple descriptions, each appropriate for experiments done at a different length or energy scale. So suppose we look at scattering of protons and use our detector to measure some aspect of the final state. How many “worlds” do we have in the end? Do we think about all possible states of the final proton, or the vastly more complex story in terms of fluctuating quarks and gluons (which certainly interacted strongly with the measuring device)? Preferably those stories are equivalent in some sense, but I am not sure in what precise sense they are.

Well-loved. Like or Dislike: 12 0

Sean you are a cosmologist so you can be forgiven for taking Many-Worlds seriously.

All you have done is drop QM axiom #4 and replace it with amplitudes and ESP. But we may just as well drop other of the QM axioms and get other realist interpretations instead (e.g. Bohmianism or GRW).

And it would be nice if you addressed the fact that our free choice of measurement leads to the set of possible outcomes, so that, in a sense, humans decide which universes are created in the splitting. But that’s odd!

Like or Dislike: 6 8

I have a bit of a far-out question. In the path integral approach to QM, we see that two possibilities interfere — both contribute to the final amplitude before it is squared — only if they can converge onto the same physical state. All the ways of getting from the same state to the same state interfere. I assume that is equivalent to being entangled, but perhaps I am misunderstanding that. So the idea that two branches have split means, as I understand it, that there is no (or vanishingly small) possibility of them evolving to the same state.

Now, to throw in something I really know nothing about, I am vaguely aware that there are “bouncing universe” theories of cosmology. What I am wondering is if they involve different branches all converging on the same final state — which I imagine means non-unitary evolution, information being lost, which seems like it couldn’t be. But it seems like to be viable, these bouncing universe theories must get you back to a low-entropy post-bounce initial condition, and it’s a little hard to imagine how that could happen if that initial condition retained all the information specifying exactly which branch you were on before the bounce.

So that’s the vague understanding behind my question, but to state my question simply: in bouncing universe scenarios, do different branches converge on the same final state at the bounce? And if they do, doesn’t that mean they are entangled?

Like or Dislike: 4 0

“Now it’s just a matter of convincing the rest of the world!” I would say not to worry about convincing anybody, since in an infinite number of universes you have already convinced everybody.

You just happen to be in the wrong infinity.

Well-loved. Like or Dislike: 13 3

I am a layperson here also, a physician fascinated by cosmology; I have listened to all the Teaching Company courses related to physics and read all the cosmology books aimed at lay people. I have a reasonably good grasp of the wave function and the collapse of the wave function for small particles, and how that leads to electron tunneling and other strange but true phenomena.

I absolutely cannot grasp the thought that there is a wave function for a complex large object. Sean used an orange in one of his lectures. Really? How about an animal? On a microscopic level the many parts of an orange or parts of an animal aren’t even close to each other. Can something larger than a molecule really have just one wave function?

I would be so appreciative if any of the physicists here could help me understand this question. Thank-you.

Like or Dislike: 2 0

“a vanishingly small percentage of the 200+ comments actually addressed the point of the article”

What were the chances?

Like or Dislike: 4 0

Jerry, I have two answers to your question. First, the simplest example of a wave function, for a single particle confined to a 1-dimensional box, is not just the wave function for the particle; it is the wave function for the particle confined to a box of a given size. Choose a box of a different size and you get a different wave function. The box can be a micron or a mile or a light-year in length.

But getting back to the wave function for the orange: it would be the wave function for all the atoms in the orange, psi = psi(x1, x2, …, xN). Then the probability of finding particle 1 near x1, particle 2 near x2, etc. is given by the magnitude of psi squared, |psi|^2.

The wave function for the many-particle orange is a vastly more complicated object than the wave function for a single particle in a box. But even in the simple case of a lone particle in a box, the wave function is not just for the particle; it is for the box/particle system, and the box can be enormous.
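To make the many-particle formula concrete, here is a minimal numerical sketch (the grid size and the Gaussian form of psi are my own illustrative choices, not anything from the comment):

```python
import numpy as np

# A toy discretized two-particle wave function psi(x1, x2), to make
# "psi = psi(x1, x2, ..., xN)" concrete for N = 2 particles on a grid.
n = 50
x = np.linspace(0.0, 1.0, n)
x1, x2 = np.meshgrid(x, x, indexing="ij")

# Made-up amplitude favouring configurations where the particles sit
# about 0.3 apart, with an arbitrary phase (phases drop out of |psi|^2).
psi = np.exp(-((x1 - x2 - 0.3) ** 2) / 0.02) * np.exp(1j * 5.0 * x1)

# Born rule on the grid: P(x1 cell, x2 cell) is proportional to |psi|^2.
prob = np.abs(psi) ** 2
prob /= prob.sum()

p1 = prob.sum(axis=1)  # marginal distribution for particle 1 alone
assert np.isclose(prob.sum(), 1.0) and np.isclose(p1.sum(), 1.0)
```

The point of the sketch is that there is one joint wave function for the whole system, from which probabilities for any single particle come out as marginals.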

Like or Dislike: 3 0

Thank-you John-

That makes so much more sense to me than thinking of the wave function of the orange as a whole as being the same as that of an electron or proton.

Now, since the atoms inside the orange are interacting with each other, isn’t that a measurement? Isn’t that an “observation”? Doesn’t that collapse the wave function for the orange as a whole? Even though we can’t know the position and momentum of each electron or quark in the orange, don’t we now know the position and momentum of the orange?

Like or Dislike: 0 0

Moshe– The “how many worlds” question is a good one, when we think about realistic situations. (I’ve heard experts say it’s just not a well-defined question, but I haven’t completely understood the argument.) But there’s no ambiguity concerning e.g. protons vs. quarks. You just look at what is entangled with what. In an ordinary nucleon, the quarks and gluons are entangled with each other in a very particular way to make the lowest-lying state. That state just has a couple of remaining quantum numbers (spin, position) that can possibly entangle with the outside world and lead to decoherent branches.

Jerry– According to quantum mechanics of any sort, there is actually only one wave function for the entire universe. Each living being is a part of it, just like each atom or particle is.

Like or Dislike: 7 5

Does this analogy help Jerry: when you listen to a symphony orchestra, there’s just one soundwave detected by your ears. It’s the brain that interprets that wave as a combination of violins, flutes, horns, oboes etc., and the better your musical ear the more easily you can mentally separate the wave that came into your ear into its constituent causes. If your “ear” is especially discriminating, you can attend to individual harmonics of a single instrument–but this is all in the interpretation; the pressure wave (“soundwave”) could be decomposed into sinusoidal components in any number of ways. The analogy, then, is that there’s a single wave function for the entire universe, but we can interpret parts of it to be associated with various smaller things (like cats) and sub-things (like their whiskers) and sub-sub-things (molecules)–etc. etc.
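The analogy can even be run in code. Below is a small sketch (the tone frequencies and sample rate are arbitrary choices of mine): one summed “pressure wave” is decomposed back into its constituent tones by a Fourier transform, which plays the role of the discriminating ear.

```python
import numpy as np

# One second of a combined "pressure wave": two instruments at 440 Hz
# and 660 Hz, sampled at 8000 samples per second.
rate = 8000
t = np.arange(rate) / rate
wave = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# The Fourier transform separates the single wave into components;
# with a 1-second window, bin k of rfft corresponds to k Hz.
spectrum = np.abs(np.fft.rfft(wave))
peaks = np.argsort(spectrum)[-2:]  # the two strongest components

assert set(peaks) == {440, 660}
```

As the comment says, this decomposition is an interpretation: the same wave could be decomposed in any number of other bases.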

Well-loved. Like or Dislike: 8 1

The reliance on entanglement would work well with LQG. It would be interesting if entanglement, long ignored, was this grand mechanism responsible for dynamic interaction.

Like or Dislike: 0 3

Zurek is definitely worth a read.

Experimentally studying the boundary between coherence and decoherence is vitally important. If anything, these studies have the practical benefit of showing how ‘macroscopic’ the entangled states we can manipulate can become.

I am curious whether there are any known ways to stimulate emission of 3.55 keV photons (like those described here: http://arxiv.org/abs/1402.2301). In light of this discussion: what part of our universe can make such states if there are no known atoms (machinery) or coherent interactions with fields that generate them? Is it some complex quantum-mechanical explanation, or is it more likely a semi-classical explanation for the presence of such an X-ray line?

Like or Dislike: 1 0

I think that a lot of the problems that people have with the multiple worlds idea (and really to quantum mechanics in general) are related to time. Things are usually presented as distinct worlds branching or splitting to become separate worlds, (or collapse occurring) and this happens “as time goes by”. I will try to quickly put together a string of thoughts here, but certainly it won’t be very clear. Hopefully just writing something down clarifies things in my own mind a little bit. Sorry if I start rambling.

I think it is very useful to think about physics (and the nature of reality) from the basic point of view that time doesn’t flow at all, and what we usually think of as time is not really a singular uni-directional phenomenon.

Most quantum mechanics examples and experiments use microscopic particles. Time is easily shown to have no preferred direction for microscopic particles. The examples used to show this are clear only because of the small numbers of possible options in each direction of time. As soon as one direction starts to have many more possible states than the other, then you can tell which is the future and which is the past. The future just has more options than the past, that is why it is the future.

These options are actually the components of entropy. It isn’t that there is just more entropy in the future, rather, the future is simply defined by the fact that it is the direction of more options.

But what does that really mean? It probably means that in the ‘future’ direction, the universal wave function has more small scale structure. More parts of it behave independently, or have decoherence with other parts. This is the ‘splitting’ of worlds. These decoherent branches are the ‘options’ that define entropy. Without distinct options, entropy doesn’t make sense, and time doesn’t appear to flow in one direction.

Of course the ‘time flowing’ and ‘world splitting’ are just illusions resulting from our position in the system. All of the branches actually do interact with each other, it is just mostly in the direction that we call the ‘past’.

It is only because we ourselves are actually a small part of this large system that we can not see the time independence. We are constrained by the decoherent structure to only be able to observe toward the past direction. This is what makes quantum mechanics and relativity seem difficult and illogical to people.

Most of the difficult-to-grasp principles of quantum mechanics are made simpler if you think of time as just another direction. Take the EPR paradox, where the measurement of a state of an entangled particle “instantaneously” determines the result of a measurement separated by a distance of millions of light years (seemingly faster than the speed of light). The problem is the use of the term instantaneous. These particles can ‘move’ backward in time as easily as forward in time (just as they can ‘move’ up or down, or left or right).

If you think of ‘the outcome’ of the final measurement of one of the particles traveling backwards in time (physically with the particle), to the point where the entanglement occurred, and then forward in time with the other particle to when that one is measured (and vice versa), then there is never any action at a distance. All actions are local. This seems strange, but it is only because we can’t see the whole picture.

Quantum mechanical interactions are actually very similar to classical interactions, IF you consider time to be the same as the other spatial dimensions. The complication arises because time seems to have a property that differentiates it from the other dimensions: there are more options in one direction versus the other. However, this is also an illusion.

All of these dimensions are actually part of the same ‘thing’ which is the universal wave function. What we see as ‘THE time dimension’ is just whichever direction has the most options when seen from any particular spot. Time is defined by the entropy, which is defined by the decoherence branches of the wave function.

Time dilation and length contraction in special relativity are what you start to observe when more than one dimension starts to have a large number of options (decoherence branches). It is no longer so obvious which direction is the “time” dimension. You can then see that ‘before’ and ‘after’ are not definite things, but it is nonetheless always a consistent system because it is all one wave function.

While it isn’t totally clear how to put gravity and general relativity together with quantum mechanics, that is surely because of the issues of thinking about movement, velocity, and acceleration when time is not a distinct parameter.

Clearly some types of particles interact with other particles not only in the classical three dimensions, but rather in four dimensions, such that the time dimension for these interactions is not in exactly the same direction as the bulk of the surrounding particles. This can lead to seemingly stationary particles experiencing acceleration, such as gravity.

Applying this same kind of thought to the many worlds interpretation makes it easier to see that all of the ‘worlds’ do interact with each other, but mostly through the ‘past’ direction. The separate ‘worlds’ are only separate from our point of view. There is no need to worry about conservation of energy or mass or whatever people get hung up on. Everything is part of the same wave function, which is time independent.

Like or Dislike: 4 7

Sean,

Hope you will share results from the workshop on when and how branching occurs when you have time. I guess they’re technical issues to the faithful but may seem more fundamental to the agnostics.

Thanks for the great blog.

Like or Dislike: 2 0

As a theory, Many Worlds is in a bad state, and this paper is an example of why.

If someone tells me that there are many quantum worlds in a single wavefunction, I expect that they can tell me exactly what part of a wavefunction is a world, and how many worlds there are in a given wavefunction.

As Sean says in his article, a naive attempt to be concrete about what a world is, and how many there are in a given wavefunction, leads to something which *disagrees* with experiment.

But rather than regard this as a point against Many Worlds, and rather than try new ways to carve up the wavefunction into definite worlds… instead we have contorted sophistical arguments about how you should *think* in a multiverse, as the explanation of the Born rule.

The intellectual decline comes when people stop regarding Born probabilities as frequencies, and stop wanting a straightforward theory in which you can “count the worlds”.

Common sense tells me that if A is observed happening twice as often as B, and if we are to believe in parallel universes, then there ought to be twice as many universes where A happens, or where A is seen to happen. But a detailed multiverse theory in which this is the case is hard to construct (Robin Hanson is one of the few to have tried).

Instead what we are getting (from Deutsch, from Wallace, now here) are these rambling arguments about decision theory, rationality, and epistemology in a multiverse. They all aim to produce a conclusion of the form, “you should think that A is twice as likely as B”, without having to exhibit a clear picture of reality in which A-worlds are twice as *common* as B-worlds.

A technical debunking of such arguments always ought to be possible. In the present case, it must have something to do with the use of these epistemic principles, “Indifference” vs “ESP”, but I still haven’t decoded it. What I want to do in this comment, is just to arm the reader with a general defense against this pernicious new trend in Many Worlds apologetics.

My suggested rule of thumb is this: if a Many Worlds theory *doesn’t* explain the Born rule by counting worlds, look upon it with suspicion, or just ignore it.

P.S. Sean cites Gleason’s theorem as a reason to think that probabilities in a quantum multiverse must come from the square of the amplitude. So please, Everett fans, why not try to come up with an exact and objective theory about how the wavefunction subdivides into worlds, that is somehow inspired by Gleason’s theorem? Rather than spreading confusion and an illusion of understanding.
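For what it’s worth, the counting demand can be made concrete with a toy sketch (the amplitudes and the “A”/“B” world labels are hypothetical illustrations, not anyone’s actual proposal): when the Born weights are commensurate, replicating branches in proportion reproduces the observed frequencies, while naive one-world-per-branch counting does not.

```python
from math import sqrt

# Toy two-outcome branching with commensurate Born weights 2/3 and 1/3.
amps = [sqrt(2 / 3), sqrt(1 / 3)]
weights = [abs(a) ** 2 for a in amps]  # Born rule: [2/3, 1/3]

# "Count the worlds": two A-worlds for every one B-world, so that
# frequency of A among worlds equals its Born weight.
worlds = ["A", "A", "B"]
freq_A = worlds.count("A") / len(worlds)
assert abs(freq_A - weights[0]) < 1e-9  # counting matches Born here

# Naive counting (one world per branch) predicts 1/2 each, which
# disagrees with the Born weight of 2/3:
naive_A = 1 / len(amps)
assert abs(naive_A - weights[0]) > 0.1
```

This only works because 2/3 and 1/3 are commensurate; the hard part, as the comment says, is a detailed theory that does this in general.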

Well-loved. Like or Dislike: 33 5

Mitchell Porter,

“If someone tells me that there are many quantum worlds in a single wavefunction, I expect that they can tell me exactly what part of a wavefunction is a world, and how many worlds there are in a given wavefunction.”

If there were a way to specify which part of the wavefunction is “a world”, it would be (more or less) straightforward to count how many of them there are, and use their frequencies as probabilities. Due to the separability axiom for the Hilbert space in QM, there would be at most countably infinitely many “worlds” in a given wavefunction, and the number of appearances of each particular “world” could be, well, counted.

But the main problem of MWI is that actually there is no way to specify which part of the wavefunction is “a world”. This is a serious problem of MWI, acknowledged by MWI fans (including even Sean, although he tries his best to avoid talking about it), and is called “the pointer basis problem”. It is the raison d’être for all those additional axioms in the textbook version of QM, as compared to MWI. It lies at the core of the measurement problem and the Schrodinger cat paradox (see my previous comment).

MWI, as it stands, has no solution to this problem, and it can be resolved only by postulating additional axioms. These additional axioms will in turn kill the argument of parsimony that MWI fans are so fond of.

HTH,

Marko

Well-loved. Like or Dislike: 19 1

Is there an experimental test for the MWI? Can it be falsified?

Like or Dislike: 3 1

I do not believe there is any logical pathway from Schrodinger’s equation describing a quantum system to the claim that there must be many universes in which every possible outcome of every “measurement” that has ever taken place is realized.

Instead, we need to treat Schrodinger’s equation as a model that works, not as absolute “gospel”. The Copenhagen Interpretation, as far as I’m concerned, is a model, and a very good one. And that’s all that we can ever hope to achieve in science. If we find a set of equations that accounts for observations, then we are doing good science. However, we shouldn’t overly extend those equations to a point wherein there is no logical pathway joining the two together.

For example, a typical optimization problem in first-year calculus is finding the length of a rectangle that maximizes area, given certain constraints. Usually, we need to solve a quadratic equation for the length, and we get a positive solution and a negative solution. The positive solution is the correct, physically relevant solution, and the negative solution is not physically relevant. We don’t take every mathematical solution seriously. Just because it comes out of the math doesn’t mean it’s right. If, somehow, many worlds comes out of quantum mechanics, that doesn’t mean it’s right. Our mathematics serves as a useful model, and nothing more. There is no logical pathway from the quadratic equation to a negative length. In turn, there is no logical pathway from Schrodinger’s equation to many worlds.

Well-loved. Like or Dislike: 11 2


I was struck by the sponsorship of your upcoming workshop by the Templeton Foundation. Hope they don’t have one of their pets participating.

Like or Dislike: 4 2


A pragmatic question. Since so far MWI has no new predictions compared to the original collapse model, how can one hope to convince the adepts of other “formulations”? If I believe in objective collapse, with all the extra postulates, or in Bohmian mechanics, with its pilot wave, why should I change my mind based on logic alone, in defiance of the scientific approach, where experiment is the ultimate arbiter?

Clearly simplicity alone is not good enough, since [SU(3)xSU(2)xU(1)x 3 generations x 20+ parameters x dark matter x dark energy x GR x initial conditions] is anything but simple. This rather complicated and incomplete model won over much simpler alternatives thanks to many extensive cycles of modeling, experimenting, observing and revising. Why should EQM be different?

Like or Dislike: 5 1

“Common sense tells me that if A is observed happening twice as often as B, and if we are to believe in parallel universes, then there ought to be twice as many universes where A happens, or where A is seen to happen. But a detailed multiverse theory in which this is the case is hard to construct (Robin Hanson is one of the few to have tried).”

Let’s take the Schrodinger’s Cat case and postulate that usually the cat lives, but sometimes enough oxygen molecules tunnel their way out of the box so that the cat suffocates. Whether a specific O2 does or does not tunnel out of the box is a split, so there are many more universes in which the cat lives than those in which it dies, and the amplitudes measured over many such unethical experiments would be consistent with this.

Another case: a single electron is in an energy well. Sometimes (rarely) it bounces out of the well, but usually it does not. What if each different height it bounces is a split? Then again, there are many more universes in which it does not bounce high enough to get out than universes in which it does.

Like or Dislike: 3 5

I just watched you on YouTube Sixty Symbols talking about the “embarrassment” that there’s no consensus about “meaning” of QM, although it works perfectly. I was wondering that since we (well, you professionals) seem to agree that some deeper theory is needed that will unite GR and QM, is it possible that it’s just too early to try to fully understand QM? Didn’t it take two hundred years to begin to understand Newton’s gravity? Could this be like Einstein’s futile attempt at unification before all the fields and particles were discovered?

I imagine this probably seems very naive, and that everyone has thought of this many times!

Like or Dislike: 3 3

There are three questions I needed to address but was unable to add to my earlier comment about quantum recombination, where, for example, neutrons are split into separate states but the state description is then merged again into the original state.

So the first question is whether or not many worlds is at all reasonable, and it plainly isn’t. The whole problem with Everettian metaphysics is that it specifies a hierarchical tree ontology, whereas our friend mother nature prefers lattices. However, my denouncement of Everettianism did not actually address the justification for Born’s hypothesis, which raises two additional questions.

Traditional mathematics was very poor at describing an oriented surface. Not so when bivectors are introduced. A bivector is an oriented surface and belongs to a much-improved conceptual algebra for describing physics, known as “Geometric Algebra”. In particular the treatment of rotations, and of rotational kinematics, is vastly improved. Think particle physics, think rotational dynamics. (Think relativity too; Lorentz boosts are again rotations.)

Multiplication of bivectors is a natural operation, and leads to measures of magnitude that are geometric, and also meaningful. The Schwarz inequalities are fundamental properties of such spaces, and from there, à la Pythagoras, you get to natural definitions of magnitude. This is the domain where conserved quantities appear. As an example, angular momentum appears as an area in orbital theory, if you recall Kepler’s law: equal areas traced in equal times. The mathematical-physical object behind it is a “rotor”.

It turns out that rotors tend to dominate all kinds of kinematical equations, which gives a direct dynamical link to the interpretation of bivector products, and their algebraic properties. You can spot that a mile away, when Planck’s constant appears, there is angular momentum along for the ride.

So there are two aspects to the Born rule. One is whether observables would take the form of a product; this has been strongly justified on entirely algebraic grounds.

The second question is where the observations arise, which is indeed a question of interpretation.

Initially Born in fact suggested the function without the square, only adding the square in a footnote. Schrodinger guessed at his equations too, as did others. But the question would remain the same regardless of whether it was a cube or a fourth or a half power that worked in practice. But of course it is the square that works, and so the answer lies where it always has in physics: in the investigation of the mathematics, and the foundations therein, that lead to it being successful as applied to the model.

Geometric algebra is new, but already it has made dramatic simplifications in the way that the physical viewpoint is expressed, and real insights have been revealed. This is obviously the way forward. I doubt very much that new age psycho-science can contribute anything useful.

Like or Dislike: 2 5

I guess this is trivial but another difficulty with equiprobable universes is the following: if the squared amplitudes are incommensurate* then the right probability law cannot be obtained with a finite number of universes.

* (say alpha squared = 1/sqrt(2) and beta squared = 1 – alpha squared)
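A quick numerical check of this point (the search bound of 200 universes is an arbitrary choice of mine): with N equal-weight universes, the only achievable probabilities are fractions k/N, and for an irrational target like 1/sqrt(2) the error never reaches zero.

```python
from math import sqrt

# An irrational squared amplitude, as in the footnote's example.
target = 1 / sqrt(2)

# With N equal-weight universes, achievable probabilities are k/N.
# Scan all fractions with denominator up to 200: none hits the target.
best = min(abs(k / N - target)
           for N in range(1, 200)
           for k in range(N + 1))

assert best > 1e-6  # the best achievable error stays strictly positive
```

The best approximation here (around 70/99) still misses by a few parts in 10^5, and number theory guarantees that no finite denominator closes the gap.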

Like or Dislike: 3 0

Sorry, missed your reply for a while, and I’m not sure if you’re still wading through the comments.

“(1) off-diagonal terms are small, so branches evolve almost-independently, therefore (2) we can assign probabilities to branches, and once we do that we can (3) ask about the probability of the off-diagonal terms growing large and witnessing interference between branches. ”

Step (1) cannot mention “almost-independence.” It can only mention small numbers which, if they were interference terms (a probabilistic notion; look at what interference is in the two-slit experiment), would be measures of the degree of independence. Then, in step (2), in all the approaches I’m familiar with, we talk about emergent observers living in the branches. I’m not sure I understand the legitimacy of talking about emergent observers. All I see, at this point, is a big mushy wave function with some small off-diagonal elements in some of its representations. And now (I would obviously say) step three makes no sense to me.

Well-loved. Like or Dislike: 5 0

Pingback: alQpr » Blog Archive » Measurement in QM

The problem with presenting MWI in a textbook is that the two mathematical postulates, (1) and (2), lack any connection to experiments and observations. And nobody has been able to come up with simple MWI versions of the rest of the postulates, as this article and the consequent discussion make clear.

And MWI is not helped by all the discussions about the Born rule. I don’t know why people are so obsessed with deriving it from (1) and (2). Just postulate it and move on. Until the preferred-basis problem is resolved or better understood, any proof will be flawed.

The real contribution of Everett was the removal/replacement of the collapse postulate. I would like to see the MWI postulates/rules covering the same areas as the standard postulates.

Like or Dislike: 3 3

Hi Sean,

It would be great if your next book were on EQM.

Thanks,

Jake

Like or Dislike: 1 0

The MWI is a tempting siren. Telling Sean that he is wrong will not work because he is stuck in the limbo of thinking it could be right (which would be cool) and knowing there is no way to prove it wrong. I could do some probability derivation tricks, but this is probably not the right crowd. Maybe we should come back to the postulates of quantum when everyone has cooled off a bit. In my experience the best thing to do with someone stuck in a quantum spiral is to give them a different paradox that will distract them. A lot of the confusion I’ve seen in quantum is simply due to the semantics of the theory. Don’t worry Sean I’ll make up an easy relativity problem to help you get out of your quantum doldrums. Be right back…..

Like or Dislike: 3 6

Funny, as a layman, I always thought that was the practical meaning of the probability. The world “splits” and most of the splits end up where most of the probability lies. Yes, there are worlds where the particle ends up in some remote corner of the universe rather than hitting the screen in one of the bright bars of the two-slit pattern, but most of them are worlds where the particle lands on the bright bars (in proportion to the square of the amplitude).

Like or Dislike: 0 0

“Exactly how and when branching happens” is the rub. It’s the giant elephant for MWI because you are at bottom replacing the quantum realm with a classical one (explaining quantum mechanics through a classical artifice).

Like or Dislike: 2 1

Pingback: Quantum Sleeping Beauty and the Multiverse | Sean Carroll

@Jerry: Since Sean did not give you an elaborate answer, I will try to answer your interesting question as a physicist, since many of my friends and relatives are intelligent physicians and they ask such questions! Although I am not convinced by MWI, I believe there is hardly any doubt that quantum mechanics is mostly right. Every object has a de Broglie wavelength = h/(mv), where h is Planck’s constant, m is mass and v is the speed. So for small objects like electrons the wavelength, while small, is sometimes measurable, as in an electron microscope. For you and me it is extremely small, like 10^(-35) cm or so, far too small to produce any measurable effect. Thus the whole universe and all of us could have a wave function, but it will be hard to verify. Incidentally, closeness is a relative concept. The microscopic universe is essentially vacuum. An analogy for an atom is a football stadium, with the electrons in the stands and the nucleus roughly a football at the center of the field, with vacuum in between. Of course this is a crude description for visualization. For calculational purposes they are mostly wavy stuff.
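The arithmetic in this comment is easy to check. Below is a minimal sketch; the helper name `de_broglie_wavelength` and the chosen electron speed (about 1% of light speed) are my own illustrative choices, not from the comment.

```python
# Sketch: de Broglie wavelength lambda = h / (m * v).
h = 6.626e-34  # Planck's constant, in J*s


def de_broglie_wavelength(mass_kg, speed_m_s):
    """Return the de Broglie wavelength in metres."""
    return h / (mass_kg * speed_m_s)


# Electron at ~3e6 m/s: wavelength on the atomic scale (~2.4e-10 m),
# which is why electron microscopes work.
electron = de_broglie_wavelength(9.109e-31, 3e6)

# A 70 kg person walking at 1 m/s: ~9.5e-36 m, of the same vanishing
# order as the comment's figure, and utterly unmeasurable.
person = de_broglie_wavelength(70.0, 1.0)
```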

Like or Dislike: 0 0

You do realize that people at the LHC are claiming that the Higgs boson doesn’t have the correct mass for the multiverse theories, right?

Like or Dislike: 1 1

Is there a good reason why the precise verbal description of Born’s rule, that the probability equals the abs-squared amplitude (as in the mathematical expression), is abbreviated to just “the squared amplitude”? Is it a standard abbreviation? Until I realized that that’s how this post calls it in every instance, I was sure it was a mistake.

Like or Dislike: 0 0

It’s just convenient and short.

Like or Dislike: 1 0

I have an interpretational question that seems appropriate in this context. As I understand it, Feynman’s sum-over-histories approach gives an amplitude to each history, and you can apply the Born rule to get a probability for the given history. Is it completely crazy to apply this approach to the whole universe, and to postulate that there really only is one world, but it’s drawn from the space of all possible universe-histories with a probability distribution given by the Born rule applied to the amplitudes of the different histories?

My intuition about this is that, until I start talking about the One True Universe, I’m just describing one way of formulating Everettian QM. And that the strongest objections to postulating the One True Universe are philosophical. It violates symmetry, it’s not useful for talking about physics, and it violates Occam’s razor. But it might also be the case that there’s a fundamental reason the One True Universe model can’t be right. It might reflect a fundamental misunderstanding I have.

So am I just wrong or merely multiplying entities beyond necessity?

Like or Dislike: 1 0

Probability? I think it is time to re-interpret Copenhagen and discuss the absolute.

Einstein couldn’t find it because the speed of light stood in his way. And science can’t find it because they went the wrong way.

Certainty any One?

=

Like or Dislike: 0 2

Thank-you Dr Vasavada-

I do understand that in normal circumstances the tiny universe is nearly empty. Some exceptions I guess would be neutron stars and the plasma of particles in the first nanoseconds of the universe.

It appears to some of us amateurs that photons, electrons, protons, etc travel in waves and that their exact position and momentum cannot be known, as per the uncertainty principle.

It was my understanding that the “wave function” describes all the possible positions of the particle and the likelihood of the particle being in any of the possible positions was given by the square of the amplitude.

I was also of the belief that when a particle interacts with another particle, such as a photon kicking an electron into a higher energy level, for that instant the wave function collapses and the location of the event can be observed. A measurement is made.

My thought is that I cannot get my head around a single wave function describing

a complex living creature, but I can certainly imagine it as the integration of all the wave functions of the individual particles- ie. the sound of the symphony heard from all the instruments.

Please excuse my naivety in my posts- but this blog is followed by many of us trying to achieve better understanding.

I work in a busy Emergency Dept so I live the “many worlds” theory on a daily basis.

Thank-you all again,

Like or Dislike: 0 0

Sean,

I have a different interpretation. Whenever a measurement takes place, the universe splits into 100 universes. The number of universes containing outcome A is just the probability times 100, and so on. Now, if that product is not a whole number, then the universe splits into as many universes as are needed to have whole numbers of universes with each outcome. If the probability is, for example, 1/3 = 0.333333333…, then the universe splits into an infinite number of copies.

How do you like my interpretation?
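For what it’s worth, if the probabilities are exact fractions the minimal split is finite: the least common multiple of the denominators already yields whole numbers of universes per outcome (the infinite case for 1/3 arises only from insisting on decimal splitting). A toy sketch, with the helper name `minimal_branching` my own invention:

```python
from fractions import Fraction
from math import lcm


def minimal_branching(probs):
    """probs: list of Fractions summing to 1.
    Returns (total branches, whole-number count per outcome)."""
    total = lcm(*(p.denominator for p in probs))
    return total, [int(p * total) for p in probs]


# Probabilities 1/2, 1/3, 1/6: smallest split is 6 universes,
# with 3, 2, and 1 copies of each outcome respectively.
total, counts = minimal_branching([Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)])
```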

Like or Dislike: 1 3

I think that the fact that psi^star psi is the time component of a conserved four-current density should play a major role in any explanation of why psi^star psi represents a probability density.
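For reference, this is the standard non-relativistic statement (my own addition, in the usual Schrödinger-equation notation): the density and current

```latex
\rho = \psi^{*}\psi, \qquad
\mathbf{j} = \frac{\hbar}{2mi}\left(\psi^{*}\nabla\psi - \psi\nabla\psi^{*}\right)
```

obey the continuity equation

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0,
```

which is what makes a conserved, locally transported probability density a natural reading of psi^star psi.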

Like or Dislike: 0 0

I’m not a physicist. I am an armchair philosopher, however. I believe that a satisfying answer to why the probability is the square of the amplitude is that there are TWO state vectors, one evolving forward in time and the other in reverse. I haven’t read all the responses, but it seems to me that many physicists are not aware of Yakir Aharonov’s work (weak measurements, etc.). Here’s a link to the arXiv server for a paper titled “Measurement and Collapse within the Two-State-Vector Formalism”:

http://xxx.lanl.gov/pdf/1406.6382

For me this provides a reasonable answer to the “collapse” issue.

Like or Dislike: 0 1

@Jerry Salomone: You did raise interesting questions, so I jumped in! Let me add something to the concept of a wave function for a composite system. Up to a certain energy, a composite particle may behave like an elementary particle. For example, at low energies (nuclear physics) there is no need to consider the quark substructure of protons and neutrons. Writing a wave function of a proton at low energies in terms of products of wave functions of quarks, while not wrong, is unnecessarily complicated and one gets tangled in an unnecessary mess. Up to high energies it is perfectly OK to write a single wave function for a nucleon. Only at higher energies does one have to consider the quark substructure. Similarly, in condensed matter physics a whole atom may be regarded as having a single wave function.

Another thing: people are finding quantum effects for larger and larger systems, e.g. in lasers, superconductivity and Bose-Einstein condensation, and entanglement has been found at distances of several miles. Many physicists think there is nothing like classical mechanics! It is all quantum! So my guess is that eventually people may find quantum effects in biological systems and perhaps in the brain and consciousness!

Thanks for working in emergency dept. I may need people like you some time!

Like or Dislike: 0 0

You listed two postulates from Everett’s QM.

You should also add:

3. the wave function should describe all the particles that form an observer and its memory

4. you can deduce “relative states” of the model by determining what the modeled observer has measured by examining its memory

Including the observer and its memory is how Everett avoided the collapse.

Like or Dislike: 0 0

As you note in the paper, your approach directly shows how to derive the Born rule in finite dimensional Hilbert spaces when the coefficients of the wavefunction in your preferred basis are square-root-of-rational multiples of one another. By appealing to some continuity principle, you extend to the case where the coefficients are arbitrary real multiples of each other.

But you don’t say anything about when the coefficients are non-real multiples of each other (which is generically the case). Can you account for non-real coefficients?

Like or Dislike: 0 0

Given a specific state, you can always choose basis vectors such that the amplitudes are real.
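This reply’s point is mechanical to verify: multiplying each basis vector by a phase leaves the physical state (and the Born probabilities) unchanged while making the coefficients real. A minimal sketch of my own, using an arbitrary example state:

```python
import cmath

# Amplitudes of some state in an arbitrary basis (example values).
c = [0.6 * cmath.exp(0.7j), 0.8j]

# Redefine each basis vector |e_j> -> e^{i*theta_j}|e_j>; the coefficients
# pick up the opposite phase and become |c_j| >= 0.
c_real = [a * cmath.exp(-1j * cmath.phase(a)) for a in c]

# The imaginary parts vanish and normalization is untouched.
assert all(abs(a.imag) < 1e-12 for a in c_real)
assert abs(sum(abs(a) ** 2 for a in c_real) - 1.0) < 1e-12
```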

Like or Dislike: 3 0

I’m curious.

What consideration have you given to actually doing what Everett describes in his paper: using the mathematics of particle physics to construct an observer, an “automatically functioning machine with sensors and a memory”, as he writes?

In 1956, such a wave equation describing that large number of particles was surely a hypothetical notion.

But with the computing power of 2014, it should be within reach.

Like or Dislike: 0 1

Sean,

I very much appreciate your tie-in of self-location to the EQM Born rule. I have done something similar in my recent dissertation, although taking, I think, a quite different approach. Our starting points (self-location) and ending points (Born rule), however, are much the same, and I think we are pumping the same intuitions, one way or another.

Here is my reaction so far. First of all, I think anything that claims to be a response to the EQM Born rule objection must add something new beyond Gleason and Gleason-noncontextuality (GNC). Since GNC + Gleason’s theorem = Born rule (an analytic fact) this is an absolute must, although it seems not to be widely recognized for some reason.

The problem for me lies in whether ESP is any more intuitive and natural an assumption than GNC, from an EQM perspective. You defend ESP in physicalist terms, by talking about physical intuitions, which granted you are trying to minimize. However, the whole point of EQM is that the Born rule follows simply from the idea of a purely formal system (without any prior physical interpretation) in which there are these emergent phenomena called “observers”. Now we ask, assuming that the system describes multiple outcomes for one of these observers when they perform a measurement, how should that observer assign probabilities to these outcomes? Prior ideas of what is “natural” physically are not allowed. Only the formal wavefunction is allowed, plus presumably some principles or other to enable us to discern observers within the wavefunction. But these would be like principles for finding eddies in a river; they would describe emergent phenomena that are not fundamental constituents of the system. The wavefunction itself need not be “physical” but could be a simulation on a computer, or just left as an abstract mathematical object. It is purely formal. There is no inherent “space” and “time” and so on, because these are not formal entities, but physical ones.

So my question: why not just stick with GNC, since it is very analytic, and perfectly clear what it means? It requires no hand-wavey physical principles, no environment or environment-induced decoherence either — and would even be perfectly applicable to a solipsistic wavefunction, should we ever encounter one. Also, its truth seems much easier to justify, given its clarity, so long as one is an objectivist about probabilities (not excluding here the possibility that subjectivists may have their own reasons for accepting it). Thus, it can be justified (or refuted) prior to any discussion of EQM, at the level of choosing a foundation for probability theory. This cuts off the Born rule objectors before we even get to QM and challenges them to prove their faith in world-counting first, then argue against many worlds.

If this sounds critical, believe me when I say I am on your side! I just wonder if it might be more fruitful to challenge the objectors where they are truly weakest: their unquestioned faith in world counting. I have never understood where one gets world counting or observer counting as a basic a priori principle in the first place, but many people out there seem to think it is “just obvious”. But without it, there is no really viable Born rule objection to EQM, because there is no reasonable alternative measure on the table.

My own take on this can be found in my dissertation. Ch. 4 is my interpretation of probability (neither Bayesian nor frequentist, but algorithmic) and Ch. 5 is my discussion of self-location and self-selection, including Sleeping Beauty. Ch. 8 uses this foundation to make a start on an algorithmic reconstruction of QM:

http://hdl.handle.net/10315/27640

Like or Dislike: 2 2

Sean,

Regarding Gleason, one thing I find a bit confusing (and I may just be misunderstanding your language) is your statement:

” There is a result called Gleason’s Theorem, which says roughly that the Born Rule is the only consistent probability rule you can conceivably have that depends on the wave function alone. So the real question is not “Why squared?”, it’s “Whence probability?” ”

Gleason’s Theorem depends on noncontextuality, which I argue follows from objectivism. But it does not simply follow “from the wavefunction alone”, in the usual sense of that phrase.

Perhaps when you say “depends on the wavefunction alone” you mean that the Born rule is the only measure that is based on objective features of the wavefunction, rather than emergent properties. However, then you say that the real problem is “whence probability?”.

The whole idea of the alternative measure (world-counting) is that it does generate, just like the Born rule, a consistent nontrivial probability measure (“nontrivial” meaning not always just 0 or 1). It just isn’t upheld by experiment. So the real problem can’t just be “whence probability?” That is also another “probability objection” to EQM that has been expressed — why should there be any nontrivial probabilities at all? — but this is an entirely different objection (and with much less substance, I think) than the Born rule objection.

Remember that in EQM we are allowed the wavefunction plus (for good reason) the idea of an observer with a memory, as an emergent property. Since a subjective view of probability sees nontrivial probabilities as inherently “emergent”, a subjectivist world-counting measure might still follow “from the wavefunction alone”, in the sense that is relevant here.

After all, there would not be a Born rule debate if Gleason proved it already from the wavefunction alone, since the claim of the objectors is exactly that no such proof has been produced, and that they have an alternative (nontrivial) measure.

Like or Dislike: 3 1

Allan– I don’t necessarily disagree with your approach (“necessarily” inserted only because I won’t claim to have studied it carefully). There can be many different arguments to get the same right answer, and Chip and I are in favor of a plurality of approaches to deriving the Born Rule.

Having said that, I feel as if appeals to Gleason’s theorem aren’t addressing the actual worries that anti-Everettians have. The question really is primarily physical, rather than mathematical — everyone agrees that Gleason’s theorem is valid under the appropriate assumptions. Having interacted with well-informed and thoughtful anti-Everettians, it seems to me that their questions/objections tend to take forms like “Why shouldn’t I just count worlds?”, or “Why couldn’t I define arbitrary measures that are proportional to the number of descendants I have on each branch?”, or “Why should I think there is any uniquely-defined probability measure at all?”, or “Why shouldn’t my measure depend on what’s happening in other branches?”, or “Why are you even talking about probabilities when the theory is entirely deterministic?” These are all fundamentally physical/philosophical questions, not mathematical ones. So we tried to address them on their own terms, and we think that ESP is the kind of principle that most people would be willing to accept on basic physical grounds, independently of what theory we are working in.

For anyone who has alternative ways of deriving the Born Rule in EQM, our basic attitude is: great! I’m not sure how much is gained by looking for the “best” derivation. There are plenty of challenging questions about EQM beyond deriving the Born Rule, I’d rather think about those.

Like or Dislike: 1 0

Hi Sean,

I don’t find much to outright disagree with there, but I guess I just prefer a different approach, going right back to the foundations of probability theory, rather than attempting to provide the “Born rule proof” that they are always asking for. But you are also right that the best approach is many approaches. And you are also right that the objectors themselves usually do ask for “physical relevance”, and there seems to be general agreement that the objection isn’t about the GNC. But I see this as a kind of trap. And by responding “in their own terms”, we fall right into it. No such proof will ever make them happy, since their objection is about “physical relevance”, which is purely intuitive and ill-defined, so the only way to respond “on their terms” is to make some assumptions about what is “physically relevant” that are as fuzzy as their request was to begin with. And since this is all fuzzy intuition, whatever principle you come up with, they will always be able to point out that you haven’t proved this principle, and since EQM claims to work from the formalism alone, you should be able to prove it.

This is the nature of the trap. There is something fishy going on here. They cannot have it both ways. We are either working purely formally, a la Everett’s postulates, or we are demanding physical relevance. You cannot demand both.

Assume, as you suggest, that the GNC is not fundamentally what is being objected to. This leaves us with something like the following.

Everettian: “Probability follows from the formalism alone.”

Objector: “No it doesn’t, because we could just count worlds.”

Everettian: “Not if GNC holds, and this is a very intuitive, reasonable, and clear assumption.”

Objector: “I agree it is acceptable so far as it goes. But it lacks physical relevance.”

Everettian: “So you agree that it is a reasonable assumption to make, in terms of acceptable measures on a vector space?”

Objector: “Yes, but I need physical relevance.”

Everettian: “But we are addressing what follows from the formalism alone — that is the whole idea here — so we are not concerned with physical relevance.”

Objector: “No, I need it.”

Everettian: “Ok, then, here is a version of GNC that has physical relevance. Look, I have applied the precise and analytically clear GNC to your fuzzy intuitive physical concepts and produced a version that, while lacking clarity and suffering from ambiguity, does indeed have physical relevance, and is still fairly simple.”

Objector: “But you haven’t

provedit. You said that EQM could deliver probabilities from the formalism alone — that is the whole idea here — so you need to prove it.”Everettian: “Then you object to the GNC? You want me to prove that?”

Objector: “No not at all, the GNC is fine. I just need you to add physical relevance.”

Everettian: “But you just said I had to prove it from the formalism alone!”

Objector: “Yes, but in a physically relevant way!”

Thus will they wiggle out of any argument you give them. Nothing short of a full explication of how conscious experience (and hence “physical intuition”) arises out of the interaction of subatomic particles will satisfy such a request — and I hope no one believes that that is a reasonable thing to ask of a physical theory.

In other words, either the real problem lies in the GNC after all, or they are baiting you. I really do feel they need to be challenged more to defend world-counting (or whatever alternative they feel exists). I mean, really defend it, in terms of foundational principles of probability theory, not with weak and fuzzy hand-waving, or pleas of “Why can’t I just count worlds?”.

My answer to “Why can’t I just count worlds?” is “Go ahead, count worlds, but give me an argument for why worlds are the things that should count.”

Insisting on counting worlds is a little bit like counting colours in the typical probability problems we all remember from school. Take a bag with 10 marbles, 7 red and 3 blue. Pick one out at random. What is the probability that it is red? According to the world counters — “Why can’t I just count colours?” — the answer is 1/2. But of course the answer is really 7/10. This is because we count marbles, not colours. We count marbles categorized into colours. Categories are in the numerator, countable primitive objects are in the denominator. Colours are categories, not objective things like marbles (well, assume for now marbles are objective things!). So we count marbles, straight-out in the denominator, and categorized in the numerator by whatever category pleases us.

Worlds are categories, not things. Amplitudes are the things. This approach makes perfect sense in terms of a very basic high school conception of probability. What is the argument for counting worlds as things, given wavefunction realism? There may be one. But the objector needs to provide a systematic answer to this question if we are to consider their “objection” to be clearly stated. And it seems to me that an argument of this nature will be an argument about the foundations of probability theory, and will have nothing whatever to do with quantum mechanics.
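The marble example can be spelled out in a few lines (my own toy illustration): probabilities come from counting the primitive objects, categorized, not from counting the categories.

```python
# A bag with 10 marbles: 7 red, 3 blue.
marbles = ["red"] * 7 + ["blue"] * 3

# Count marbles, categorized into colours: 7 red out of 10 marbles.
p_red_by_marbles = marbles.count("red") / len(marbles)  # 0.7

# The "colour counter's" answer: one colour out of two categories.
p_red_by_colours = 1 / len(set(marbles))  # 0.5
```

The analogy to world-counting is that worlds play the role of colours (categories), while amplitudes play the role of marbles (the counted things).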

Like or Dislike: 1 2

You didn’t address my question about possible convergence of different branches onto a common final state (e.g., in a “bouncing universe” scenario). I’d really be curious to hear your thoughts on that. My reason for asking, of course, is that if they can converge on a common final state then my understanding is we really have to say that both exist, in the same sense that we can’t say that the electron went through one slit or the other, but that both possibilities contributed. And that seems like a big problem for our intuitive sense that one possibility or another actually happened (which is not necessarily a problem for the physics, granted). But surely that is one motive for the non-Everett interpretations involving some sort of “wave function collapse” — to match our intuition that the outcome of the measurement happened and the other things didn’t happen, period. You can keep that intuition in the Everett interpretation at least for your own branch if the different possibilities never interfere in determining a common final state, but if they ever could converge on a common final state it means that intuition is just wrong. So I’d be very interested in your thoughts on whether the different branches can ever converge on the same final state. Thanks.

Like or Dislike: 1 0

There is one thing I never quite understood about branching. If you have a single wavefunction for the universe, then under whatever quantum theory completely describes all aspects of it, we should expect it to be a stationary state, with only the phase changing in time due to energy conservation. Any reduction of state along a branch would no longer be an eigenstate of the universal Hamiltonian. As a result, we would now have indeterminate total energy in that branch’s universe. I understand the ensemble of such branches would still resemble the original wavefunction, but every observer along a branch should be unable to determine the energy content of that universe. Determining the energy should in fact reverse every branching up until that point. How does this fit in with MWI?

Like or Dislike: 1 0

Daniel,

Any interpretation of QM worth discussing should allow for a general-relativistic and second-quantized formulation. This means that it should never rely on concepts such as time, energy, particle, Hamiltonian, etc. Even the concept of “Schrodinger’s equation” should only be interpreted loosely, as “some differential equation linear in the state vector”, i.e. without assuming anything more about its structure or variables.

All these quantities appear only in approximations, when (or rather if) certain initial/boundary conditions are met. Most of them cannot even be defined at the fundamental level of the theory. So when discussing the interpretations of QM properly, you should try to rephrase your questions such that you don’t use these notions — otherwise they will only confuse you.

HTH,

Marko

Like or Dislike: 2 0

Marko,

True, I’m considering a purely first quantization framework, which is not valid for a universal description. Nonetheless, ignore my use of energy to label eigenstates. The universal state description should still be a stationary state and if there is any state reduction along a branch, the reduction may not necessarily belong to the spectrum of the universe’s state operator. While a superposition of all branches would amount to such a valid state, each branch on its own would possibly encounter this problem. Another way to phrase it is that the branchings would define a basis that is not necessarily the pointer basis as defined by the universe’s (as in multiverse) state operator.

Unless the claim is that the multiverse is in some superposition of these pointer states, but intuitively I would think the actual state of the universe in a given world would determine this state from the beginning, that further interactions would not be needed to progressively branch the state into one deterministically evolving pointer state.

And again, I’m not trying to disprove MWI here via contrived counter example, I’m using this example to express a misunderstanding I’m having with MWI.

Like or Dislike: 0 0

Dear Sean, I understand that the issue is not the ‘squaring’ but the origin of the probability. However, I was wondering whether it is reasonable to think that the origin of the squaring, and hence the probability, is that squaring the wave function measures the area of some kind of spherical surface around the quantum system, where the radius of the sphere is the amplitude, something like a holographic sheet that surrounds the quantum system under study and encodes the quantum information. So for spin states, we would have a sphere surface split into two parts (halves if the amplitudes are 1/sqrt(2)), where each part corresponds to a state; the area of each part (normalised to the total surface area) is the probability of finding the system in that part. The (pi) would not be relevant here because normalisation deletes it and only the square of the amplitude stays. Does that make sense?

Like or Dislike: 1 0

As if God playing dice was not bad enough, now he must play dice and get all the numbers at each throw.

MWI and the multiverse don’t tell us much about reality, but a lot about the ego of physicists who feel the need to imagine themselves in an infinite number of copies.

Like or Dislike: 2 4

The fact that you have to take the squared modulus is due to the fact that we have arranged for the wave function to be linear. We didn’t need to do that. You can write QM in pure density matrix form and you will avoid the squared modulus. The pure density matrix is elegant in that it represents particles in terms of the relationship between their initial and final states. It is an operator. In this form, the mathematics of QM is done with only one sort of object, the operator. The usual QM needs two objects; for example, in finite problems, NxN operators and Nx1 states. In terms of QFT, pure density matrices correspond to propagators while states correspond to creation operators (kets) and annihilation operators (bras). One of the Landau and Lifshitz books mentions this correspondence in a footnote, IIRC, and Schwinger noted that you can define the creation and annihilation operators by taking one coordinate of a propagator (Green’s function) and setting it to a constant that he called the “fictitious vacuum”.

Like or Dislike: 2 2

I’ve read over the paper more carefully now and the only question I have is, how are you justified in taking reduced density matrices without use of the Born Rule? You form the reduced density matrix in your derivation of the Born rule but a reduced density matrix is a partial trace. Specifically, it’s a full trace over one component of a separable Hilbert space, which means you’ve assumed the probability measure over the Hilbert space denoting the environment. Because of the decoherence condition, the system’s state is maximally entangled with that of the environment’s, so it’s no surprise that when you apply the Born Rule on the environment’s Hilbert Space, you get the same probabilities for the corresponding states of the system.

Am I missing something here? Is the reduced density matrix a justified construct when one does not have the Born Rule?
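For concreteness, here is a bare-bones version of the construction under discussion for a two-qubit system (my own sketch; the helper name `reduced_density_matrix` is hypothetical). The operation itself uses only amplitudes, complex conjugation, and summation over the environment index; whether the diagonal entries may then be read as probabilities is the separate question at issue.

```python
def reduced_density_matrix(c):
    """c[s][e]: amplitudes of sum_{s,e} c[s][e] |s>|E_e>.
    Returns rho[s][s'] = sum_e c[s][e] * conj(c[s'][e]),
    i.e. the partial trace over the environment index e."""
    dim = len(c)
    return [[sum(c[s][e] * c[sp][e].conjugate() for e in range(len(c[s])))
             for sp in range(dim)]
            for s in range(dim)]


# Decohered ("branched") state 0.6|0>|E0> + 0.8|1>|E1>:
c = [[0.6, 0.0], [0.0, 0.8]]
rho = reduced_density_matrix(c)
# rho is diagonal, with 0.36 and 0.64 on the diagonal: the Born weights
# appear, but nothing in the construction called them probabilities.
```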

Like or Dislike: 2 0

Daniel– “Constructing the reduced density matrix” is a purely mathematical process, completely well-posed whether or not you have the Born Rule. The question is what meaning we should attach to it, which is what our argument addresses. In Appendix B we address this in gruesome detail, and in the shorter paper we do the whole thing without ever using density matrices, just to assuage skepticism.

Of course, you do need to use the inner product on Hilbert space to construct the reduced density matrix. But the inner product is part of the theory, and nobody is going to make sense of quantum mechanics without assuming it.

Like or Dislike: 1 1

Thank you for clarifying the goal of your derivation, Sean. The way I see it, deriving the Born rule in such a framework means showing it is the unique way to assign a probability measure to the Hilbert space of quantum mechanics. Perhaps it’s just lost on me, but I would think the Born rule is implicitly contained in the proposition that a Hilbert space describes quantum mechanical states (via Gleason’s theorem). It seems like this is an interpretation-independent result. I was viewing your method as an alternative to Gleason’s theorem here, but you’ve corrected me and said you’re looking to assign meaning to why Gleason’s theorem would hold, at least on an intuitive level.

I guess where I can’t take the leap is assuming that this way of counting the probability is the ontologically-entailed, unique method for correlating the elements of a Hilbert space with elements of reality even within an Everettian framework. I’m convinced it’s a consistent way of viewing quantum mechanics, but that’s about as far as I could go with this.

Like or Dislike: 0 0

Daniel– As I said in the post, Gleason’s theorem (or Zurek’s envariance, or various applications of “frequency operators”) provides a good way to argue that “if you want to assign probabilities to branches of the wave function, Born’s Rule is the way to do it.” We view our contribution as explaining why assigning probabilities to branches of the wave function is a sensible thing to do. Namely, in EQM you deterministically evolve from perfect certainty to self-locating uncertainty, and once there you have a unique way of assigning credences that satisfies the ESP.

Like or Dislike: 0 1

I think I’m seeing your point more clearly now. You agree that no matter the choice of heuristic tool you use to describe a Hilbert space, the Born rule is the probability measure that will result. So your goal is then to explain why Everettian quantum mechanics would have probabilities as elements of (or resulting from relationships among the elements of) its ontology. So I guess the question is if this is the unique framework for generating probability in a many worlds interpretation. Your assignments are unique within the framework, but I wonder if the framework itself is really the only way. An interesting read overall, thanks for engaging my curiosity.

Great, I think that puts us on the same page. We argue in the paper that Born probabilities follow uniquely from ESP, so if you believe ESP you don’t have much freedom. Of course you can choose not to accept ESP, in which case we can’t help you. We do argue against some naive versions of branch-counting, but in the end it’s a free country. (Personally, if I have a set of simple assumptions that make predictions, and those predictions fit the data, I’m ready to move on to other questions.)
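As a toy illustration (purely schematic, not from the paper), here is the numerical difference between Born weighting and naive branch counting for a state with unequal amplitudes:

```python
import numpy as np

def born_probabilities(amplitudes):
    """Born rule: the probability of each branch is |amplitude|^2."""
    weights = np.abs(np.asarray(amplitudes, dtype=complex)) ** 2
    return weights / weights.sum()  # normalize in case the state isn't

def naive_branch_count(amplitudes):
    """Naive branch counting: every branch counted as equally likely,
    regardless of its amplitude."""
    n = len(amplitudes)
    return np.full(n, 1.0 / n)

# Two-branch state sqrt(1/3)|up> + sqrt(2/3)|down>
amps = [np.sqrt(1/3), np.sqrt(2/3)]
print(born_probabilities(amps))   # ~ [0.333, 0.667]
print(naive_branch_count(amps))   # [0.5, 0.5]
```

The two prescriptions agree only when all amplitudes are equal; experiments with unequal amplitudes favor the Born weights.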

Out of curiosity, though, why do you posit that each possible outcome is equally real? In philosophy, possible worlds are used as a way to posit counterfactual situations with the aid of modal logic. Why do these possible worlds not suffice? Is it because it would require an additional explanation for why only one branch (or rather, one set of observers along that branch) is privileged to be “real”? One could avoid such a question by relinquishing the assumption that reality is deterministic and instead arguing that the determinism of quantum mechanics is the determinism of possibility, not necessity (in the modal-logic senses of possibility and necessity). Possibility is a weaker assumption than necessity, after all.

Our attitude is that we take the theory at face value. According to EQM, there is a wave function that evolves unitarily, and it branches over time. Deciding that certain branches aren’t real seems like extra work that we see no reason to do. (That’s why Ted Bunn suggests that we refer to alternatives to EQM as “disappearing worlds interpretations.”)

I can see why such a view is attractive, but I’d think that positing a statistical distribution as a real object belonging to the ontology of physics, rather than as a device for organizing measured outcomes, is the stronger assertion. It really depends on how you build up the quantum theory and describe it in the first place. I could take a unitarily evolving wave function and claim it represents a statistical distribution of possibilities.

In either case, the argument still has to be made for why Newton’s laws (in the form of the Galilei group) don’t act directly on the objects themselves, but on the distributions of measurements of those objects. It’s one thing to assert that the distribution is an ontological object; it’s another to argue for it or derive it. A case could still be made that Newton’s laws hold only on average for measurements, and that the distributions obeying such mechanics are a result of this constraint.

Interesting stuff to think about, I think I’m still an agnostic for now with regards to this issue.

I’ve been musing about the status of the wave function for the spin before measurement occurs in EQM. The premeasurement situation with the apparatus ready (environment 0) is the result of many past branchings. Therefore there are multiple different apparata in different “worlds.” Which apparatus (which world) will the wave function for the spin interact with? A reasonable answer is that it interacts only with apparata whose wave function evolved from the same environment (environment -1) that the spin state evolved from. But how are these interactions selected from all the other interactions that could occur?

I see a couple of possible resolutions.

1. The premeasurement wave function for the spin is not isolated. In fact it is entangled when it is created, with environment -1. It can’t become entangled with any other environment because of this preexisting linkage to the environment, so it won’t be measured by any apparatus except those that evolved from environment -1.

A criticism of approach (1) is that by the time the wave function for the spin reaches the apparatus, the apparatus and environment -1 have evolved to a multitude of new apparata and environments, none of which exactly matches environment -1 any longer. How could the measurement occur if we require preexisting coherence for there to be entanglement between apparatus and spin?

A potential answer to the criticism is that the premeasurement wave function for the spin is not entangled with all details of environment -1, only with its pointer states. As long as the pointer states of the apparatus in environment 0 remain coherent with those of environment -1, the measurement can occur.

2. In fact the spin function does interact with apparata in many different worlds. From the perspective of an apparatus, measurements occur continuously as it detects spin functions coming from different worlds (emitted by environment -1 states other than the one the apparatus evolved from). Is this consistent with observation? We do see quantum foam in experiments. Perhaps, with proper accounting for conservation of energy, we could explain the virtual particles of quantum foam as a consequence of EQM.

I’m interested in your thoughts on the above. A one line answer “see chapter X of my book” would be fine if you’ve already treated this question somewhere. Thanks.

To toot my own horn a bit, as well as some other scholars':

Eric Winsberg’s objection sounds similar to one raised independently by W. Zurek, A. Kent, and me, e.g. in

http://www.sciencedirect.com/science/article/pii/S1355219806000694

Sean C, I’ve really enjoyed following your work with Chip on this question. It’s a great paper you two have put together.