The question of the day seems to be, “Is the wave function real/physical, or is it merely a way to calculate probabilities?” This issue plays a big role in Tom Banks’s guest post (he’s on the “useful but not real” side), and there is an interesting new paper by Pusey, Barrett, and Rudolph that claims to demonstrate that you *can’t* simply treat the quantum state as a probability calculator. I haven’t gone through the paper yet, but it’s getting positive reviews. I’m a “realist” myself, as I think the best definition of “real” is “plays a crucial role in a successful model of reality,” and the quantum wave function certainly qualifies.

To help understand the lay of the land, we’re very happy to host this guest post by David Wallace, a philosopher of science at Oxford. David has been one of the leaders in trying to make sense of the many-worlds interpretation of quantum mechanics, in particular the knotty problem of how to get the Born rule (“the wave function squared is the probability”) out of this formalism. He was also a participant at our recent time conference, and the co-star of one of the videos I posted. He’s a very clear writer, and I think interested parties will get a lot out of reading this.

———————————-

**Why the quantum state isn’t (straightforwardly) probabilistic**

In quantum mechanics, we routinely talk about so-called “superposition states” – both at the microscopic level (“the state of the electron is a superposition of spin-up and spin-down”) and, at least in foundations of physics, at the macroscopic level (“the state of Schrodinger’s cat is a superposition of alive and dead”). Rather a large fraction of the “problem of measurement” is the problem of making sense of these superposition states, and there are basically two views. On the first (“state as physical”), the state of a physical system tells us what that system is actually, physically, like, and from that point of view, Schrodinger’s cat is seriously weird. What does it even mean to say that the cat is both alive and dead? And, if cats can be alive and dead at the same time, how come when we look at them we only see definitely-alive cats or definitely-dead cats? We can try to answer the second question by invoking some mysterious new dynamical process – a “collapse of the wave function” whereby the act of looking at half-alive, half-dead cats magically causes them to jump into alive-cat or dead-cat states – but a physical process which depends for its action on “observations”, “measurements”, even “consciousness”, doesn’t seem scientifically reputable. So people who accept the “state-as-physical” view are generally led either to try to make sense of quantum theory without collapses (that leads you to something like Everett’s many-worlds theory), or to modify or augment quantum theory so as to replace the collapse with something scientifically less problematic.

On the second view, (“state as probability”), Schrodinger’s cat is totally unmysterious. When we say “the state of the cat is half alive, half dead”, on this view we just mean “it has a 50% probability of being alive and a 50% probability of being dead”. And the so-called collapse of the wavefunction just corresponds to us looking and finding out which it is. From this point of view, to say that the cat is in a superposition of alive and dead is no more mysterious than to say that Sean is 50% likely to be in his office and 50% likely to be at a conference.

Now, to be sure, probability is a bit philosophically mysterious. It’s not uncontroversial what it means to say that something is 50% likely to be the case. But we have a number of ways of making sense of it, and for all of them, the cat stays unmysterious. For instance, perhaps we mean that if we run the experiment many times (good luck getting that one past PETA), we’ll find that half the cats live, and half of them die. (This is the Frequentist view.) Or perhaps we mean that we, personally, know that the cat is alive or dead but we don’t know which, and the 50% is a way of quantifying our lack of knowledge. (This is the Bayesian view.) But on either view, the weirdness of the cat still goes away.

So, it’s awfully tempting to say that we should just adopt the “state-as-probability” view, and thus get rid of the quantum weirdness. But this doesn’t work, for just as the “state-as-physical” view struggles to make sense of **macro**scopic superpositions, so the “state-as-probability” view founders on **micro**scopic superpositions.

Consider, for instance, a very simple interference experiment. We split a laser beam into two beams (Beam 1 and Beam 2, say) with a half-silvered mirror. We bring the beams back together at another such mirror and allow them to interfere. The resultant light ends up being split between (say) Output Path A and Output Path B, and we see how much light ends up at each. It’s well known that we can tune the two beams to get any result we like – all the light at A, all of it at B, or anything in between. It’s also well known that if we block one of the beams, we always get the same result – half the light at A, half the light at B. And finally, it’s well known that these results persist even if we turn the laser so far down that only one photon passes through at a time.

According to quantum mechanics, we should represent the state of each photon, as it passes through the system, as a superposition of “photon in Beam 1” and “photon in Beam 2”. According to the “state as physical” view, this is just a strange kind of non-local state for the photon to be in. But on the “state as probability” view, it seems to be shorthand for “the photon is either in beam 1 or beam 2, with equal probability of each”. And that can’t be correct. For if the photon is in beam 1 (and so, according to quantum physics, described by a non-superposition state, or at least not by a superposition of beam states) we know we get result A half the time, result B half the time. And if the photon is in beam 2, we **also** know that we get result A half the time, result B half the time. So **whichever** beam it’s in, we should get result A half the time and result B half the time. And of course, we don’t. So, just by elementary reasoning – I haven’t even had to talk about probabilities – we seem to rule out the “state-as-probability” view.
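The blocked-beam argument can be checked with a few lines of numpy. This is a minimal sketch of my own (not from the post), assuming the symmetric beam-splitter convention in which each reflection picks up a factor of i: the superposition sends every photon to a single output, while the either-Beam-1-or-Beam-2 mixture predicts 50/50 no matter which beam the photon is “really” in.

```python
import numpy as np

# 50/50 beam splitter, symmetric convention: reflections pick up a factor of i.
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def probs(state):
    # Born-rule probabilities from amplitudes
    return np.abs(state) ** 2

photon_in = np.array([1, 0])      # photon enters one input port
after_first = B @ photon_in       # superposition of Beam 1 and Beam 2
after_second = B @ after_first    # beams recombined at the second mirror

# "State-as-probability" reading: the photon is in Beam 1 OR Beam 2, 50/50 each.
beam1 = np.array([1, 0])
beam2 = np.array([0, 1])
mixture = 0.5 * probs(B @ beam1) + 0.5 * probs(B @ beam2)

print(probs(after_second))  # superposition: every photon at one output
print(mixture)              # either/or mixture: 50/50 at both outputs
```

The two printed distributions disagree, which is exactly the “elementary reasoning” of the paragraph above: whichever single beam the photon is in, the mixture gives 50/50, yet the superposition does not.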

Indeed, we seem to be able to see, pretty directly, that *something* goes down each beam. If I insert an appropriate phase factor into one of the beams – *either* one of the beams – I can change things from “every photon ends up at A” to “every photon ends up at B”. In other words, things happening to either beam affect physical outcomes. It’s hard at best to see how to make sense of this unless both beams are being probed by physical “stuff” on *every* run of the experiment. That seems pretty definitively to support the idea that the superposition is somehow physical.
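The phase-factor point can be made quantitative in the same toy setup. Again a hedged sketch of my own, assuming a symmetric 50/50 beam splitter (reflections pick up a factor of i): a phase shifter placed in just one of the two beams switches every photon from one output to the other.

```python
import numpy as np

B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50/50 beam splitter

def output_probs(phase):
    # Phase shifter acting on Beam 1 only, between the two mirrors.
    P = np.diag([np.exp(1j * phase), 1])
    state = B @ P @ B @ np.array([1, 0])
    return np.abs(state) ** 2

print(output_probs(0))       # all photons at one output
print(output_probs(np.pi))   # phase in ONE beam: every photon switches output
```

A phase applied to either beam alone changes which detector fires, so something physical must be probing both beams on every run.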

There’s an interesting way of getting around the problem. We could just say that my “elementary reasoning” doesn’t actually apply to quantum theory – it’s a holdover of old, bad, classical ways of thinking about the world. We might, for instance, say that the kind of either-this-thing-happens-or-that-thing-does reasoning I was using above isn’t applicable to quantum systems. (Tom Banks, in his post, says pretty much exactly this.)

There are various ways of saying what’s problematic with this, but here’s a simple one. To make this kind of claim is to say that the “probabilities” of quantum theory don’t obey all of the rules of probability. But in that case, what makes us think that they **are** probabilities? They can’t be relative frequencies, for instance: it can’t be that 50% of the photons go down the left branch and 50% go down the right branch. Nor can they quantify our ignorance of which branch it goes down – because we don’t need to know which branch it goes down to know what it will do next. So to call the numbers in the superposition “probabilities” is question-begging. Better to give them their own name, and fortunately, quantum mechanics has already given us a name: *amplitudes*.

But once we make this move, we’ve lost everything distinctive about the “state-as-probability” view. *Everyone* agrees that according to quantum theory, the photon has some amplitude of being in beam A and some amplitude of being in beam B (and, indeed, that the cat has some amplitude of being alive and some amplitude of being dead); the question is, what does that mean? The “state-as-probability” view was supposed to answer, simply: it means that we don’t know everything about the photon’s (or the cat’s) state; but that now seems to have been lost. And the earlier argument that *something* goes down both beams remains unscathed.

Now, I’ve considered only the most straightforward kind of state-as-probability view you can think of – a view which I think is pretty decisively refuted by the facts. It’s possible to imagine subtler probabilistic theories – maybe the quantum state isn’t about the probabilities of each term in the superposition, but it’s still about the probabilities of *something*. But people’s expectations have generally been that the ubiquity of interference effects makes that hard to sustain, and a succession of mathematical results – from classic results like the Bell-Kochen-Specker theorem, to cutting-edge results like the recent theorem by Pusey, Barrett and Rudolph – have supported that expectation.

In fact, only one currently-discussed state-as-probability theory seems even half-way viable: the probabilities aren’t the probability of anything objective, they’re just the probabilities of measurement outcomes. Quantum theory, in other words, isn’t a theory that tells us about the world: it’s just a tool to predict the results of experiment. Views like this – which philosophers call *instrumentalist* – are often adopted as fall-back positions by physicists defending state-as-probability takes on quantum mechanics: Tom Banks, for instance, does exactly this in the last paragraph of his blog entry.

There’s nothing particularly quantum-mechanical about instrumentalism. It has a long and rather sorry philosophical history: most contemporary philosophers of science regard it as fairly conclusively refuted. But I think it’s easier to see what’s wrong with it just by noticing that real science just isn’t like this. According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils, astrophysicists talk about stars so they can understand photoplates, virologists talk about viruses so they can understand NMR instruments, and particle physicists talk about the Higgs boson so they can understand the LHC. In each case, it’s quite clear that instrumentalism is the wrong way around. Science is not “about” experiments; science is about the world, and experiments are part of its toolkit.

Chris, are you aware of the critiques of MWI etc. such as I gave above? It disappoints me that you started off IMHO on the right track in your first paragraph, then accepted the decoherence explanation of “why we don’t see” the different possibilities together. Well, you already said it: decoherence just produces a superposed cat you can’t do interference experiments with. So what? I should still “find” both, just not the sort of pattern that constitutes evident “interference.”

Amplitudes still combine and add up, they just wouldn’t make certain pretty distributions that prove interference in the particular case. Without some “intervention” the waves just “are” and no more diverted into statistics or isolation of components than in classical EM. And if they “aren’t waves” – then what are they in transit, making the interference?

Either we can literally detect the amplitudes themselves, in which case we just find messier distributions after loss of interference, or: there is some breakdown by something (I think, maybe atoms “grab” energy or particles just do what they do in a probabilistic manner, once interactions get going) to get the “statistics.” Remember, the “statistics” have to be explained, they can’t go in by circular argument/definition as per the density matrix and the confused idea @15 that things are just fine as is.

What would such a cat look like? You’re thinking of, like, two partially transparent cats overlaid?

What actually happens (in this picture) is that you end up with two states of your brain, one that’s seen a live cat and one dead, superposed.

Perhaps the mystery is supposed to be why you can’t feel that. But, on the basis that consciousness isn’t magic (no matter how much it currently seems like it), and is somehow constituted out of physical stuff, don’t you just expect to have two different thought processes/feelings overlapped but non-interacting? There’s no way to prove to yourself that you’re thinking two contradictory things, because you can’t do any interference experiment.

Criticism of conservation laws: there’s still only one wavefunction, still correctly normalized. As more and more things happen, the weight of any one particular combination gets diluted further and further, but it’s not like this actually causes any problems.

Besides, what if there was violation? We just made the conservation laws up, they don’t have to be fundamental.

If the concept of a holographic universe actually describes the real world, then all matter would exist as two-dimensional spread-out overlapping and interfering wave patterns. Matter would appear to be three-dimensional localized particles only when we observe it. Interesting how much this sounds like quantum behavior.

Is there a higher dimensional analog? Could reality exist as a three-spatial-dimensional hologram, but we perceive it as four dimensional spacetime? Perhaps what we perceive as time is collapse of the multi-dimensional wave function.

Would the problem of non-locality and spooky action at a distance also go away if matter were a spread-out wave pattern? The smaller a particle, the larger and more spread out its wave pattern would be.

Of course I could be completely wrong.

Tom Banks,

I’m disappointed to read that you think that “elementary reasoning” is a flawed strategy, to be replaced by mathematics. As you correctly write, mathematics is just a tool. For mathematics to build a world-picture like the one described by quantum theory, you need more than just the ability to solve equations. You need to know which equations, why, and how they relate to the world you see. For that, you really have nothing BUT “elementary reasoning” to go on.

If, however, you mean common sense or intuition, I completely agree. There is very little intuitive about quantum mechanics. But that is exactly why there is simply no way that we could have developed it without some rather uncommon reasoning about elementary notions.

The recent paper by Pusey et al is interesting, but I am afraid it is based on one more assumption than just realism. I don’t know if philosophers have a name for it, but the assumption is: states exist at any given time, and for any given time there is a complete description of the state at that instant determining all future outcomes. Let me try to spell it out a bit. You specify any time of your choice, t. Then the probabilities of any outcome to the future of t can be determined based solely upon the complete set of information describing the state at t, and the probabilities are uniquely determined.

There are retrocausal interpretations where this assumption breaks down. Some of them can also be realist. Any outcome is determined by an alternating web of forward causality and retrocausality, and history has to be considered holistically in time. In such interpretations, the wave function need not exist.

Chris #25– That’s a common but erroneous assumption based on classical intuition. Sure, when a measurement apparatus (and unavoidably the larger environment) comes into contact with the cat, it all “joins” the superposition.

But the density matrix for the cat is now diagonal in its own classical basis, with eigenvectors corresponding to alive or dead. The cat itself can be said to have one of those two states, with probabilities encoded as the eigenvalues of the density matrix, and not both.

And the density matrix for the measurement apparatus is likewise diagonal in its classical basis, with eigenvectors corresponding to having observed a live cat or a dead cat, and magically with the same probability eigenvalues as the cat.

And the density matrix for the local environment likewise diagonal in its classical basis, etc.
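The matching eigenvalues Matt describes can be seen in the simplest possible toy model. This is my own hedged sketch, with a two-level “cat” and a two-level “apparatus” standing in for the real systems: the joint state stays pure, but each subsystem’s reduced density matrix, obtained by a partial trace, is diagonal with the same 50/50 eigenvalues.

```python
import numpy as np

# Joint cat+apparatus state (|alive, saw-alive> + |dead, saw-dead>)/sqrt(2),
# the kind of entangled state a unitary measurement interaction produces.
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho = np.outer(psi, psi.conj())        # pure joint density matrix

rho4 = rho.reshape(2, 2, 2, 2)         # axes: [cat_row, app_row, cat_col, app_col]
rho_cat = np.einsum('ikjk->ij', rho4)  # trace out the apparatus
rho_app = np.einsum('kikj->ij', rho4)  # trace out the cat

print(rho_cat)              # diag(1/2, 1/2): classical-looking mixture for the cat
print(rho_app)              # same 50/50 eigenvalues for the apparatus
print(np.trace(rho @ rho))  # purity of the joint state: still 1 (pure)
```

Each part has a perfectly classical-looking probability distribution, while the whole remains in a pure superposition, which is the part/whole distinction the comment is driving at.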

So who’s not diagonal in the classical basis? A very large “bubble” expanding outward at essentially the speed of light due to thermally (and thermodynamically-irreversibly) radiated photons.

At any given instant, we can define a bubble system that has not yet decohered to a classical diagonalizing basis for its own density matrix. The state of that instantaneous bubble system is simply its own state vector, full stop. What’s the meaning of that state vector? I don’t know. It simply doesn’t have certain definite classical properties yet, until it decoheres itself a fraction of an instant later. But to say that it describes multiple well-defined classical “many worlds” is a huge and unfounded additional assumption.

But all of this is beside the point: The cat is not the bubble! Neither is the measurement device. The cat is the cat, and the measurement device is the measurement device, and the local environment is the local environment, and when you ask what’s the state of these systems, their density matrices give you classical probability distributions over classical states.

The confusion arises when we forget which system we’re talking about. In quantum mechanics, you have to be specific. Just as space and time go from being absolute and universally-agreed-upon to relative and observer-dependent when we replace Newtonian physics with relativity, so too does ontology go from being globally-defined to being locally-defined when we go from classical mechanics to quantum mechanics.

If you ask for the state of some other system, namely, the bubble, then you may not find that it likewise looks classical yet. But that’s not the system we’re talking about — we’re talking about the cat, or the measurement device. The bubble is the wrong system.

When I referred to naive classical intuition, I was speaking about precisely this point. Classically, a system is just the sum of its individual parts. But in quantum mechanics, what’s true for the parts — namely, that they each have well-defined classical probability distributions — need not be true for the whole. To naively assume otherwise is to commit the classic fallacy of composition/division.

That fallacious assumption is based on classical logic (itself based on what’s useful for our Darwinian-evolved human brains to understand), and there’s no reason whatsoever why it must be true in a quantum world. And giving up that often-unexamined assumption gives rise to no contradictions with observation or experiment. It’s just another piece of classical intuition that doesn’t reflect the fundamental nature of reality.

When you give it up, the troubles go away, and you don’t need many-worlds. So that’s why I’m confused that people are still arguing over all of this.

“It has been asserted that metaphysical speculation is a thing of the past, and that physical science has extirpated it. The discussion of the categories of existence, however, does not appear to be in danger of coming to an end in our time, and the exercise of speculation continues as fascinating to every fresh mind as it was in the days of Thales.”

James Clerk Maxwell 1871

@Matt: where on earth are you getting probabilities from? The cat+universe system _is_ still in a superposition of two states, the linearity of the equations guarantees it.

I’m not an expert on these fancy density matrix thingumies, but it looks like you put in probabilities by hand at the start, to account for initial conditions that are only uncertain in a classical sense. The only true probabilities you can get out at the end must derive from those, the rest is just ratios of squared amplitudes, the interpretation of which is the whole point at issue.

Chris #33– Where do we get probabilities from in classical mechanics? In classical mechanics, the basic objects are classical probability distributions over classical states, the latter of which we can regard as elements of a classical configuration space. That’s the axiomatic structure of classical mechanics, and everything else follows from that plus equations of motion.

In quantum mechanics, classical probability distributions are replaced by density matrices, whose eigenvalues are now the probabilities over quantum states, the latter of which are elements of a complex vector space. That’s the axiomatic assumption of quantum mechanics, and everything else follows from that plus equations of motion.
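As a concrete illustration of that axiomatic parallel (my example, not Matt’s): any valid density matrix is positive semidefinite with unit trace, so its eigenvalues behave exactly like a classical probability distribution.

```python
import numpy as np

# An arbitrary (but valid) density matrix for a single qubit:
# Hermitian, positive semidefinite, trace 1.
rho = np.array([[0.7,        0.2 - 0.1j],
                [0.2 + 0.1j, 0.3       ]])

evals = np.linalg.eigvalsh(rho)    # eigenvalues of a Hermitian matrix
print(evals, evals.sum())          # nonnegative numbers summing to 1
```

The eigenvalues (here 0.2 and 0.8) are nonnegative and sum to one, which is why they can be read as the probabilities over the eigenstates.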

Now, turning to your next point, the cat+universe system is difficult to define in general — we don’t even know if there’s a well-defined, closed “universe” system in the first place. (Eternal inflation in particular makes things tricky, although Tom Banks has views on that.) But at any given instant, there is indeed a huge “bubble” system containing the cat that still has a definite state vector with all the superpositions.

But who cares? That’s not the cat. That’s some other system, and its state is just what it is, some weird, nonclassical-looking state — with no reason to assume its elements in a classical basis are to be regarded as classical, reified “many worlds” — at least briefly until the bubble system, too, decoheres due to irreversible thermal radiation into the even-larger environment.

The idea that the cat’s status and the bubble’s status have to have some sort of naive classical agreement is just the classical fallacy of division/composition I mentioned before. The axiomatic structure of quantum mechanics I mentioned before — a structure with a natural analogy to the axiomatic structure of classical mechanics — implies that the cat has a classical probability distribution over its two classical alive and dead states. And there are absolutely no contradictions with experiment or observation in taking that view, so why not?

The cat is not the universe, my friend.

I think I’d dispute your characterization of classical mechanics. Usually one says that the classical state follows a single sharp path through phase space. Uncertainties in the initial conditions and the introduction of probability distributions is something you can layer on top if you enjoy complications.

I think I might understand your ontology better now (though I still don’t think I like it). It seems like it has to be fundamentally based on baking probability distributions in at the start. Layering them on top, like I would have described classical mechanics, isn’t going to work.

We can solve the lack of definedness of the cat+universe system easily enough by climbing inside another, bigger, box before we open the cat’s one. Any interpretation that requires that the universe is finite/infinite, open/closed, eternally inflating/heading for a big crunch definitely needs to mention that up-front.

All this talk of bubbles, and my box-within-a-box sounds like Wigner’s friend. How would you describe that experiment? Does the description depend on who you ask and when?

Chris #35– I think you do indeed understand my ontology better now, and yes, you don’t have to like it if you don’t want to.

And I agree that it is fundamentally based on baking probability distributions in at the start — by design. After all, probability is going to be part of quantum mechanics whether we like it or not, unlike in classical mechanics (unless we work with chaotic classical systems, or classical systems with Langevin dynamics). And deriving probability without putting it in at the start is staggeringly difficult, in large part because there presently exists no agreed-upon, rigorous definition for what probability means.

Indeed, many disputes over the meaning of quantum mechanics are actually disputes in disguise over the meaning of probability.

So rather than demanding that quantum mechanics somehow solve all our debates and problems with defining the meaning of probability by generating a notion of probability from scratch, it’s wiser to bake probability in from the start. And, amazingly, it works!

The present interpretation doesn’t require the universe to be open or eternally inflating. The point is that it is agnostic about the global nature of the universe — it doesn’t depend on the universe needing to be a closed, well-defined system.

Wigner’s friend is a great way to understand the ontology of this interpretation. Suppose Wigner is really far away from Earth — Alpha Centauri, say, which is about four light years away. His friend on Earth does the experiment on the cat. Almost instantaneously the whole Earth has decohered to the definite alive or dead result — the speed of light is really fast, after all, and it’s essentially impossible to prevent thermally-radiated photons from triggering decoherence.

But poor Wigner, meanwhile, will have to wait at least four years before he can join the decoherence party. If he were to compute the cat’s density matrix right away, or his friend’s density matrix, or even Earth’s density matrix, then he’d find a classical mixed density matrix over classical alive-or-dead states. (We’re assuming the friend established in advance when the experiment would take place, since Wigner is four light-years away and can’t see the experiment happen in real time.)

But if Wigner computes the density matrix of the giant instantaneous bubble system whose radius is larger than the number of light-seconds since the cat decohered, then that bubble system is still a pure state as far as the cat experiment is concerned. But so what? The bubble is not the cat. And either way, the bubble is still outside Wigner’s light cone, so it makes no observable difference what its state is. The moment the bubble is large enough to enter his own light cone (after four years have passed), he decoheres with it.

Is it a problem that before this moment, some big bubble system outside Wigner’s light cone is still not in a classical state? Why should we care? It’s only troubling if we insist on a classical notion of how systems and their subsystems must have a classically-consistent ontology.

The whole time Wigner can assign a definite classical ontology to the cat and even the Earth, just not to this large bubble system, at least until it reaches him. So, in that sense, the description of the cat doesn’t depend on who you ask and when.

This post seems to share much of the common confusion about QM, and it all starts with one bad assumption: “the wave function collapses, and as a consequence we see classical macro states. Therefore we need to explain what “collapse” means, and what the wave function means before and after.”

I’m not a practicing physicist, but as far as I can tell this whole line of reasoning is nonsense, because it falls at the first hurdle. Wave functions do not collapse. Ever. There’s nothing in the equation that corresponds to “collapse”, which is why we run into trouble trying to find a “measurement” that causes a “collapse”: none of these concepts are anywhere defined!

So the question arises: why do we always find classical macro states rather than dead/alive cats? The answer is, we don’t. This question is itself founded in a bad assumption that follows from the “collapse” error. That error is the assumption that if the initial system, a decaying nucleus, is in a superposition of very distant states (Decayed or Not Decayed), then as that superposition propagates to larger and larger systems, those systems are also in equally distant states until you get to a dead/alive cat. At this point you need to invent all the philosophical baggage of measurement, collapse, probability, etc.

What really [for some value of “real”] happens is this: You start with a system of one particle whose states are widely separated (decayed or not, left slit or right slit, …). As that particle interacts with more particles and their states become entangled, you get a system that is still a superposition of states, but less distant — the all-or-nothing of the initial state does not “infect” the larger system. That system in turn entangles with more particles, and so on until you get to something we call macroscopic, such as a cat. The cat is still in a superposition of states, but we don’t see that because it is in many states, all of which are so similar it would take many times the lifetime of the universe to distinguish them. And of course our brains are in the same situation since they are within the systems they observe.

Incidentally, I *think* this is a lay and informal explanation (or “story”, to borrow Tom’s term) of what Tom and Chris are saying more rigorously… but I hesitate to impute my words to their opinions, so if the above is wrong, it’s my fault, not theirs!

Iodine molecules (I2) show interference in a “two-slit” experiment. That is a rather large mass (molecular mass about 254) about which to not be able to say it is either here or there.

@Carl: but all the equations are linear. Which means the cat you get from a superposed nucleus is exactly the cat you get from a decaying nucleus (dead) plus the cat you get from a non-decaying nucleus (alive) in some ratio (and I guess probably with a phase factor, whatever that means at macroscopic scale).

Nonlinear equations can have the states get closer as you go up in scale, or appear to pick (mostly) one. Linear equations can’t do that, they maintain the superposition forever. Decoherence lets you prevent them ever interfering again, but the superposition is still there.
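Chris’s linearity point is just the statement that U(aψ₁ + bψ₂) = aUψ₁ + bUψ₂ for any unitary U. A toy check, with a random unitary standing in for the real dynamics and two basis vectors standing in for the macrostates (my illustration, not Chris’s):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary, standing in for any closed-system time evolution.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

psi_dead, psi_alive = np.eye(4)[0], np.eye(4)[1]   # stand-in macrostates
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)             # superposition amplitudes

lhs = U @ (a * psi_dead + b * psi_alive)           # evolve the superposition
rhs = a * (U @ psi_dead) + b * (U @ psi_alive)     # superpose the evolutions

print(np.allclose(lhs, rhs))   # linearity: the superposition is preserved
```

Evolving the superposition and superposing the evolutions give identical states, so no linear evolution can ever turn a superposition into just one of its branches.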

In defending many-worlds I’m accepting the reality of that superposition. (While secretly hoping we can instead figure out how to smuggle in a non-linear component that doesn’t damage anything we know, but achieves the right appearance of collapse).

I really don’t know what Tom is arguing. If it’s the same as Matt, then I think it starts from being much less certain that the initial superposed nucleus state “really exists”. After all, if we were to go and look at it it would be one or the other. And we know how to calculate the chances, in an admittedly slightly counterintuitive way, so just define that to be how the world works, and stop worrying about it. (Apologies for the likely gross oversimplification.)

Prof. Banks seems to be talking about retrocausality with a straight face. Doesn’t that tell you something about the real complexities of his interpretation? Could they possibly come from trying to insist that many worlds is really different from de Broglie/Bohm or from Copenhagen? It seems to me that if the mathematics gives the same results then the theories are identical, and distinguishing Everett and Bohm is as arbitrary as distinguishing Heisenberg, Schroedinger and Feynman.

Prof. Banks seems to have some sort of fetish about the impossibility of the human mind to grasp the subtleties of true Reality. It seems his commitment to instrumentalism comes from the belief that the true Reality is forever hidden from apprehension of the intellect. Isn’t this good old Kantianism? It seems to me sufficient to define the Ding by what it does in its interactions and by its development in a changing system, and let the an sich and für sich fend for themselves. This may be quasi-Hegelian in approach, but reviving Kantianism seems to have a great deal to do with philosophical prejudices rather than science. Isn’t the more novel approach more likely to be fruitful?

If an electron is fired at a slit, there is a minuscule but calculable probability that a virtual electron will take a different path. Since all electrons are identical, there is no distinguishing this virtual electron from “the” electron that was fired. The virtual electron can go through one slit and another indistinguishable virtual electron goes through another slit. Except of course there are many, many virtual electrons which will for minuscule but calculable moments affect each other by their electric charges, i.e., interfere. After a period of time, the sampling of the wave of virtual electrons by a fluorescent screen reveals the pattern. The real question is why any virtual electron’s interaction with the screen must take the form of a particle interaction rather than a wave. This is because of conservation of energy and angular momentum, I should think. Prof. Banks can say that the wave of virtual electrons is not real, more or less by definition, if he wishes, as there is no acorn-like real electron. But the whole wave of virtual electrons seems to be thoroughly described by the wave function.

Declaring the law of the excluded middle to be invalid doesn’t seem to be classical logic at all. Even ignoring the questions of philosophical precedent, it seems that if you posit static metaphysical entities, then the law of the excluded middle is prerequisite for any rationally intelligible universe, or for discourse about any conceivable universe. Aren’t we talking dialectics here? Except without any nasty materialism?

Chris #39 — I very much appreciate your gracious apology for oversimplifying.

I’m certainly not saying that I doubt the “initial superposed nucleus state ‘really exists’.” The nucleus’s initial quantum state really exists, and could be described in the decayed/undecayed basis as a superposition of decayed and undecayed states. That doesn’t in any way conflict with what I’ve been saying. When the nucleus decays and starts interacting with the cat, and then eventually the measurement device and the rest of the environment, then the nucleus’s density matrix evolves from a trivial pure state (the linear superposition of decayed and undecayed) to a nontrivial mixed density matrix with 50/50 probability eigenvalues over the decayed state and the undecayed state.

If you want to call that “collapse”, then fine: The state of the nucleus did indeed change — it was initially in a pure-state superposition, and its final state is one of two definite states with classical probabilities 50/50.

Indeed, if you consider the density matrix of the nucleus from the initial time to the final time, and smoothly track how it evolves in time, you can see how it smoothly evolves from the original pure-state superposition to the mixture of two definite decayed and undecayed states — the “collapse” isn’t instantaneous, and you can even predict how long the collapse should take. (You’ll see that it’s very fast — indeed, exponentially fast in the number of degrees of freedom — but, again, not instantaneous.)
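To make that smooth evolution concrete, here is a toy sketch (my own illustration, with a made-up decoherence rate, not a model of any real nucleus): the off-diagonal elements of a two-level density matrix decay exponentially, and the eigenvalues slide smoothly from (1, 0) to (1/2, 1/2).

```python
import numpy as np

# Toy model (illustrative only, made-up rate gamma): decoherence modeled
# as exponential decay of the off-diagonal terms of a two-level density
# matrix, written in the decayed/undecayed basis.
gamma = 5.0  # decoherence rate, arbitrary units

def rho(t):
    """Density matrix at time t; starts as the pure superposition
    (|decayed> + |undecayed>)/sqrt(2)."""
    off = 0.5 * np.exp(-gamma * t)   # coherences decay exponentially
    return np.array([[0.5, off],
                     [off, 0.5]])

for t in [0.0, 0.1, 0.5, 2.0]:
    print(t, np.linalg.eigvalsh(rho(t)))
# The eigenvalues slide smoothly from (0, 1) -- a pure state -- toward
# (1/2, 1/2) -- a 50/50 mixture -- without ever jumping.
```

The “collapse” here is just the continuous, exponentially fast spreading of weight between the two eigenvalues.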

How did the nucleus go from pure to mixed under unitary time evolution? Well, it wasn’t unitary time evolution, because the system is open — an open system’s density matrix evolves according to a non-unitary master equation known as the Lindblad equation (http://en.wikipedia.org/wiki/Lindblad_equation). Systems that are measured are not closed, so there’s your non-unitarity.

And if you stop and think about it, you realize that no systems ever really evolve unitarily at all, because there are no perfectly closed systems in nature — certainly no macroscopic systems, except maybe the whole universe (if it’s both well-defined and closed, which are both far from obvious). All systems really evolve according to a non-unitary Lindblad equation, with the unitary Schrodinger equation just a useful approximation when the system can be treated as closed.
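For concreteness, here is a minimal sketch of that kind of open-system evolution (my own toy example, with arbitrary rate and step size): a crude Euler integration of the pure-dephasing Lindblad equation for a single qubit, showing the density matrix relaxing from a pure superposition to a 50/50 mixture.

```python
import numpy as np

# Toy open-system evolution (illustrative sketch): Euler integration of
# the pure-dephasing Lindblad equation
#   d(rho)/dt = gamma * (Z rho Z - rho),   Z = sigma_z,
# which is the Lindblad form with jump operator sqrt(gamma) * sigma_z.
Z = np.diag([1.0, -1.0])
gamma, dt, steps = 1.0, 1e-3, 5000

# Start in the pure superposition (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

for _ in range(steps):
    rho = rho + dt * gamma * (Z @ rho @ Z - rho)

print(np.round(rho, 4))      # off-diagonals have decayed away
print(np.trace(rho @ rho))   # purity: has dropped from 1 toward 1/2
```

Note that the trace is exactly preserved at every step; only the coherences (and hence the purity) decay.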

Now, again, you might argue that some huge “bubble” system (say, the whole universe if you like) enclosing everything still somehow exhibits the original nucleus (or cat) superposition, but who cares? One would only care if one demanded that the nucleus’s ontology (or the cat’s) must have a classical notion of agreement with the ontology of the bubble. But giving up that subtle assumption gives rise to no conflicts with experiment, so why keep it?

But, we’re talking matters of principle, not practicality.

If it helps, I am considering the state of the entire, well-defined, closed universe, sufficiently long after the cat experiment that everything has been in causal contact with it. I reserve the right to loosen those assumptions later 🙂 but right now they give the clearest indication of where the problem is.

By dividing the universe into the quantum system, which does include superpositions, and the environment, which is where you obtain the probabilities from, it seems like you’re just back to the old arbitrary Copenhagen divide between the quantum microworld and the classical macroworld.

When I first heard about decoherence, I thought that was the mistake it was making. But without that separation it still manages to tell you why you lose the interference effects; it just can no longer reproduce a discrete probabilistic choice between two alternatives.

It seems pretty likely we’re not going to end up agreeing (or even maybe understanding each other)…

I would very much like someone to do Penrose’s space-based laser cantilever experiment (which I don’t have a reference for, sorry) at undoubtedly extortionate cost. If one can calculate a rate for the decoherence/”collapse” of a macroscopic oscillator, it’s certainly different from his prediction (which involves G) and probably the predictions of other theories.

We do not have appropriate mental pictures of what things are really like.

For instance, although it may make sense to think about point-like particles and individual waves in themselves, all the phenomena that we can experience for ourselves are collective. A grain of sand is point-like because we imagine it to be so and we can make it behave in a point-like way. That picture, however, breaks down on a finer scale. A water wave appears to be a collective long-range phenomenon despite being composed of molecules whose motion is not so wavy. Whatever the components of reality are, they resolve themselves into known “collective” behaviors when we measure them. If you are not happy with me calling “point-like” collective, I could say “clumpy” instead. A particle is stuff that is really really clumpy, whereas a plane wave is the least clumpy thing we can imagine except for nothing at all. A wave packet is moderately clumpy, and it resolves itself into either form depending on the clumpiness of the measuring device.

You could say that it is an error to draw an analogy to, on the one hand, how one picture breaks down at some scale and has to be replaced by another picture, and on the other hand, how logic itself seems to break down when it faces the measurement problem.

But our imaginations are subject to clumpiness as well. Since humans are very clumpy things, we can’t really imagine ourselves taking both paths when the road forks. We can only fancifully imagine what it would be like to time travel or have multiple mes and the like. But acknowledging that this is a failing of our imagination doesn’t mean that it does not happen. We are way too clumpy to imagine properly what it would be like to be a wave packet.

You might think that our restricted view on things means that we can’t get all of the qualities of the information. That may or may not be true. What it does mean is that the categories that we have for sorting phenomena are never quite satisfactory. It could be that 1) the boxes are collectively too small and we will never know what we are missing. Or it could be that 2) they are collectively too large because the idea of a wave and the idea of a particle are too complicated to apply in every sense to something like an electron.

So in which of my pictures is the wave function real? In which is it imaginary?

Chris #42– I think we’re indeed talking past each other. Let me make this all simpler.

Ask me the state of a particular system. I compute its density matrix, and interpret the eigenvalues as probabilities over the definite eigen-state-vectors.

That’s my basic axiom for quantum mechanics — I am not attempting to derive this axiom from anything else! If there are any other axioms, I throw those out. This is my axiom, and I want to know if it conflicts with observation or experiment in any way.

I can then regard the actual state of the system as being one of those eigen-state-vectors, with probability given by the corresponding eigenvalue. All of this is in perfect analogy with the classical case of a classical probability distribution over classical states.

Now, in quantum mechanics, there may be a bigger, enclosing “super-system” whose own density matrix looks very different from that of our smaller, original system contained therein — maybe the original system has a classical density matrix over classical-looking states (alive or dead cat, for example), and the super-system is in some bizarre pure superposition state. But I don’t care, because that super-system is not the cat — indeed, the supersystem is humongous, being R light-seconds in radius after R seconds.
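The system/super-system contrast can be illustrated with the smallest possible example (my own sketch, with two qubits standing in for the cat and its environment): the two-qubit “super-system” is in a pure entangled superposition, while the reduced density matrix of either qubit alone is a classical-looking 50/50 mixture.

```python
import numpy as np

# Toy "super-system": two qubits in the pure entangled state
# (|00> + |11>)/sqrt(2).  (Illustrative only -- the two qubits stand in
# for, say, the cat and everything it has interacted with.)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

rho_full = np.outer(psi, psi)        # pure: a single unit eigenvalue
print(np.linalg.eigvalsh(rho_full))  # spectrum of the super-system

# Partial trace over the second qubit gives the first qubit's own
# density matrix: a classical-looking 50/50 mixture.
rho_sub = np.trace(rho_full.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_sub)                       # diag(1/2, 1/2)
```

The super-system is in a maximally “weird” pure state, yet nothing about the subsystem’s own density matrix looks non-classical.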

Where’s the confusion? So what if the super-system is in a weird state? It’s not the cat. It’s not even Earth anymore, since Earth is a tiny fraction of a light-second in size.

Your confusion seems to be all about the super-system being still in a superposition. I don’t see why that matters unless you decide to make it matter. In particular, I do not try to interpret in any classical sense what the still-superposed quantum state of the super-system means. I don’t say it’s a many-worlds set of classical copies of anything. It’s just what it is, some weird state that doesn’t have well-defined values of certain classical properties. But since it’s not the cat, or even Earth — both of whose density matrices describe a classical ontology with respect to the life or death of the cat — I don’t really care what the state of the super-system is.

This is not Copenhagen. There’s no fixed Heisenberg cut dividing “small” quantum systems on one side from “big” classical systems on the other. Quantum mechanics applies to all systems here. Every system has a density matrix that we can compute, evolving according to a particular Lindblad equation, and that tells us what we need to know about that particular system’s ontology, but not about the ontology of other systems, even if those other systems enclose the system we’re focusing on.

One thinks of quantum mechanics in this picture as a huge number of systems, all with their own local density matrices and their own resulting local ontologies, interacting with one another and decohering against one another.

You can fix your sights on a particular system, compute its density matrix (which describes the local ontology of just that system), and then follow the evolution of that density matrix through time according to its Lindblad equation. If the system is very tiny and well-isolated, the density matrix will often look very quantum-mechanical, in the sense that its eigenvectors are often weird quantum superpositions without sharp labels corresponding to classical features, whereas if the system is fairly big and in frequent contact with a larger environment, it will almost always look very classical, in the sense that its eigenvectors remain sharp Gaussians in position and momentum space.

“science is about the world” is wrong, and why science has become a hubris-filled fundamentalist field …

it is merely a method, and NOT the only one, for exploring existence.

“Is the wave function real/physical, or is it merely a way to calculate probabilities?”

The map is not the territory.

The wave function is a tool used to calculate probabilities. It is real only in the sense that, as long as those probabilities correctly reproduce the results of experiments, some relevant aspect of the physical reality which governs those experiments is successfully captured by this abstract construct.

The wave function is an abstract mathematical description of reality, but the fact that this description is successful doesn’t mean that physical reality is made of wave functions (just as the fact that words can be used to describe reality doesn’t mean reality is made of words).

It also doesn’t mean that better descriptions of reality are not possible although it does constrain them.

Probably my failure to get how any of this can be self-consistent is the absence of any definition of what the probabilities mean. There may well be philosophical dragons there, but when your world is based quite so fundamentally on them they probably need addressing.

I thought I could describe how this works in many worlds. But I can’t convince myself the algebra is going to work to get me amplitude squared.

What interpretation do you put on the partway decohered state? It’s not completely a probability yet. Probably none: you’ll just tell me to wait for it to sort itself out, and then it will be a probability. But as a matter of principle, it never quite exactly is, no matter how many exponentials become involved.

In classical mechanics I can stop the experiment whenever I want, take the probabilities as they stand, split out some cases separately, continue the calculation, and combine at the end. With these half-decohered objects that won’t be true. So the eigenvalues are only the “probability” for “things” to “happen” when you wait long enough before you ask. With classical probabilities you can slice as finely as you want.

In the EPR experiment, I’m guessing that the two observers don’t get to have definite corresponding results (they have a superposition of all the corresponding results) until their future light cones touch? And then, only for people in that overlap? Which of course, includes them by the time they meet up to compare results…

Chris #48– You write “Probably my failure to get how any of this can be self-consistent is the absence of any definition of what the probabilities mean. There may well be philosophical dragons there, but when your world is based quite so fundamentally on them they probably need addressing.”

I don’t know what probabilities mean, at least rigorously. Nobody does. So I simply take them for granted as primitive, fundamental entities. If there are any lingering troubles with the formulation of quantum mechanics that results, then that’s not the fault of quantum mechanics, but of our limited understanding of the meaning of probability itself.

So in the axiomatic structure of the formulation of quantum mechanics I’ve been describing, avoiding the question of what probabilities actually mean is a feature, not a bug. It explicitly separates questions about the meaning of probability from questions about quantum mechanics.

Without a rigorous definition of probability, it’s a fool’s errand to go looking for it in a particular formulation of quantum mechanics. The approach I’ve been describing avoids asking quantum mechanics to somehow generate a rigorous concept of probability from scratch — that’s the problem many-worlders unavoidably have. Just put probability into the theory from the beginning, and leave till later someone coming along (a philosopher?) to define what probability means.

You also write “What interpretation do you put on the partway decohered state? It’s not completely a probability yet. Probably none: you’ll just tell me to wait for it to sort itself out, and then it will be a probability. But as a matter of principle, it never quite exactly is, no matter how many exponentials become involved.”

That’s not quite true. At every instant of time, the density matrix is always a Hermitian, positive semi-definite, trace-one matrix, so you can always diagonalize it, and its eigenvalues are always real, nonnegative, and sum to one. So the eigenvalues always have the interpretation of probabilities before, during, and after decoherence. The question now is what states those are probabilities for.
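A quick numerical sanity check of that claim (illustrative only, using randomly generated states): any convex mixture of pure states is Hermitian, has unit trace, and has nonnegative eigenvalues summing to one, so its spectrum can always be read as a probability distribution.

```python
import numpy as np

# Illustrative check with random states: build rho as a convex mixture of
# random pure states and verify that its spectrum reads as probabilities.
rng = np.random.default_rng(0)
dim = 4
weights = rng.random(3)
weights /= weights.sum()                  # convex weights

rho = np.zeros((dim, dim), dtype=complex)
for w in weights:
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    v /= np.linalg.norm(v)                # normalized pure state
    rho += w * np.outer(v, v.conj())

eigvals = np.linalg.eigvalsh(rho)
print(np.allclose(rho, rho.conj().T))     # Hermitian: True
print(np.trace(rho).real)                 # trace one
print(eigvals)                            # real, nonnegative, sum to one
```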

Before decoherence, of course, the system being studied has a trivial density matrix with a single, unit eigenvalue, so the system has probability one associated to that single superposition-state. But during and after decoherence, the system’s density matrix becomes nontrivial, and develops a nontrivial set of probabilities.

In addition to the actual eigenvalues smoothly (but exponentially quickly) changing, the eigen-state-vectors are also smoothly (but exponentially quickly) changing. In the pre-decoherence state, the single nontrivial eigen-state-vector is a weird quantum superposition-state. During the brief decoherence period, the new eigen-state-vectors look less and less quantum and more and more classical, until at the end they look exponentially close to classical states.

But that’s good enough, unless you want to claim that classical states are measure-zero elements of the Hilbert space. A classical state is one that looks approximately Gaussian in both position- and momentum-space, and there’s always going to be some wiggle room in that definition — plenty to accommodate the above description of the decoherence process.

Our growth and becoming other than we are — in conjunction with instantaneity — is fundamental to any truly unified or complete explanation, description, or understanding of physics. This is certainly an ONGOING/CONNECTED AND FUNDAMENTAL EXPERIENCE, as we typically continue living and growing at/from the time of our birth. So, for example, how is it that we are born “ready in advance”?

Strange how modern physics has basically or entirely neglected this.

It is impossible to fully and properly understand physics AND thought apart from ALL DIRECT (pertinent, related, and significant) bodily/sensory experience.

Would anyone seriously state that the shifting and variable nature of thought is entirely separate from the forces/energy of physics (including quantum mechanical)?