Guest Post: David Wallace on the Physicality of the Quantum State

The question of the day seems to be, “Is the wave function real/physical, or is it merely a way to calculate probabilities?” This issue plays a big role in Tom Banks’s guest post (he’s on the “useful but not real” side), and there is an interesting new paper by Pusey, Barrett, and Rudolph that claims to demonstrate that you can’t simply treat the quantum state as a probability calculator. I haven’t gone through the paper yet, but it’s getting positive reviews. I’m a “realist” myself, as I think the best definition of “real” is “plays a crucial role in a successful model of reality,” and the quantum wave function certainly qualifies.

To help understand the lay of the land, we’re very happy to host this guest post by David Wallace, a philosopher of science at Oxford. David has been one of the leaders in trying to make sense of the many-worlds interpretation of quantum mechanics, in particular the knotty problem of how to get the Born rule (“the wave function squared is the probability”) out of this formalism. He was also a participant at our recent time conference, and the co-star of one of the videos I posted. He’s a very clear writer, and I think interested parties will get a lot out of reading this.

———————————-

Why the quantum state isn’t (straightforwardly) probabilistic

In quantum mechanics, we routinely talk about so-called “superposition states” – both at the microscopic level (“the state of the electron is a superposition of spin-up and spin-down”) and, at least in foundations of physics, at the macroscopic level (“the state of Schrodinger’s cat is a superposition of alive and dead”). Rather a large fraction of the “problem of measurement” is the problem of making sense of these superposition states, and there are basically two views. On the first (“state as physical”), the state of a physical system tells us what that system is actually, physically, like, and from that point of view, Schrodinger’s cat is seriously weird. What does it even mean to say that the cat is both alive and dead? And, if cats can be alive and dead at the same time, how come when we look at them we only see definitely-alive cats or definitely-dead cats? We can try to answer the second question by invoking some mysterious new dynamical process – a “collapse of the wave function” whereby the act of looking at half-alive, half-dead cats magically causes them to jump into alive-cat or dead-cat states – but a physical process which depends for its action on “observations”, “measurements”, even “consciousness”, doesn’t seem scientifically reputable. So people who accept the “state-as-physical” view are generally led either to try to make sense of quantum theory without collapses (that leads you to something like Everett’s many-worlds theory), or to modify or augment quantum theory so as to replace it with something scientifically less problematic.

On the second view, (“state as probability”), Schrodinger’s cat is totally unmysterious. When we say “the state of the cat is half alive, half dead”, on this view we just mean “it has a 50% probability of being alive and a 50% probability of being dead”. And the so-called collapse of the wavefunction just corresponds to us looking and finding out which it is. From this point of view, to say that the cat is in a superposition of alive and dead is no more mysterious than to say that Sean is 50% likely to be in his office and 50% likely to be at a conference.

Now, to be sure, probability is a bit philosophically mysterious. It’s not uncontroversial what it means to say that something is 50% likely to be the case. But we have a number of ways of making sense of it, and for all of them, the cat stays unmysterious. For instance, perhaps we mean that if we run the experiment many times (good luck getting that one past PETA), we’ll find that half the cats live, and half of them die. (This is the Frequentist view.) Or perhaps we mean that we, personally, know that the cat is alive or dead but we don’t know which, and the 50% is a way of quantifying our lack of knowledge. (This is the Bayesian view.) But on either view, the weirdness of the cat still goes away.

So, it’s awfully tempting to say that we should just adopt the “state-as-probability” view, and thus get rid of the quantum weirdness. But this doesn’t work, for just as the “state-as-physical” view struggles to make sense of macroscopic superpositions, so the “state-as-probability” view founders on microscopic superpositions.

Consider, for instance, a very simple interference experiment. We split a laser beam into two beams (Beam 1 and Beam 2, say) with a half-silvered mirror. We bring the beams back together at another such mirror and allow them to interfere. The resultant light ends up being split between (say) Output Path A and Output Path B, and we see how much light ends up at each. It’s well known that we can tune the two beams to get any result we like – all the light at A, all of it at B, or anything in between. It’s also well known that if we block one of the beams, we always get the same result – half the light at A, half the light at B. And finally, it’s well known that these results persist even if we turn the laser so far down that only one photon passes through at a time.

According to quantum mechanics, we should represent the state of each photon, as it passes through the system, as a superposition of “photon in Beam 1” and “photon in Beam 2”. According to the “state as physical” view, this just means that the photon is in a strange kind of non-local state. But on the “state as probability” view, it seems to be shorthand for “the photon is either in Beam 1 or Beam 2, with equal probability of each”. And that can’t be correct. For if the photon is in Beam 1 (and so, according to quantum physics, described by a non-superposition state, or at least not by a superposition of beam states) we know we get result A half the time, result B half the time. And if the photon is in Beam 2, we also know that we get result A half the time, result B half the time. So whichever beam it’s in, we should get result A half the time and result B half the time. And of course, we don’t. So, just by elementary reasoning – I haven’t even had to talk about probabilities – we seem to have ruled out the “state-as-probability” view.
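
Here is a minimal numerical sketch of that argument (the particular 50/50 beam-splitter convention below is an illustrative assumption, not something fixed by the description above; any lossless convention shows the same contrast between the two readings):

```python
# Sketch: quantum amplitudes vs. the "state as probability" reading of the interferometer.
# Convention (an assumption): symmetric 50/50 beam splitters with a factor of i on reflection,
# and a phase plate inserted in Beam 2.
import numpy as np

BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)              # 50/50 beam splitter

def quantum_output(phi):
    """Propagate amplitudes: input port -> first splitter -> phase plate -> second splitter."""
    psi_in = np.array([1, 0], dtype=complex)       # photon enters one input port
    phase = np.diag([1, np.exp(1j * phi)])         # phase plate in Beam 2
    psi_out = BS @ phase @ BS @ psi_in
    return np.abs(psi_out) ** 2                    # probabilities at outputs A and B

def classical_mixture_output(phi):
    """The 'state as probability' reading: the photon is in Beam 1 OR Beam 2, 50% each."""
    p_out = np.zeros(2)
    for beam in (0, 1):
        psi = np.zeros(2, dtype=complex)
        psi[beam] = 1                              # definitely in this beam
        phase = np.diag([1, np.exp(1j * phi)])
        p_out += 0.5 * np.abs(BS @ phase @ psi) ** 2
    return p_out

for phi in (0.0, np.pi / 2, np.pi):
    print(round(phi, 2), quantum_output(phi).round(3), classical_mixture_output(phi).round(3))
# The amplitude calculation swings from all-at-one-output to all-at-the-other as the phase varies;
# the either-Beam-1-or-Beam-2 calculation gives 50/50 no matter what.
```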

Indeed, we seem to be able to see, pretty directly, that something goes down each beam. If I insert an appropriate phase factor into one of the beams – either one of the beams – I can change things from “every photon ends up at A” to “every photon ends up at B”. In other words, things happening to either beam affect physical outcomes. It’s hard at best to see how to make sense of this unless both beams are being probed by physical “stuff” on every run of the experiment. That seems pretty definitively to support the idea that the superposition is somehow physical.

There’s an interesting way of getting around the problem. We could just say that my “elementary reasoning” doesn’t actually apply to quantum theory – it’s a holdover of old, bad, classical ways of thinking about the world. We might, for instance, say that the kind of either-this-thing-happens-or-that-thing-does reasoning I was using above isn’t applicable to quantum systems. (Tom Banks, in his post, says pretty much exactly this.)

There are various ways of saying what’s problematic with this, but here’s a simple one. To make this kind of claim is to say that the “probabilities” of quantum theory don’t obey all of the rules of probability. But in that case, what makes us think that they are probabilities? They can’t be relative frequencies, for instance: it can’t be that 50% of the photons go down the left branch and 50% go down the right branch. Nor can they quantify our ignorance of which branch it goes down – because we don’t need to know which branch it goes down to know what it will do next. So to call the numbers in the superposition “probabilities” is question-begging. Better to give them their own name, and fortunately, quantum mechanics has already given us a name: amplitudes.

But once we make this move, we’ve lost everything distinctive about the “state-as-probability” view. Everyone agrees that according to quantum theory, the photon has some amplitude of being in Beam 1 and some amplitude of being in Beam 2 (and, indeed, that the cat has some amplitude of being alive and some amplitude of being dead); the question is, what does that mean? The “state-as-probability” view was supposed to answer, simply: it means that we don’t know everything about the photon’s (or the cat’s) state; but that now seems to have been lost. And the earlier argument that something goes down both beams remains unscathed.

Now, I’ve considered only the most straightforward kind of state-as-probability view you can think of – a view which I think is pretty decisively refuted by the facts. It’s possible to imagine subtler probabilistic theories – maybe the quantum state isn’t about the probabilities of each term in the superposition, but it’s still about the probabilities of something. But people’s expectations have generally been that the ubiquity of interference effects makes that hard to sustain, and a succession of mathematical results – from classic results like the Bell-Kochen-Specker theorem, to cutting-edge results like the recent theorem by Pusey, Barrett and Rudolph – have supported that expectation.

In fact, only one currently-discussed state-as-probability theory seems even half-way viable: the probabilities aren’t the probability of anything objective, they’re just the probabilities of measurement outcomes. Quantum theory, in other words, isn’t a theory that tells us about the world: it’s just a tool to predict the results of experiment. Views like this – which philosophers call instrumentalist – are often adopted as fall-back positions by physicists defending state-as-probability takes on quantum mechanics: Tom Banks, for instance, does exactly this in the last paragraph of his blog entry.

There’s nothing particularly quantum-mechanical about instrumentalism. It has a long and rather sorry philosophical history: most contemporary philosophers of science regard it as fairly conclusively refuted. But I think it’s easier to see what’s wrong with it just by noticing that real science just isn’t like this. According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils, astrophysicists talk about stars so they can understand photoplates, virologists talk about viruses so they can understand NMR instruments, and particle physicists talk about the Higgs Boson so they can understand the LHC. In each case, it’s quite clear that instrumentalism is the wrong way around. Science is not “about” experiments; science is about the world, and experiments are part of its toolkit.

73 thoughts on “Guest Post: David Wallace on the Physicality of the Quantum State”

  1. Sean has said most of what I’d have said had I kept up with the discussion over the weekend. But let me make a couple of points:

    (1) On Tom Banks’ distinction between mathematics and words. Firstly, I think Sean is right that the distinction cross-classifies physics and philosophy. (And, as it happens, my professional training is mostly in theoretical physics.)

    Secondly, if you’re interested in answering questions as to what’s going on in physics, of course you’re going to use words. Tom’s original post is written largely in English, and that’s not just as a courtesy to less-mathematically-inclined readers: it’s because it isn’t possible to establish the points he wants to make without some verbal reasoning. (The disagreement can’t be settled by the maths alone, after all, because Tom, and Sean, and I all agree about all of the maths.)

    Thirdly, it’s of course possible, to a very large extent, to make progress in physics by mostly doing maths, and just appealing from time to time to tacitly-understood, pretty-reliable ideas about how the maths relates to observational reality. “Shut up and calculate” is the usual, apposite name for this approach, and it’s a perfectly sensible approach for many purposes, but to do it consistently you have to do both parts. If you stop shutting up and try to explain, in words, why other people are wrong, you can’t consistently criticise them for using verbal reasoning in their discussion.

    (2) On decoherence and density operators. Various people point out, quite correctly (though see below), that the interference phenomena that spoil a probability-based approach to quantum mechanics become negligible at the macroscopic scale due to decoherence processes. That means that you can (near enough) get away with treating macroscopic superpositions as probabilistic. But that doesn’t give you a consistent way of thinking about quantum mechanics unless you can (a) treat microscopic superpositions as probabilistic (which, as I argue in the main post, you can’t), or (b) explain why even a macroscopic-level physical state can look like a probabilistic state (which leads you to the Everett interpretation), or (c) explain just when the transition between the two occurs (which means accepting a collapse of the wavefunction).

    For what it’s worth, I agree with Carl @37 that, since there’s nothing in the equations that corresponds to collapse, we should try to avoid introducing it; so, like Sean, I’m led to (b), which (assuming we don’t add hidden variables or suchlike) basically is the Everett interpretation.

    (As a point of interest, that’s not the majority opinion in philosophy of physics: the Everett interpretation is far more popular among physicists than philosophers. The majority of philosophers do want to introduce collapse, or else do want to add hidden variables or suchlike.)

    (3) Even technically speaking, it’s not really true that we can interpret macroscopic states probabilistically and still regard the dynamics as given by the Schrodinger equation. You can do that if you evolve the system forwards in time, but you’ll get the answer wildly wrong if you evolve it backwards in time. (The decoherent-histories framework is explicitly time-directed, if you want to put it that way.) So I don’t think a consistently probabilistic reading of the quantum state of even a macroscopic system is compatible with the idea that the Schrodinger equation is fundamental, or more generally, with the idea that the fundamental equations of physics are time- (or CPT-) invariant.

  2. David Wallace (#51)–

    Another option is not to ascribe a probabilistic interpretation directly to state vectors at all — treating them instead as fundamental, irreducible states — and only ascribe a probabilistic interpretation to density matrices. The idea is that when a system’s density matrix is nontrivial, its eigenvalues are your probabilities.

    And, rather miraculously, that’s precisely what happens when you actually model a physical measurement process, with a measurement device and an environment. There’s no need to ascribe a second, additional layer of probabilistic interpretation directly to state vectors — at best, it’s redundant to do so. Indeed, when you attempt to do so, you run into all the problems and contradictions brought up by, among others, Pusey, Barrett, and Rudolph.
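
    A toy version of that calculation, for concreteness (the three “environment” qubits below stand in, purely illustratively, for a real apparatus and environment):

```python
# Toy measurement model: a qubit in alpha|0> + beta|1> is "measured" by letting a few
# environment qubits copy its basis state. Tracing out the environment leaves a diagonal
# density matrix whose eigenvalues are the Born-rule weights |alpha|^2 and |beta|^2.
import numpy as np

alpha, beta = 0.6, 0.8          # arbitrary real amplitudes with alpha**2 + beta**2 = 1
n_env = 3                       # number of environment qubits (purely illustrative)
dim_env = 2 ** n_env

# Joint state after the interaction: alpha |0>|00...0> + beta |1>|11...1>
joint = np.zeros((2, dim_env), dtype=complex)
joint[0, 0] = alpha             # system |0>, environment record |00...0>
joint[1, dim_env - 1] = beta    # system |1>, environment record |11...1>

# Reduced density matrix of the system: rho = Tr_env |joint><joint|
rho = joint @ joint.conj().T
print(np.round(rho.real, 3))    # -> diag(0.36, 0.64)
# The off-diagonal terms vanish because the two environment records are orthogonal;
# for a realistic environment they would be exponentially small rather than exactly zero.
```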

    The only remaining question is if we need to worry that some super-big system enclosing our experiment (and enclosing the whole Earth, actually) is still carrying the un-decohered superposition, and the answer is that there’s no deep reason to care, except for classical prejudice and a version of the fallacy of composition/division. If you take a system-centric approach to ontology in quantum mechanics — what is each particular system’s density matrix telling you? — then you never need to worry about what any huge all-encompassing system looks like, let alone ascribe it a “many worlds” interpretation.

    What I don’t understand is why people aren’t turning to the extremely straightforward aforementioned interpretation: One layer of probability, manifested as density matrices, full stop. Could you please enlighten us?

  3. David Wallace asks:

    To make this kind of claim is to say that the “probabilities” of quantum theory don’t obey all of the rules of probability. But in that case, what makes us think that they are probabilities?

    Matt Leifer (unintentionally) replies:

    An argument that I personally find motivating is that quantum theory can be viewed as a noncommutative generalization of classical probability theory, as was first pointed out by von Neumann. My own exposition of this idea is contained in this paper. Even if we don’t always realize it, we are always using this idea whenever we generalize a result from classical to quantum information theory. The idea is so useful, i.e. it has such great explanatory power, that it would be very puzzling if it were a mere accident, but it does appear to be just an accident in most psi-ontic interpretations of quantum theory. For example, try to think about why quantum theory should be formally a generalization of probability theory from a many-worlds point of view.

  4. This exchange is a near-perfect example of why we use mathematics to explain the world: because natural language is a piss-poor alternative.

  5. Matt @52: in a word, entanglement.

    The actual mathematical process by which we’re obtaining density operators from pure states is by letting system A get entangled with system B and then tracing out system B. If systems A and B are spin-half particles and we prepare them in a singlet state, the probabilistic interpretation of the density operators of the separate systems isn’t available because (a) it’s indeterminate what the mixture is supposed to be; (b) more importantly, we know that measurement outcomes on the whole system can’t be represented by classical correlations.
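
    To make point (a) concrete, a short numerical sketch (the z- and x-basis ensembles below are just illustrative choices):

```python
# Point (a) in numbers: the reduced state of one half of a singlet is I/2, and two quite
# different ensembles of pure states reproduce it exactly, so "which mixture?" has no answer.
import numpy as np

# Singlet state of two spin-1/2 particles: (|01> - |10>)/sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# Reduced density matrix of particle A: reshape into (A, B) indices, then trace over B.
psi = singlet.reshape(2, 2)
rho_A = psi @ psi.conj().T
print(np.round(rho_A.real, 3))          # -> [[0.5 0. ] [0.  0.5]], i.e. I/2

# Two candidate ensembles that both give exactly this density operator:
up_z, down_z = np.array([1.0, 0.0]), np.array([0.0, 1.0])
up_x, down_x = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)
mix_z = 0.5 * np.outer(up_z, up_z) + 0.5 * np.outer(down_z, down_z)
mix_x = 0.5 * np.outer(up_x, up_x) + 0.5 * np.outer(down_x, down_x)
print(np.allclose(rho_A, mix_z), np.allclose(rho_A, mix_x))   # True True
```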

    Now, of course, if system A is Schrodinger’s cat, and system B is its environment, the sort of experiments that show up this problem are (to put it mildly) technically impossible to perform. So we can get away with applying a probability reading there (modulo my worry about time asymmetry previously). But now we’re back to the issue of needing to interpret the state differently at macro- and micro-scales, only this time the problem’s at the level of density operators rather than pure states.

    Aaron F@53: Matt Leifer’s argument is very interesting and elegant and I can’t properly do it justice here; the soundbite answer would be, “why think that a non-commutative generalisation of probability is still probability?” It certainly doesn’t seem to be probability if probability=relative frequency, or probability=Bayesian degree of confidence. (Matt also raises a nice philosophical problem for many-worlds: if the world isn’t really probabilistic, how come there are so many analogies between the mathematical structure of quantum theory and probability theory? I’m going to pass on that question to avoid derailing the discussion.)

    Mark@54: false dichotomy. Even papers in pure maths and theoretical physics advance their argument for the most part in natural language, albeit using plenty of words that don’t turn up in everyday usage; that’s because communication and reasoning take place in language and maths isn’t a language. (Try saying “natural language is a piss-poor alternative to mathematics” in mathematics.) However, much (most?) of what theoretical physicists use language to talk about is abstract mathematics. If the level of maths on this thread is low, it’s mostly because I (and I think others) are trying to keep the discussion accessible to a wide audience.

  6. David Wallace wrote: Various people point out, quite correctly (though see below), that the interference phenomena that spoil a probability-based approach to quantum mechanics become negligible at the macroscopic scale due to decoherence processes.

    I believe the FM radio wave from the 93.9 FM WNYC radio station is macroscopic and thus “real” without any decoherence of its underlying photons. I believe this wave can still show interference phenomena.

  7. David Wallace #55– Thanks for responding. Some comments:

    For a pair of spin-1/2 particles, the spin-singlet state is a measure-zero phenomenon. More generally, density matrices with degenerate eigenvalues are always measure-zero in the space of all density matrices. If there’s even the slightest deviation, the degeneracy is broken and the problem goes away. Since measure-zero phenomena are essentially unrealizable in practical experiments — there is absolutely no way to get a perfect spin-singlet in any realistic experiment — there’s really no problem here.

    One can show by direct calculation that if the experimental apparatus has many degrees of freedom, then even tiny degeneracy breaking in the subject system becomes highly robust. (This is a point that David Albert misses in an appendix to his book on interpretations of quantum mechanics.)

    If accommodating measure-zero possibilities is nonetheless demanded, then the interpretation is still tenable: The density matrix doesn’t pick out any one preferred diagonalizing ontic basis, but we can just interpret that to mean that the system’s underlying ontic state can be anything in the degenerate eigenspace.

    Your second point, that “we know that measurement outcomes on the whole system can’t be represented by classical correlations,” is less clear to me. I think you mean to say that the apparatus+subject composite system is still in a pure state after the experiment.

    But in any realistic experiment, the environment rapidly decoheres the whole apparatus+subject composite system so that its correlations become classical. Then there’s some super-big system (including the whole Earth after a fraction of a second) that still isn’t classical with respect to the correlations, but again, as I’ve been saying, that’s not the subject system (the cat, or the electron), or the apparatus, or even the Earth.

    Asserting that the super-system must have an ontology that agrees with the ontology of all its subsystems in a classical sense is an unexamined assumption — essentially a form of the fallacy of composition/division — but dropping it doesn’t conflict with any observations or experiments. All observers would agree that the subject system and the apparatus and even the Earth are now well-described by nontrivial density matrices consistent with the experiment, but that the super-system is not.

    Certainly none of this is any weirder than pilot waves or many worlds.

    You’ve been very gracious — would you mind explaining my misunderstanding here?

    Thanks!

  8. Doubter @56: Not all interference phenomena spoil probabilistic interpretations. The radio wave is indeed macroscopic, and indeed you can do interference with it, but that interference consists of macroscopically many identical photons each individually in a superposition, but not entangled with each other. Effectively, the result isn’t a macroscopic superposition: it’s a macroscopically determinate state, no more philosophically problematic than a wave on the lake. The kind of macroscopic superposition that would potentially cause problems for the probability interpretation would be something like a superposition of (all the macroscopically many photons over here) with (all the macroscopically many photons over there). That’s much harder to prepare without decoherence spoiling it.

    Matt @57: Thanks for an interesting response. Here’s an attempt to answer.

    I was probably a bit quick regarding indeterminacy of the density operator as a probabilistic mixture. You’re quite right about degeneracy being a non-issue, of course. Having said that, if a density operator is to be interpreted probabilistically then we can perfectly well consider probability distributions over non-orthogonal states, in which case the problem of indeterminacy recurs.

    Regarding the larger point, look at it this way: the density operators of microscopic systems (like components of singlet states) can’t in general be consistently treated as probabilistic mixtures of pure states, because we can do interference experiments to rule out that interpretation. The density operators of macroscopic systems, agreed, can be. But conceptually, it still seems that a physically-interpreted density operator (the micro sort) is a different kind of thing from a probabilistically-interpreted density operator. So how did the one turn into another? Or put another way, if the physical goings on are determined by the pure state, how did we move from a pure state at the micro level to a probabilistic mixture of pure states at the macro level? Appeal to decoherence (so goes the usual argument) won’t do here, because ultimately we can always include the environment in the process, in which case the dynamics is still unitary.

    I think you’re rejecting that last sentence. Effectively (again: I think!) you’re wanting to draw a principled distinction between open and closed systems, so that when a system (like Earth) really is open, the evolution really should be taken as giving us a probabilistically-interpretable density operator.

    Bell (and most philosophers) would reject that basically on the grounds that it’s imprecise. It relies on a transition from closed to open systems that is by its nature not exactly definable. If we really are seeing a change in the fundamental nature of dynamics – from deterministic to indeterministic – we’d better have a precise criterion for it. I suspect you’re not much moved by that objection.

    Let me raise some slightly different-flavoured objections (none are intended to be decisive):

    (1) it’s potentially going to cause trouble for quantum cosmology.
    (2) it’s noteworthy that (at least in the examples I’m aware of) open-system quantum equations are fairly reliably derived by embedding the system in a larger environment (an oscillator or spin bath, say), applying unitary dynamics, and then tracing the environment back out again. (Often we’ll apply the unitary process only for some infinitesimal time, but still.) So the natural reading of the maths does seem to be that the reason the open-system equation applies is that on a larger scale we’ve still got closed-system dynamics.
    (3) It gets in the way of any attempt to understand, rather than postulate, the direction of time. If all dynamics is Schrodinger dynamics, then the underlying dynamics are time-symmetric, and we can ask what additional information (e.g., a low-entropy boundary condition) breaks the symmetry. But the Lindblad equation (say) is explicitly time-irreversible.

    Finally, a quick comment on your last. You say “none of this is any weirder than pilot waves or many worlds”. I actually regard those two possibilities as in very different categories. The pilot wave theory is a modification of the mathematical structure of quantum physics. I think that’s a very bad idea, not because it’s weird but because it’s unmotivated (and because it’s very unclear how to extend it to relativistic field theory, and very clear that any such extension would violate Lorentz covariance). The many-worlds theory, despite the name (cf Chris @25) doesn’t make any alterations to the mathematics; it just takes the “states are physical” assumption and applies it to the Universe as a whole.

  9. I think Dave has drifted from science to philosophy which probably makes for solemn discussion at Oxford high tables but has never contributed much to our empirical knowledge of the world.
    Francis Bacon says “There are also Idols formed by the intercourse and association of men with each other, which I call Idols of the Market-place, on account of the commerce and consort of men there. For it is by discourse that men associate; and words are imposed according to the apprehension of the vulgar. And therefore the ill and unfit choice of words wonderfully obstructs the understanding. Nor do the definitions or explanations wherewith in some things learned men are wont to guard and defend themselves, by any means set the matter right. But words plainly force and overrule the understanding, and throw all into confusion, and lead men away into numberless empty controversies and idle fancies.”

  10. ” The pilot wave theory is a modification of the mathematical structure of quantum physics. I think that’s a very bad idea, not because it’s weird but because it’s unmotivated (and because it’s very unclear how to extend it to relativistic field theory, and very clear that any such extension would violate Lorentz covariance). The many-worlds theory, despite the name (cf Chris @25) doesn’t make any alterations to the mathematics; it just takes the “states are physical” assumption and applies it to the Universe as a whole.”

    I’m pretty sure Prof. Valentini could cite some motivations for pilot wave theory. Anyhow, given that de Broglie/Bohm reproduces so many results of QM is it truly well established that pilot wave theory is genuinely different mathematics, rather than a different formulation? Matrix mechanics, wave mechanics, path integrals are all equivalent. What is the theorem(s) that show pilot waves aren’t? Not to be a bug on a plate, but this proof would be interesting to know about.

    As for the objection that the extension of pilot waves would violate Lorentz invariance, it seems that in one respect the division between microscopic and macroscopic could reasonably be interpreted as a transition between different inertial frames. In this sense, is it not possible that orthodox quantum physics also is not Lorentz invariant under all conditions? The whole discussion is about how quantum physics, with its superpositions and unidirectional time, doesn’t seem to apply in the everyday world. It might be an odd way of thinking, but reading the loss of Lorentz covariance as the reason uniquely quantum phenomena disappear on a macroscopic level could be useful, possibly?

    Despite the enormous experimental evidence for QM and the consistency of the mathematics, quantum physics cannot model the things we see. This may not technically be a paradox but it certainly provokes these endless discussions. Certainly it is why everyday speech calls quantum physics paradoxical.

    Many worlds seems to scoff at the conservation of energy. I tended to think that was in the math too. Where did it go, and how was its disappearance motivated? Perhaps the MWI means that the observable universe may be defined as those spacetime events that obey this symmetry and all the others are in principle unobservable. But where does that come from in the math?

    It’s not that I can really judge between pilot wave and many worlds. But it seems clear that quantum weirdness just does not get abolished in any framework. Pilot wave lumps it all into the pilot wave, while many worlds lumps it into the multiverse. The mathematics are consistent. But if we are going to incorporate quantum physics into our model of the universe, an interpretation, even if its weirdness requires years to get used to, needs to be consistent with the everyday world. Or it’s not a scientific explanation. Or so I think. Relativistic weirdness was gradually assimilated, just like fields or the distinction between momentum and KE. If we can figure out the right kind of quantum weirdness we can get used to it.

  11. David Wallace #58– Once again, I appreciate your thoughtful response. Let me address your points in turn.

    You write “if a density operator is to be interpreted probabilistically then we can perfectly well consider probability distributions over non-orthogonal states, in which case the problem of indeterminacy recurs.”

    That seems like a significant logical leap, and an unexamined assumption. Why can we perfectly well consider probability distributions over non-orthogonal states? Even classically, a probability distribution is only sensible when it’s defined over a set of mutually exclusive possibilities — it’s nonsense to say that you’re considering a probability distribution for a coin of 50%=heads, 50%=metal. Analogously, the inability to define a density matrix whose eigenvectors are spin-z and spin-x is just telling you that spin-z and spin-x are not mutually exclusive in quantum mechanics.

    Indeed, suppose you forget this and try to define a classical probability distribution over non-orthogonal spin-z and spin-x states, respectively |up> and |right>. If you forget about density matrices and just demand that they have classical probabilities p and (1-p) respectively, then when you compute any expectation values by manually weighting the matrix elements by p and (1-p), you find that the answer is equivalent to tracing the corresponding observable against the density matrix ρ obtained by naively adding p|up><up| + (1-p)|right><right|. So all predictions and all observed experimental results are equivalent to assuming that this particular ρ is the density matrix of the system, but if we diagonalize ρ we see that its true ontic basis is something else entirely.
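
    A quick numerical check of this (p = 1/2 below is just an illustrative choice):

```python
# Check: mix the non-orthogonal states |up> (spin-z up) and |right> (spin-x up) with
# weights p and 1-p, then diagonalize the resulting density matrix. Its eigenbasis is
# neither {|up>, |down>} nor {|right>, |left>}, and its eigenvalues are not {p, 1-p}.
import numpy as np

p = 0.5
up = np.array([1, 0], dtype=complex)                    # spin-z up
right = np.array([1, 1], dtype=complex) / np.sqrt(2)    # spin-x up

rho = p * np.outer(up, up.conj()) + (1 - p) * np.outer(right, right.conj())

evals, evecs = np.linalg.eigh(rho)
print(np.round(evals, 3))       # ~ [0.146 0.854], not [0.5 0.5]
print(np.round(evecs.real, 3))  # eigenvectors point "between" the z and x directions
```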

    So quantum mechanics is telling you that certain probability distributions are as senseless as, like I said, 50%=heads, 50%=metal, because the possibilities aren't mutually exclusive. When you try to go ahead anyway, quantum mechanics calls you out for picking an invalid probability scheme and forces a different probability distribution on you — talk about a helpful, responsible theory!

    Next you write "the density operators of microscopic systems (like components of singlet states) can’t in general be consistently treated as probabilistic mixtures of pure states, because we can do interference experiments to rule out that interpretation." That's only true when we enlarge our scope beyond the system whose density matrix we're considering, but then we're cheating because we're really supposed to use that larger system's density matrix.

    In other words, say I have a couple of entangled particles. If I compute one of their density matrices, then that tells me about the ontology at that moment for that one particle. Its dynamics will be non-unitary, because the system is open to the interactions of the other particles, but if I write down the appropriate Lindblad equation, then I can in principle compute the particle's density matrix at all times, and then I can predict its ontology and all observations about it at all times.

    But if I enlarge my scope to include the other particles, then the one particle's density matrix isn't enough, but it's obvious why: Now I'm asking about a different system — a larger system — and so I need to use that larger system's density matrix. The larger system's density matrix may look quite different, and have a different-looking ontology, but it's a different system, so that's okay — insisting that the ontologies be classically agreeable is just a naive classical assumption.

    For macroscopic systems, that weirdness is much less likely — decoherence propagates outward at essentially the speed of light due to irreversible thermal radiation, so systems and their slightly-larger supersystems have a common ontology.

    But in all situations, we can consistently say that the ontology of a specific system is determined by that system's — and only that system's — own density matrix, with epistemic probabilities dictated by the density matrix's eigenvalues. The weirdness here is that slightly enlarging the system can make the ontology look suddenly very different; that happens more often for tiny systems than for big ones, but it doesn't alter the axiomatic structure laid out in this paragraph.

    You write that "the natural reading of the maths does seem to be that the reason the open-system equation applies is that on a larger scale we’ve still got closed-system dynamics." But we never have a truly closed system, at least in a universe that's open (in the Einstein sense) — there's always irreversible thermal radiation outward. By localizing questions of ontology — look at a particular system's density matrix to determine its ontology — we eliminate the need to worry about making this interpretation depend on the question of whether the whole universe is an open or closed system.

    Indeed, because there aren't really any actually, perfectly closed systems in Nature, there are only Lindblad equations — the Schrodinger equation is always just an approximation. But that doesn't mean that a unique direction of time is automatically picked out for the universe — there's still the question of why all actual Lindblad equations for all large physical systems in our universe seem to line up, describing an increase in entropy in the same time direction. So the arrow of time question is not trivially resolved at all — it's just as open and interesting a question as when you assume that the Schrodinger equation governs everything.

    I'm not a Bohmian, but I don't agree with your statement that Bohmian mechanics necessarily violates Lorentz invariance. If you take the wave-functional of a quantum field, Psi(phi(x)), and write down its "Schrodinger equation", then taking the polar decomposition of Psi yields a pair of Bohm dynamical equations as before, and it's all perfectly relativistically invariant. The degrees of freedom are now field values rather than particle positions, of course, but it all works fine, at least for bosons.
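
    Schematically, for a free real scalar field in natural units (a sketch of the standard construction, which may differ in detail from the formulation described above):

```latex
% Functional Schrodinger equation for the wave functional Psi[phi, t] of a free scalar field,
% and its polar decomposition Psi = R e^{iS} (natural units, hbar = 1):
\[
  i\,\partial_t \Psi[\phi,t]
    = \int d^3x \left[ -\tfrac{1}{2}\,\frac{\delta^2}{\delta\phi(x)^2}
      + \tfrac{1}{2}(\nabla\phi)^2 + \tfrac{1}{2}m^2\phi^2 \right] \Psi[\phi,t],
  \qquad \Psi = R\,e^{iS}.
\]
% Real part: a Hamilton-Jacobi-type equation with a "quantum potential" Q;
% imaginary part: a continuity equation for R^2.
\[
  \partial_t S + \int d^3x \left[ \tfrac{1}{2}\left(\frac{\delta S}{\delta\phi}\right)^{\!2}
      + \tfrac{1}{2}(\nabla\phi)^2 + \tfrac{1}{2}m^2\phi^2 \right] + Q = 0,
  \qquad
  Q = -\tfrac{1}{2}\int d^3x\, \frac{1}{R}\,\frac{\delta^2 R}{\delta\phi(x)^2},
\]
\[
  \partial_t R^2 + \int d^3x\, \frac{\delta}{\delta\phi(x)}\!\left( R^2\,\frac{\delta S}{\delta\phi(x)} \right) = 0,
  \qquad
  \partial_t\phi(x,t) = \left.\frac{\delta S}{\delta\phi(x)}\right|_{\phi\,=\,\phi(\cdot,\,t)}.
\]
% The last equation is the guidance equation: the Bohmian field configuration phi(x,t)
% is pushed along by the phase S, just as particle positions are in the ordinary theory.
```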

    The trouble is fermions — a fermionic field doesn't have ordinary-number classical values, but Grassmann anticommuting values. I've never seen a convincing way around this problem.

    Your comments are greatly appreciated.

  13. As a working scientist who is only slightly educated about philosophy… that seems like a piss-poor definition of instrumentalism. We don’t introduce the idea of an electron to understand one experiment, we introduce the idea of an electron to understand all relevant experiments – those relating to atomic structure (spectroscopy), those relating to electricity, those relating to synchrotron radiation, those relating to chemical reactions, etc. From Occam to Newton to now, no one has ever thought it was a good idea to introduce a concept to explain one thing.

    You can wonder if a concept that explains all observations is “real” or not, but at that point you’re debating definitions of words that have no impact, as far as I can tell. In addition, most physicists understand these things are just approximations, so the answer to the question “is an electron real” is 1) probably not, and 2) who cares, anyway?

    There are a host of relevant issues I’m ignorant of, I’m sure. And I’m glad someone is working on the philosophy of QM – scientists are indeed generally content to just ignore such thorny problems. But your presenting a ridiculous straw man of instrumentalism doesn’t make me trust you…

  14. @Charon, one experiment or many isn’t relevant to Wallace’s criticism of instrumentalism. He’s saying the proposed elements of reality (be they electrons or wave functions or whatever) aren’t just a convenient formalism for predicting the results of [any number of] experiments. Rather, the whole *point* of the experiments is to probe the nature of reality.

    The point of scientific experimentation is to learn about the world, not just to learn what the result of a particular experiment will be. To illustrate this: If you had a magic oracle that could tell you what the result of any experiment would be, but you had no underlying theoretical understanding of why those experiments gave those results, you wouldn’t just declare science complete. (At least, I should hope not.)

    If the wave function isn’t real, but is just a calculational tool, then it’s no better than an oracle. Sure, you know how the inputs were manipulated to get the outputs, but if it doesn’t describe the underlying elements of reality, then the question of why its predictions hold true is an open one.

    Regarding electrons being an approximation, sure they may be, but an approximation to what? Likewise, if the wave function isn’t real, what is? You can’t tell me scientists don’t care what the underlying nature of reality is. Until somebody mentions quantum mechanics, everyone admits they care very much. As Wallace said, particle physicists want to know if there’s a Higgs boson, cosmologists want to know what dark energy is, etc. They don’t just want a systematic but inexplicable procedure for predicting what happens when you smash protons together at high speed, or point telescopes at distant galaxies, or perform whatever other relevant experiments there may be.

    “Shut up and calculate” is a cop-out. It’s the equivalent of a paleontologist saying, “As long as we know that pretending dinosaurs existed and extrapolating from there will lead us to correct conclusions about the fossil record (or whatever else paleontologists study), then we don’t care if dinosaurs actually existed or not. That’s a question for *philosophers*!”

  15. In short, “What is the nature of the physical world?” is a scientific question. In fact, it’s the essential scientific question, of which all other such questions are just refinements. Answering that question is the fundamental motivation for scientific experimentation.

    Of course any proposed description of the physical world should be useful for predicting the results of experiments (otherwise it’s untestable), but that doesn’t mean it’s *just* a tool for predicting the results of experiments.

  16. With all the talk about decoherence and expanding bubbles of decoherence due to thermal radiation, aren’t we forgetting the crucial point about the von Neumann/Dirac definitions of QM? At the instant of a “measurement” (or observation, whatever that is) the quantum state instantaneously jumps to an Eigenstate – non-locally and super-luminally. Experiment after experiment have confirmed this uncomfortable fact. We immediately go from the nice linear, local progression of amplitudes into a non-linear, non-local realm and then back again, whereupon the linear evolution of states resumes. This isn’t “consistent mathematics”. It’s an inconsistent step in an algorithm that we follow that turns out to give correct answers, an internal inconsistency within the basic framework of QM. And it’s a major reason why relativity and QM are inconsistent at their core. (If Wigner’s friend on Alpha Centauri had a system that was still entangled with the system on Earth, it wouldn’t take 4+ light-years for a measurement of one to affect the other.)

    We’ve been just “shutting up and calculating” for about 80 years now, just executing the algorithm without being able to explain how or why it works, even as the math was expanded and extrapolated to our current QM field theories. There is no deep “mathematical logic” behind it; it’s just a procedure we’ve learned to follow. So let’s not forget that this is still what we’re doing — just following the algorithm without understanding it. And keep trying to make the huge intellectual leap this understanding seems to require.

    On a related note: see the papers by David Malament (1996) and Hans Halvorson & Rob Clifton, among others, showing that it is impossible to have a relativistic field theory of localizable “particles”. So, we should also be careful about what we ascribe to the “wave functions” that describe the events we describe as “particle” detections, since we don’t really know what those “particles” are, either.

  17. (Comment Cont’d)
    Having said that, many recent experiments (such as precise timings of electron state transitions in molecules, and the incremental manipulation of states in super-cold atoms) seem to invalidate the Copenhagen “interpretation” of the QM realm as being unknowable (The Copenhagen interpretation was actually multiple interpretations that changed over time, and was intentionally never very clear). Perhaps as our technological prowess continues to deepen, we can discover some keys to the puzzle that have so far eluded us. Whether our carefully developed mathematical logic (and intuitions) will continue to apply to the physical world remains to be seen. Past appeals to an underlying order based on circles or pentagrams never quite worked out, despite their inherent logic and beauty, but we managed to find other approaches. We may be at another of those transition points — uncomfortable and frustrating while it’s happening, but necessary to be able to move on. With the flood of new data now coming to us, it would seem premature to state that it is beyond our ability to comprehend and to correct our past misconceptions. But to do so, we need to be honest about just how fundamental those misconceptions really are. Learning how to accurately state the problems will be a crucial step in resolving them.

    P.S. – Link to Malament paper: http://www.socsci.uci.edu/~dmalamen/bio/papers/InDefenseofDogma.pdf

  18. WTW #67, #68

    You write “With all the talk about decoherence and expanding bubbles of decoherence due to thermal radiation, aren’t we forgetting the crucial point about the von Neumann/Dirac definitions of QM? At the instant of a ‘measurement’ (or observation, whatever that is) the quantum state instantaneously jumps to an Eigenstate – non-locally and super-luminally. Experiment after experiment have confirmed this uncomfortable fact. We immediately go from the nice linear, local progression of amplitudes into a non-linear, non-local realm and then back again, whereupon the linear evolution of states resumes. This isn’t ‘consistent mathematics’. It’s an inconsistent step in an algorithm that we follow that turns out to give correct answers, an internal inconsistency within the basic framework of QM.”

    Your statements indicate some misunderstandings about decoherence and density matrices — there’s no more “inconsistent mathematics” here. The system is open during a measurement, and open systems evolve non-unitarily according to the appropriate Lindblad equation; you can show that this causes precisely the sort of hyper-fast, effectively irreversible evolution that we expect from a measurement. You can even compute how long the “collapse” takes to happen.
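
    A minimal sketch of that kind of calculation (pure dephasing with a single Lindblad operator, chosen below for simplicity rather than realism):

```python
# Qubit with pure-dephasing Lindblad dynamics:
#   d(rho)/dt = -i[H, rho] + gamma * (L rho L^dag - 1/2 {L^dag L, rho}),  with L = sigma_z.
# The evolution is linear in rho but non-unitary: the off-diagonal ("interference") terms
# decay like exp(-2*gamma*t), which is one way to put a number on the "collapse" time.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.zeros((2, 2), dtype=complex)          # ignore free evolution for simplicity
L = sz
gamma, dt, steps = 1.0, 0.001, 3000

rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # starts as a pure superposition

def lindblad_rhs(rho):
    unitary_part = -1j * (H @ rho - rho @ H)
    dissipator = gamma * (L @ rho @ L.conj().T
                          - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return unitary_part + dissipator

for n in range(1, steps + 1):                # crude Euler integration
    rho = rho + dt * lindblad_rhs(rho)
    if n in (500, 1500, 3000):
        print(round(n * dt, 3), round(abs(rho[0, 1]), 4), round(abs(rho[0, 0]), 4))
# The coherence |rho_01| falls from 0.5 towards 0 on a timescale of order 1/(2*gamma),
# while the populations on the diagonal stay put.
```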

    Citing the “von Neumann/Dirac” definitions as evidence that quantum mechanics is inconsistent is like citing Aristotle as evidence that biology is inconsistent — the subject has changed over the intervening eons.

  19. “If you had a magic oracle that could tell you what the result of any experiment would be, but you had no underlying theoretical understanding of why those experiments gave those results, you wouldn’t just declare science complete.”

    No, but not because of some fetish for a “theoretical understanding”, but because such a magic oracle would be of little help for designing novel experiments/instruments/gadgets/whatnot.

  20. I’m a physicist working on quantum computing, and I think this paper is garbage. I’ll remind everyone that this is not a peer-reviewed paper. It seems that they are trying to put forth another test for hidden variable theories. I’m very surprised that these people don’t mention anything about John Bell in the paper. The Bell inequalities (circa 1964) give conditions that will tell you definitively whether or not these hidden variables exist, and they have been experimentally confirmed in the 1970s. Then the 1980s. Then the 1990s. Violating Bell’s inequalities is basically the first proof any group must give that they have created an entangled state; they are accepted as fact and put the EPR paradox to rest; quantum mechanics is a complete theory, and no, you can’t think of quantum mechanical objects as classical objects. Sure, wavefunctions are not just statistical tools, I don’t think anyone who has done any substantial quantum mechanics thinks that anymore, especially not any labrats in the AMO field. But they sure aren’t ‘physical’ in the sense that you can pick up a wavefunction and look at it. You need a projection basis, a measurement. The eigenvalues that describe the state describe the physical readouts that we can perform. This paper tries to create a non-entangled state and read out in an entangled basis. It all sounds like a sleight-of-hand trick to me, I wouldn’t give the paper too much attention. It actually looks pretty similar to this paper: http://prl.aps.org/abstract/PRL/v107/i17/e170404 -> two uncoupled states (one pure, one a superposition) that look like an entangled state.

  22. Think of light as an oscillating soap bubble of EM radiation which can “pop” once on one detector only. Think of a photon as the smallest piece of the bubble allowed using the Planck constant and frequency. The concept of photon is useful to simplify maths, but can lead to a misunderstanding of experimental results. The photon has no meaning except as a mathematical tool of QED. The soap bubble model resolves the paradoxes found in polarization experiments, laser beam splitting and all the rest. Bell’s theorem cannot be applied to a single 3D object to rule out hidden variables. One final observation. Oscillating EM bubbles have a small amount of mass. The so-called dark matter is light itself.
