Guest Post: David Wallace on the Physicality of the Quantum State

The question of the day seems to be, “Is the wave function real/physical, or is it merely a way to calculate probabilities?” This issue plays a big role in Tom Banks’s guest post (he’s on the “useful but not real” side), and there is an interesting new paper by Pusey, Barrett, and Rudolph that claims to demonstrate that you can’t simply treat the quantum state as a probability calculator. I haven’t gone through the paper yet, but it’s getting positive reviews. I’m a “realist” myself, as I think the best definition of “real” is “plays a crucial role in a successful model of reality,” and the quantum wave function certainly qualifies.

To help understand the lay of the land, we’re very happy to host this guest post by David Wallace, a philosopher of science at Oxford. David has been one of the leaders in trying to make sense of the many-worlds interpretation of quantum mechanics, in particular the knotty problem of how to get the Born rule (“the wave function squared is the probability”) out of this formalism. He was also a participant at our recent time conference, and the co-star of one of the videos I posted. He’s a very clear writer, and I think interested parties will get a lot out of reading this.

———————————-

Why the quantum state isn’t (straightforwardly) probabilistic

In quantum mechanics, we routinely talk about so-called “superposition states” – both at the microscopic level (“the state of the electron is a superposition of spin-up and spin-down”) and, at least in foundations of physics, at the macroscopic level (“the state of Schrodinger’s cat is a superposition of alive and dead”). Rather a large fraction of the “problem of measurement” is the problem of making sense of these superposition states, and there are basically two views. On the first (“state as physical”), the state of a physical system tells us what that system is actually, physically, like, and from that point of view, Schrodinger’s cat is seriously weird. What does it even mean to say that the cat is both alive and dead? And, if cats can be alive and dead at the same time, how come when we look at them we only see definitely-alive cats or definitely-dead cats? We can try to answer the second question by invoking some mysterious new dynamical process – a “collapse of the wave function” whereby the act of looking at half-alive, half-dead cats magically causes them to jump into alive-cat or dead-cat states – but a physical process which depends for its action on “observations”, “measurements”, even “consciousness”, doesn’t seem scientifically reputable. So people who accept the “state-as-physical” view are generally led either to try to make sense of quantum theory without collapses (that leads you to something like Everett’s many-worlds theory), or to modify or augment quantum theory so as to replace it with something scientifically less problematic.

On the second view (“state as probability”), Schrodinger’s cat is totally unmysterious. When we say “the state of the cat is half alive, half dead”, on this view we just mean “it has a 50% probability of being alive and a 50% probability of being dead”. And the so-called collapse of the wavefunction just corresponds to us looking and finding out which it is. From this point of view, to say that the cat is in a superposition of alive and dead is no more mysterious than to say that Sean is 50% likely to be in his office and 50% likely to be at a conference.

Now, to be sure, probability is a bit philosophically mysterious. It’s not uncontroversial what it means to say that something is 50% likely to be the case. But we have a number of ways of making sense of it, and for all of them, the cat stays unmysterious. For instance, perhaps we mean that if we run the experiment many times (good luck getting that one past PETA), we’ll find that half the cats live, and half of them die. (This is the frequentist view.) Or perhaps we mean that we, personally, know that the cat is alive or dead but we don’t know which, and the 50% is a way of quantifying our lack of knowledge. (This is the Bayesian view.) But on either view, the weirdness of the cat still goes away.

So, it’s awfully tempting to say that we should just adopt the “state-as-probability” view, and thus get rid of the quantum weirdness. But this doesn’t work, for just as the “state-as-physical” view struggles to make sense of macroscopic superpositions, so the “state-as-probability” view founders on microscopic superpositions.

Consider, for instance, a very simple interference experiment. We split a laser beam into two beams (Beam 1 and Beam 2, say) with a half-silvered mirror. We bring the beams back together at another such mirror and allow them to interfere. The resultant light ends up being split between (say) Output Path A and Output Path B, and we see how much light ends up at each. It’s well known that we can tune the two beams to get any result we like – all the light at A, all of it at B, or anything in between. It’s also well known that if we block one of the beams, we always get the same result – half the light at A, half the light at B. And finally, it’s well known that these results persist even if we turn the laser so far down that only one photon passes through at a time.
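
For readers who like to see the numbers, here is a minimal simulation of this interferometer. It is a sketch, not part of the argument itself: the symmetric beam-splitter matrix below (with the “i on reflection” phase choice) and the labelling of the outputs are just one standard set of conventions.

```python
import numpy as np

# Symmetric 50/50 beam-splitter unitary in the (Beam 1, Beam 2) basis.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

psi_in = np.array([1, 0], dtype=complex)  # photon enters the first mirror in one port

# Both beams open: the superposition traverses both arms and recombines.
psi_out = BS @ (BS @ psi_in)
print(np.abs(psi_out) ** 2)               # -> [0. 1.]: every photon exits one output

# Beam 2 blocked: discard the Beam 2 amplitude between the two mirrors.
psi_mid = BS @ psi_in
psi_mid[1] = 0                            # absorber removes the Beam 2 component
p = np.abs(BS @ psi_mid) ** 2             # -> [0.25 0.25]
print(p / p.sum())                        # given the photon survives: [0.5 0.5]
```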

According to quantum mechanics, we should represent the state of each photon, as it passes through the system, as a superposition of “photon in Beam 1” and “photon in Beam 2”. According to the “state as physical” view, this is just a strange kind of non-local state for the photon to be in. But on the “state as probability” view, it seems to be shorthand for “the photon is either in Beam 1 or Beam 2, with equal probability of each”. And that can’t be correct. For if the photon is in Beam 1 (and so, according to quantum physics, described by a non-superposition state, or at least not by a superposition of beam states), we know we get result A half the time, result B half the time. And if the photon is in Beam 2, we also know that we get result A half the time, result B half the time. So whichever beam it’s in, we should get result A half the time and result B half the time. And of course, we don’t. So, just by elementary reasoning – I haven’t even had to talk about probabilities – we seem to rule out the “state-as-probability” rule.
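
To spell out the clash in a worked formula (with phases tuned, as above, so that the recombining mirror sends the full superposition to Output Path A, and writing a_{A1}, a_{A2} for the single-beam amplitudes to reach A, each of modulus 1/√2):

```latex
P_{\text{superposition}}(A)
  \;=\; \Bigl|\tfrac{1}{\sqrt{2}}\,a_{A1} + \tfrac{1}{\sqrt{2}}\,a_{A2}\Bigr|^{2} \;=\; 1,
\qquad
P_{\text{mixture}}(A)
  \;=\; \tfrac{1}{2}\,P(A \mid \text{Beam 1}) + \tfrac{1}{2}\,P(A \mid \text{Beam 2})
  \;=\; \tfrac{1}{2}\cdot\tfrac{1}{2} + \tfrac{1}{2}\cdot\tfrac{1}{2} \;=\; \tfrac{1}{2}.
```

The “it’s really in one beam, we just don’t know which” reading can never predict more than 50% of the light at A; experiment says 100%.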

Indeed, we seem to be able to see, pretty directly, that something goes down each beam. If I insert an appropriate phase factor into one of the beams – either one of the beams – I can change things from “every photon ends up at A” to “every photon ends up at B”. In other words, things happening to either beam affect physical outcomes. It’s hard, at best, to see how to make sense of this unless both beams are being probed by physical “stuff” on every run of the experiment. That seems pretty definitively to support the idea that the superposition is somehow physical.
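
In formulas, this is the standard fringe pattern. With conventions fixed so that zero inserted phase puts all the light at A, a phase factor e^{iφ} in either arm gives

```latex
P(A) \;=\; \left|\frac{1 + e^{i\varphi}}{2}\right|^{2} \;=\; \cos^{2}\!\frac{\varphi}{2},
\qquad
P(B) \;=\; \left|\frac{1 - e^{i\varphi}}{2}\right|^{2} \;=\; \sin^{2}\!\frac{\varphi}{2},
```

so φ = π turns “every photon at A” into “every photon at B”, whichever beam carries the phase plate.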

There’s an interesting way of getting around the problem. We could just say that my “elementary reasoning” doesn’t actually apply to quantum theory – it’s a holdover of old, bad, classical ways of thinking about the world. We might, for instance, say that the kind of either-this-thing-happens-or-that-thing-does reasoning I was using above isn’t applicable to quantum systems. (Tom Banks, in his post, says pretty much exactly this.)

There are various ways of saying what’s problematic with this, but here’s a simple one. To make this kind of claim is to say that the “probabilities” of quantum theory don’t obey all of the rules of probability. But in that case, what makes us think that they are probabilities? They can’t be relative frequencies, for instance: it can’t be that 50% of the photons go down Beam 1 and 50% go down Beam 2. Nor can they quantify our ignorance of which beam the photon goes down – because we don’t need to know which beam it goes down to know what it will do next. So to call the numbers in the superposition “probabilities” is question-begging. Better to give them their own name, and fortunately, quantum mechanics has already given us a name: amplitudes.
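
One compact way to see the failure: write a₁, a₂ for the amplitudes attached to the two beams and a_{A|1}, a_{A|2} for the single-beam amplitudes to then reach A. Then

```latex
P(A) \;=\; \bigl|a_{1}a_{A|1} + a_{2}a_{A|2}\bigr|^{2}
\;=\; \underbrace{|a_{1}|^{2}\,P(A\mid 1) \;+\; |a_{2}|^{2}\,P(A\mid 2)}_{\text{law of total probability}}
\;+\; \underbrace{2\,\operatorname{Re}\bigl(a_{1}a_{A|1}\,\overline{a_{2}a_{A|2}}\bigr)}_{\text{interference term}},
```

and it is exactly the cross-term, which the ordinary probability calculus has no room for, that the interference experiment measures.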

But once we make this move, we’ve lost everything distinctive about the “state-as-probability” view. Everyone agrees that according to quantum theory, the photon has some amplitude of being in Beam 1 and some amplitude of being in Beam 2 (and, indeed, that the cat has some amplitude of being alive and some amplitude of being dead); the question is, what does that mean? The “state-as-probability” view was supposed to answer, simply: it means that we don’t know everything about the photon’s (or the cat’s) state; but that now seems to have been lost. And the earlier argument that something goes down both beams remains unscathed.

Now, I’ve considered only the most straightforward kind of state-as-probability view you can think of – a view which I think is pretty decisively refuted by the facts. It’s possible to imagine subtler probabilistic theories – maybe the quantum state isn’t about the probabilities of each term in the superposition, but it’s still about the probabilities of something. But people’s expectations have generally been that the ubiquity of interference effects makes that hard to sustain, and a succession of mathematical results – from classic results like the Bell-Kochen-Specker theorem, to cutting-edge results like the recent theorem by Pusey, Barrett and Rudolph – have supported that expectation.

In fact, only one currently-discussed state-as-probability theory seems even half-way viable: the probabilities aren’t the probability of anything objective, they’re just the probabilities of measurement outcomes. Quantum theory, in other words, isn’t a theory that tells us about the world: it’s just a tool to predict the results of experiment. Views like this – which philosophers call instrumentalist – are often adopted as fall-back positions by physicists defending state-as-probability takes on quantum mechanics: Tom Banks, for instance, does exactly this in the last paragraph of his blog entry.

There’s nothing particularly quantum-mechanical about instrumentalism. It has a long and rather sorry philosophical history: most contemporary philosophers of science regard it as fairly conclusively refuted. But I think it’s easier to see what’s wrong with it just by noticing that real science just isn’t like this. According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils, astrophysicists talk about stars so they can understand photoplates, virologists talk about viruses so they can understand NMR instruments, and particle physicists talk about the Higgs boson so they can understand the LHC. In each case, it’s quite clear that instrumentalism is the wrong way around. Science is not “about” experiments; science is about the world, and experiments are part of its toolkit.

73 thoughts on “Guest Post: David Wallace on the Physicality of the Quantum State”

  1. Frank Martin DiMeglio

    Our very thoughts are ultimately, to a limited extent of course, integrated and interactive with sensory experience (including gravity, inertia, and electromagnetism). Any complete and accurate explanation/description of physical phenomena/sensory experience — including vision — (at bottom) has to address this; and with this, of course, the completeness of thoughtful description, as it is related thereto, follows/pertains. The experience of the body cannot be avoided in any final/true/fundamental explanation, description, or examination of physics/sensory experience and thought/genius.

  2. Thank you! Thank you! Thank you! You have crystallized the vague discomfort that I had with Tom Banks’ essay. The difference between amplitude and probability distribution is at the root of why all the classical analogies don’t work.

    I’ll note that anyone who has bet on horse racing has an idea of probability rescaling, and that is not the mystery. The mystery arises because quantum horse races are represented by amplitudes.

  3. Pingback: Interpretation of Quantum Mechanics in the News | Quantum Mechanics Blog

  4. Science is not about experiments? Well, science is a method, and one of the steps in the method is experiment, isn’t it? So experiment is an important part of it.

    Not sure what to think about this wavefunction thing. For instance, this morning I was working from home when I suddenly started hearing a noise in the house; it turned out to be an old alarm set up to ring at noon. Since I am usually at work, I had never turned it off. Had I gone to the office, the alarm would have kept going on and on. But then I started thinking: if nobody had been home, would the alarm really have been triggered, or was it my presence that triggered it? According to quantum mechanics, the alarm would be in a superposition state, triggering and not triggering at the same time. However, I happened to hear the alarm. The thing is, I had no intention of hearing it (aka measuring it).

    It looks like the alarm’s superposition state exists independently, always triggering and not triggering, and it is only my sensory limitation that makes me see it one way or another. In other words, it seems the probability applied to me, not the alarm itself.

  5. Please forgive me if I offend with my naiveness (naivete?):

    “So, just by elementary reasoning – I haven’t even had to talk about probabilities – we seem to rule out the ‘state-as-probability’ rule.”

    Can one rule in (or rule out) such a thing based on words alone? I think I understand the appeal of using ‘elementary reasoning’ to properly understand certain things. But my understanding is that QM can be understood properly only in the language of mathematics, and so ruling something in or out regarding QM must make use of the math.

    Kevin

  6. My thought when reading this is to think of what we mean when we say photon. We have to think of photons in terms of events.

    A change of color at a point in some photopaper is an event we might associate with a photon. Why we call it a photon is because we have observed that under certain circumstances, we can finitely categorize certain types of events and associate them with things we call particles. For instance, we associate certain tracks in a bubble chamber with certain types of particles. We can do this because we recognize very distinct types of patterns.

    In practical terms, this is no different from one’s ability to identify and classify certain types of moves performed by an ice skater or dancer, or even certain plays executed by a football team on the field. None of these descriptions have meaning outside the context of human interpretation. Whether the patterns exist and obey some abstract relational ruleset is interesting, but our choice of description is completely arbitrary.

    In any case, in physics we can observe direct cause-and-effect relationships between certain events that lead to certain patterns. The instrumentalist view plays on this notion that what is evolving over time is not a physical thing, but rather some ability to predict the outcome of a causal chain. We sometimes might define something like the scattering matrix to keep all of the possible outcomes organized, and we might place some amplitude into each block in order to identify the relative likelihood of a particular pattern emerging after a particular event. However, we have to keep in mind that it is ultimately ourselves who have constructed such convenient mnemonics for our own purposes, and not nature for its purposes.

    If we want to get to what is “real” we have to think of things abstractly, much in the way that we think of groups as abstract entities, where it is the relationship between elements that define the group.

    So our determination of amplitude is a determination of the strength of the relationship between elements of what we think of as reality. Those elements have certain finite quantities that determine the shape of our more continuous view of the world. It’s the relationship between possible outcomes that we want to keep track of. So when we think of the cat, we want to think of how the relationship between the outcomes evolves, just as we want to think about how the relationship between two entangled photons evolves as they go off to distant corners, and not about how the photons evolve.

    So in this sense, the instrumentalist view might seem weak, but it actually is quite strong, because it recognizes that what we are interested in is not the physical object but the relationship between physical objects. That relationship is the hallmark of reality, and not vice versa.

  7. Is quantum theory with the infinite nature hypothesis actually the most fundamental theory of nature? Consider some of Edward Fredkin’s ideas on nature:
    (1) At some scale, space, time, and energy are discrete.
    (2) The number of possible states for every unit volume of space-time is finite.
    (3) There are no infinities, infinitesimals, or locally generated random variables.
    (4) The fundamental process of nature must be a simple deterministic digital process.
    http://en.wikipedia.org/wiki/Edward_Fredkin
    Consider 3 fundamental hypotheses:
    (1) Nature is finite and digital with Fredkin-Wolfram information underlying quantum information.
    (2) There are a finite number of alternate universes, and this number can be calculated using modified M-theory with Wolfram’s mobile automaton.
    (3) The maximum physical wavelength equals the Planck length times the Fredkin-Wolfram constant.
    http://en.wikipedia.org/wiki/Stephen_Wolfram

  8. Low Math, Meekly Interacting

    But photons aren’t anything like dinosaurs, so I fail to see why that or any of the other examples given are even relevant to the instrumentalist interpretation of quantum mechanics. I fail to see why agnosticism about processes or states (e.g. superpositions) that cannot be observed in the quantum realm (as opposed to the paleontological realm) is so untenable.

  9. #11: Dinosaurs are exactly like (several) photons. That’s the whole point of physics. If you just impose a distinction between dinosaurs and photons you’re back to the Copenhagen interpretation. Which, for all its flaws, is at least honest about putting in a sharp cutoff in this way.

  10. What if light really just expands out from its source, and particles and waves are effects of disturbances to this light? Quanta seem to be the amount absorbed by atoms, i.e. the basis of mass, so when encountering mass, light collapses to the point of being absorbed by individual atoms. On the other hand, when passing around things, such as going through those slits, it is disturbed, causing fluctuations and ripples, thus creating the impression of waves; but it isn’t that light is waves in some medium, rather light is the medium, and waves are the disturbance of this medium that are necessary to our observing it.
    As for probabilities, when we think of the passage of time, it is from past events to future ones, but the process creating time is one of continual change and action, such that it is the events which go from being in the future to being in the past. A future cat has the probability of being either dead or alive, but it is the actual event of its potential demise that determines its health, just as the actual event of our observing its state determines our level of knowledge. Consider the horse race: prior to its occurrence, there are multiple possible winners, but it is the actual events of the race which determine just which one really does win. It’s not that we travel the fourth dimension from yesterday to tomorrow, but that tomorrow becomes yesterday, because lots of things occur, most especially the rotation of the planet.

  11. Low Math, Meekly Interacting

    Part of the problem is that dinosaurs don’t appear to behave at all like photons in a double-slit experiment, and we will never, ever be able to observe a dinosaur interfering with itself. Isn’t that fact part of the interpretive debate? That we never perceive superpositions, but only infer their existence, and then almost exclusively from results gleaned from exquisitely sensitive apparatus harboring isolated microscopic objects? Doesn’t that lead us right to the measurement problem?

    I actually find decoherence (to the extent I understand it) the most satisfying, but anyone who claims such a preference is eventually confronted by someone who insists you acknowledge the existence of all the other possibilities out there somewhere, and maybe asks you to take a bet that involves some version of you on a decoherent branch of the wavefunction winding up deceased. I guess I prefer agnosticism to all that.

  12. I’m a practicing theoretical physicist, and I don’t understand all the confusion — please someone explain it to me.

    We already have a natural object in QM with a statistical interpretation, namely, the density matrix. And density matrices are the natural generalization of classical probability distributions. In classical mechanics, the probability distribution is over classical states, and in quantum mechanics, the density matrix’s eigenvalues are a probability distribution over quantum state vectors.

    If we take the view that a density matrix’s eigenvalues are a probability distribution over its eigenvectors, and regard those eigenvectors (which are state vectors) as real, physical possible states (just like we treat classical states underlying a classical probability distribution as real, physical possible states), then we never run into contradiction with observation. So what’s stopping us from taking that point of view?
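
    Here is a minimal numerical illustration of that reading, with an arbitrarily chosen toy qubit state (the numbers are illustrative, nothing more):

    ```python
    import numpy as np

    # Toy qubit density matrix: a mixture of |0> and |+> (arbitrary weights).
    zero = np.array([1, 0], dtype=complex)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    rho = 0.7 * np.outer(zero, zero.conj()) + 0.3 * np.outer(plus, plus.conj())

    # The eigenvalues are non-negative and sum to 1, so they can be read as a
    # probability distribution over the orthonormal eigenvectors. (Note that the
    # eigenbasis need not coincide with the states used to build the mixture.)
    probs, states = np.linalg.eigh(rho)
    print(probs, probs.sum())   # ~[0.119 0.881], 1.0
    ```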

    To say that state vectors themselves are statistical objects is to say that there are two levels of probability in quantum mechanics. But why give up the parsimony of having only one level of probability in QM if it’s not needed? And it’s not!

    When you make use of decoherence properly, you see that all probabilities after measurements always end up arising through density matrix eigenvalues automatically. And you automatically find that for macroscopic objects in contact with a realistic environment, the density-matrix eigenbasis is essentially always highly-classical-looking with approximately-well-defined properties for all classical observables — all this comes out automatically.

    So there’s no reason to insist on regarding state vectors as statistical objects. We can regard them as being as real and physical as classical states, even though for isolated, microscopic systems, they don’t always have well-defined properties for all naive classical observables — but why should weirdness for microscopic systems be viewed as at all contradictory? And again, in particular, the probabilities end up as density-matrix eigenvalues anyway, so why bother insisting on having a second level of probability at all?

    As for Schrodinger’s cat, the fact is that any realistic cat inside a reasonable-size non-vacuum environment is never going to stay in a weird macroscopic superposition of alive and dead for more than a sub-nanosecond, if that — its density matrix will rapidly decohere to classicality. The only way to maintain a cat (with its exponentially huge Hilbert space) in an alive/dead superposition for an observable amount of time is to place the cat in a near-vacuum at near-absolute zero, but then you can be sure it’s going to be dead.

    What about putting it in a perfectly-sealed box in outer space? Well, even in intergalactic space, the CMB causes a dust particle to decohere to classicality in far less than a microsecond. So it just doesn’t happen in everyday life — and if that’s the case, then why are we worried that it should seem counterintuitive?

    So the whole Schrodinger-cat paradox is a complete unphysical fiction and a red herring, unless you do it with an atom-sized Schrodinger-kitten — and that’s been done experimentally!

  13. There are several interesting things to note about David Wallace’s post, but let me first deal with his contention that QM probabilities should not be thought of as probabilities. Let me begin with observations at a fixed time.

    For every quantum state and every Hermitian operator, the math of QM allows you to calculate tr(D A^n), where D is the density matrix corresponding to the state, A is the operator, and n is an arbitrary positive integer. From these quantities you can extract a bunch of positive numbers, summing to one, which QM claims are the probabilities for this operator to take on each of its possible values.
    The interpretation of this math is that if you prepare the system repeatedly, in the state D, and then measure A by coupling to a macroscopic system whose pointer points to a different value for each of the different eigenstates of A, then the frequency of observation of a particular value will be equal to the positive number extracted from the calculation. These predictions have, of course, been tested extensively, and work.
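
    In symbols, if A = Σᵢ λᵢ Πᵢ is the spectral decomposition of the operator, this bookkeeping reads

    ```latex
    \operatorname{tr}\bigl(D A^{n}\bigr) \;=\; \sum_{i} p_{i}\,\lambda_{i}^{\,n},
    \qquad
    p_{i} \;=\; \operatorname{tr}\bigl(D\,\Pi_{i}\bigr) \;\geq\; 0,
    \qquad
    \sum_{i} p_{i} \;=\; \operatorname{tr} D \;=\; 1,
    ```

    so the moments tr(D A^n), for all n, pin down the p_i: the claimed probabilities for A to take the value λ_i.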

    Many of the things one normally talks about in these discussions involve probabilities of HISTORIES rather than of observations at a fixed time. I can’t go into the details, but Gell-Mann and Hartle, in a series of beautiful papers, have shown that a similar sort of interpretation of mathematical quantities in QM in terms of probabilities of histories is valid when “the histories decohere”. The reason histories are more complicated is that they involve measurements of quantities at different times, which don’t commute with each other.
    However, it’s important to note that Gell-Mann and Hartle use only the standard math of QM (I’m not talking about their attempt to generalize the formalism) and that their interpretation follows from the probability interpretation at fixed time by rigorous mathematical reasoning.
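
    For readers who want the formulas: in the Gell-Mann–Hartle framework a history α is a time-ordered string of Heisenberg-picture projectors, and, in compressed form,

    ```latex
    C_{\alpha} \;=\; \Pi_{\alpha_{n}}(t_{n}) \cdots \Pi_{\alpha_{1}}(t_{1}),
    \qquad
    D(\alpha,\alpha') \;=\; \operatorname{tr}\bigl(C_{\alpha}\,\rho\,C_{\alpha'}^{\dagger}\bigr).
    ```

    When the off-diagonal entries D(α, α′), α ≠ α′, (approximately) vanish, the histories “decohere” and the diagonal entries p(α) = D(α, α) obey the ordinary probability calculus.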

    Given that the math suggests a probabilistic interpretation of fixed-time expectation values, which actually reproduces experimental frequencies, I can’t understand a statement that I shouldn’t interpret these things as probabilities just because they don’t satisfy some a priori rule that a philosopher derives from “pure thought” or “elementary reasoning”. The whole point of my post is that “elementary reasoning” is flawed because our brains are not subtle enough. The rigorous mathematical formulation of “elementary reasoning” is mathematical logic, and I think it’s quite interesting (and IMHO the most interesting thing in this rather stale discussion) that that formalism contains the seeds of its own destruction.

    The other interesting thing about David’s post is that it points up the drastic difference between the modes of thought of theoretical physicists and philosophers.

    Theoretical physics has three parts. It begins with the assumption that there’s a REAL WORLD out there, not just activity going on in our consciousness, and that the only way to access that real world is by doing experiments. I mean experiment in the most general sense of the term. A macroscopic trace left on a distant asteroid in the Andromeda galaxy is an experiment for an observer on Earth, if it is in principle possible for some advanced civilization in the distant future to send out a spacecraft or bounce a beam of light off the asteroid to bring back information about that trace.

    The second part of theoretical physics is mathematics.
    We build a mathematical model of the world and compare it to experiment according to some well defined rules. In QM these rules are: calculate the probabilities of different events using the density matrix formula, and compare those probabilities to frequencies in repeated experiments on the same quantum state.
    In quantum cosmology, a subject which is still under construction, we’ll never be able to repeat the experiment so we have to use the more subjective meaning of probability.

    Finally, there’s the story we tell about what these results mean and how they relate to our intuition. It’s a very important part of the whole game, but we’ve learned something very interesting over the years. The story can change drastically into another story that seems inconsistent with the first one, even when the math and the experiments change in a controlled way, whose essence is contained in the word “approximation”. In math an exponential function can be approximated by a linear one when its argument is small enough. In experiment an exponential behavior can look linear when we don’t measure large values of the control parameter. That is, these two features of our framework can change and they change together in a controlled way, with a quantitative measure of the error of approximation.

    Stories, however, change in a drastic manner. Newton’s absolute space and absolute time, Galileo’s velocity addition rule, etc., if they’re taken as a priori intuitively obvious laws, simply do not admit the possibility of relativity. When I try to explain relativity to laymen, many of them have a hard time understanding how it could be possible that you could run to catch up with something and still have it recede at the same speed.
    It’s easy for someone with a little math background to understand that the correct law of addition of rapidities for parallel velocities becomes Galileo’s velocity addition law when an exponential of rapidity is replaced by a linear approximation.
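
    In symbols, these are the standard special-relativity formulas: with rapidity φ = arctanh(v/c), parallel rapidities simply add, so

    ```latex
    \varphi_{12} = \varphi_{1} + \varphi_{2}
    \;\Longrightarrow\;
    \frac{v_{12}}{c} = \tanh(\varphi_{1} + \varphi_{2})
    = \frac{v_{1}/c + v_{2}/c}{1 + v_{1}v_{2}/c^{2}}
    \;\approx\; \frac{v_{1} + v_{2}}{c} \quad (v_{1}, v_{2} \ll c),
    ```

    and Galileo’s rule is just the linear approximation tanh φ ≈ φ.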

    Philosophers are committed to understanding everything in terms of the story, in terms of words. This approach can work even for relativity, whenever classical logic works. But whatever philosophers call it (“elementary reasoning”, “common sense”), they’re committed to a world view based on a denial of the essence of QM, because QM is precisely the abandonment of classical logic in favor of the more general, inevitably probabilistic formalism, which becomes evident when one formulates logic in a mathematical way. Just as we use the math of the exponential function to explain why the velocity addition law looks so obvious, we use the math of decoherence theory to explain why using “elementary reasoning” is a flawed strategy.

    Let me end by recommending to you some words of Francis Bacon, which were quoted in an article by Stanley Fish in the NYT some years ago. Unfortunately, I’ve misplaced the computer file with my copy. Bacon, writing at the dawn of experimental science, complained about the ability of men to twist the meaning of words, which made getting the truth by argument impossible. He argued that the only way to get to actual truth was to do reproducible experiments, which could be done independently by different researchers, in order to assess their validity. Modern theoretical physics has another arrow in its quiver, which is mathematical rigor.
    Words, the Story of theoretical physics, are still important, but they should not be allowed to trump the solid foundations of the subject and create unjustified confusion and suspicion of error where none exists.
    It’s sad, but the architecture of our consciousness probably will not allow us to come up with an intuitive understanding of microscopic quantum processes. But mathematics gives us a very efficient way of describing it with incredible precision. To me, there’s every indication that QM is a precise and exact theory of everything, and will survive the incorporation of gravitational interactions, which has so far eluded us apart from certain mathematical models (the theory formerly known as String) with only a passing resemblance to the Real World. Attempts to force it back into the straitjacket of “simple reasoning” are misguided and have not led (and IMHO will not lead) to advances in real physics.

    There are a number of other posts I’d like to respond to, particularly in reference to the Bohm-de Broglie-Vigier theory and the GRW theory, but I have to admit I’m blogged out. A lot of people have written really incisive things (mostly, of course, supporting my point of view :))
    and a few others have written silly things that I’m tempted to respond to (I certainly don’t consider the things I’ve actually responded to to be silly), but I have to get back to my real research and my life. Does anyone know how to fix LaTeX files so the new arXiv.org robots will accept them?

  14. For me as an outsider, there is something circular in the position of the “probabilistic” view:
    – we can’t make sense of QM based on our intuitions of “reality” because our brains are not equipped for (or not trained by evolution for) this
    – we should believe what the math is saying and not try to interpret it through our intuitions of “reality”
    – math was invented (discovered?) by the human brain, so it seems that our brains are equipped for it, or were trained enough for it. We come up with something because we can. If math is platonic and we are only discovering it bit by bit (so we cannot say that all of it is comprehensible to us; maybe there are parts of it out there that are not), there is a part of “reality” that is non-probabilistic (math) after all.
    – so it seems that we can comprehend math but not what it is saying in terms of our intuition of “reality”. Shouldn’t math be part of this intuition?

  15. “According to instrumentalism, palaeontologists talk about dinosaurs so they can understand fossils.”

    Perhaps David Wallace can kindly spend a few words explaining what would be different if we understood dinosaurs and earth’s long geological history to be concepts created to understand fossils, instead of being as real as what we remember in personal experience. Apart from psychological discomfort, that is.

  16. Let me comment a bit on what David seems to be saying. I’m sure he can defend himself quite capably, but a lot of people seem to be misreading and missing the point pretty badly.

    Everyone in the room more or less agrees on the experimental predictions of quantum mechanics, and how to obtain them; the only issue in dispute is the philosophical one about whether it’s better to think of the quantum state as real/ontic/physical or simply a tool for calculating probabilities. It’s certainly legitimate to not care about this issue, but I think it’s important for physics as well as philosophy, as it helps guide us as we try to push beyond our current understanding.

    Tom’s complaints about “telling stories” and “elementary reasoning” seem to be very much beside the point. The point is that David gave us a particular piece of elementary reasoning, meant to illustrate why it is useful to think of the wave function as really existing, rather than just a crutch for calculating probabilities. It’s okay to take issue with that bit of reasoning, but not with the very idea of using reasoning to understand your theory. Neither David nor anyone else is making appeals to “common sense” or “pure reason,” so that’s a pretty transparent straw man.

    Of course quantum probabilities are probabilities. But the quantum wave function is not a probability — it’s an amplitude. David’s point (or my understanding thereof) is that the wave function serves the same role in explaining where the photon hits the plate as dinosaurs serve in explaining where fossils come from — namely, you can’t do without it. It’s a crucial part of our best explanation, and therefore deserves to be called “real” (or “physical,” if you want to be a bit more precise) by any sensible criterion. You can’t make sense of the outcome of an experiment without believing that something really goes down the different paths of the interferometer.

    There’s no important difference here in the purposes or methods of scientists and philosophers — we’re all trying to understand nature by fitting sensible models/theories/stories (whatever you want to call them) to the empirical data. Again, it’s okay not to care, but the empirical data seem to indicate that people do care, since they can’t stop talking about it (including repeatedly insisting that they don’t care).

    I completely agree that the more interesting part of Tom’s original post was the claim about the inevitability of QM (whether one agrees with that point or not). But if you start discussing that and end by saying that the wave function isn’t real and appreciating this fact answers all the interpretational puzzles of quantum mechanics, you can’t be surprised that the former discussion generates little response. 🙂

  17. As I wrote in the other thread, I wish people who advocate the “realist” stance could be a little more precise about what they think this term might mean in this context. It is clear from the exchanges here that there are different versions of that term, and consequently the wave function might or might not “exist” depending on precisely what you mean by it. Until this notion is formalized, or at least clarified, including an organized discussion of whether or not it assumes classicality, I tend to agree that this seems like an attempt to convey precise and properly formulated statements in a less precise language. That this would lead to confusion is not all that surprising.

  18. Philosophical questions of this sort are very interesting. Still, direct physical questions are perhaps more important. I have such a question, though it is somewhat unrelated: Can photons attain any frequency?

    My question isn’t really about lower and upper bounds, but let’s start there.

    Upper bound: Since the energy of a photon is E = hν, and since the Universe presumably doesn’t have infinite energy, there clearly must be an upper bound on a photon’s frequency. Do we know what this bound is?

    Lower bound: I have no information to guide me here except to say that a photon with a frequency of 0 probably can’t be considered a photon. Do we have any better lower bound?

    And now to my real question. A photon can be created by an electron in an atom falling from a high energy state to a low energy state. But these states and their energy levels are discrete, and therefore no matter how many types of atoms or molecules you have, the photons thus released can only cover a finite, discrete set of frequencies.

    OK, but there are other ways to make photons, such as nuclear reactions. I’m not too familiar with these reactions, so can you help me here as to whether they can produce a photon of any desired frequency?

    Lastly, we have the expansion of the Universe. Photons sent off when the Universe was young have had their frequencies reduced due to spacetime expansion. Does this mean these photons’ frequencies, as received here on Earth, now cover infinitely many different values? I personally don’t see how.

    Here’s hoping you will answer my question! 🙂

  19. As a chemist I was pleased to see Tom Banks in his guest post use a chemical QM example. Working in industry sometimes I use QM calculations to try to understand chemical experiments. However I think everyone in this discussion is trying to avoid the elephant in the room:

    Tom Banks Says:
    “Many of the things one normally talks about in these discussions involve probabilities of HISTORIES rather than of observations at a fixed time. I can’t go into the details, but Gell-Mann and Hartle, in a series of beautiful papers, have shown that a similar sort of interpretation of mathematical quantities in QM in terms of probabilities of histories is valid when “the histories decohere”.

    Matt Says:
    “If we take the view that a density matrix’s eigenvalues are a probability distribution over its eigenvectors, and regard those eigenvectors (which are state vectors) as real, physical possible states (just like we treat classical states underlying a classical probability distribution as real, physical possible states), then we never run into contradiction with observation.”

    Matthew F. Pusey et al (arXiv:1111.3328v1) Say:
    “In some versions of quantum theory, on the other hand, there is no collapse of the quantum state. In this case, after a measurement takes place, the joint quantum state of the system and measuring apparatus will contain a component corresponding to each possible macroscopic measurement outcome. This is unproblematic if the quantum state merely reflects a lack of information about which outcome occurred. But if the quantum state is a physical property of the system and apparatus, it is hard to avoid the conclusion that each macroscopically different component has a direct counterpart in reality.”

    Sean Says:
    “But the quantum wave function is not a probability — it’s an amplitude. David’s point (or my understanding thereof) is that the wave function serves the same role in explaining where the photon hits the plate as dinosaurs serve in explaining where fossils come from — namely, you can’t do without it. It’s a crucial part of our best explanation, and therefore deserves to be called “real” (or “physical,” if you want to be a bit more precise) by any sensible criterion. You can’t make sense of the outcome of an experiment without believing that something really goes down the different paths of the interferometer.”

    The elephant in the room is of course that if the state vector is real then the state vector + decoherence gives rise to a set of HISTORIES (Tom Banks’ capitalization) that are all equally real or as Pusey puts it “each macroscopically different component has a direct counterpart in reality.” The Everett many-worlds interpretation is alive and kicking even if Tom Banks doesn’t like it.

  20. I think Tim Maudlin’s comment #6 at the related “Tom Banks” thread was very apt: loss of interference does not really explain why we don’t see superpositions of both states (I would say, messy superpositions then, not “one outcome”). In other words, to me, the degree of interference determines what *kind* of statistics we see; that we see statistics and single outcomes at all has to be derived elsewhere. Interference does not typically determine *whether* we find them. Note the irony: preserved interference, as in the double slit, *does* produce statistics that model the interfering waves; it does not lead to literal continuing wave amplitudes, or to our finding one photon multiply realized all over the screen! (BTW, note also that in MWI we really are violating conservation laws. No matter the excuse for why we “can’t see” (to quote that sloppy phrase) the other instantiations of a single (!) particle, MWI deviates, despite the enthusiasts’ protestations, from genuine Schroedinger evolution as soon as the sum total of mass-energy claimed by all “observers” exceeds the original value.)
    http://tyrannogenious.blogspot.com

  21. Moshe, here’s my try. In the two-slit experiment with photons, we have no difficulty in saying that the two slits exist. Does the photon wave function exist in the same sense as whatever is providing the two slits? Or are both the photon wavefunction and whatever provides the two slits merely computational devices? If the photon wavefunction is a computational device and the two slits – ultimately composed of quantum objects – are real, how does the transition from computational device to reality take place?

  22. Matt #15: But decoherence DOESN’T turn a superposed cat into a dead or alive cat with certain probabilities; that’s wavefunction collapse you’re talking about. Decoherence turns a superposed cat into a superposed cat that we can no longer do interference experiments with.
    Without any additional ingredients, the linearity of the wave equation guarantees that everything stays equally superposed forever.

    This way you end up at the many-worlds interpretation, which has been terribly mis-served by its marketing. I never liked it because of the wasteful proliferation of universes, and the arbitrary rules about when you do and don’t branch. Except, those are concerns brought on by terrible popular descriptions (not helped by its name). There’s only one universe, with all the different possibilities superposed but non-interfering (use of decoherence explains why that is). I’m still not sure I’m comfortable with it (more of an objective collapse guy) but at least, when described properly, it’s minimal and self-consistent.
