Quantum Hyperion

One of the annoying/fascinating things about quantum mechanics is the fact that the world doesn’t seem to be quantum-mechanical. When you look at something, it seems to have a location, not a superposition of all possible locations; when it travels from one place to another, it seems to take a path, not a sum over all paths. This frustration was expressed by no less a person than Albert Einstein, quoted by Abraham Pais, quoted in turn by David Mermin in a lovely article entitled “Is the Moon There when Nobody Looks?”:

I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I looked at it.

The conventional quantum-mechanical answer would be “Sure, the moon exists when you’re not looking at it. But there is no such thing as ‘the position of the moon’ when you are not looking at it.”

Nevertheless, astronomers over the centuries have done a pretty good job predicting eclipses as if there really were something called ‘the position of the moon,’ even when nobody (as far as we know) was looking at it. There is a conventional quantum-mechanical explanation for this, as well: the correspondence principle, which states that the predictions of quantum mechanics in the limit of a very large number of particles (a macroscopic body) approach those of classical Newtonian mechanics. This is one of those vague but invaluable rules of thumb that was formulated by Niels Bohr back in the salad days of quantum mechanics. If it sounds a little hand-wavy, that’s because it is.

The vagueness of the correspondence principle prods a careful physicist into formulating a more precise version, or perhaps coming up with counterexamples. And indeed, counterexamples exist: namely, when the classical predictions for the system in question are chaotic. In chaotic systems, tiny differences in initial conditions grow into substantial differences in the ultimate evolution. It shouldn’t come as any surprise, then, that it is hard to map the predictions for classically chaotic systems onto average values of predictions for quantum observables. Essentially, tiny quantum uncertainties in the state of a chaotic system grow into large quantum uncertainties before too long, and the system is no longer accurately described by a classical limit, even if there are large numbers of particles.
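
(For the programmatically inclined, here is a toy illustration of that sensitivity, using the Chirikov standard map, a textbook model of a kicked rotor that is loosely analogous to a tumbling body. The kick strength and starting values below are arbitrary choices, not fitted to any real system.)

```python
# Sensitive dependence on initial conditions in the Chirikov standard map.
import math

def standard_map(theta, p, K=2.0, steps=50):
    """Iterate the standard map; return the sequence of angles theta."""
    traj = []
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        traj.append(theta)
    return traj

a = standard_map(1.0, 0.5)
b = standard_map(1.0 + 1e-10, 0.5)   # identical except for 1 part in 10^10

for n in (0, 10, 20, 30, 40):
    print(n, abs(a[n] - b[n]))       # the separation grows roughly exponentially
```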

Some years ago, Wojciech Zurek and Juan Pablo Paz described a particularly interesting real-world example of such a system: Hyperion, a moon of Saturn that features an irregular shape and a spongy surface texture.

The orbit of Hyperion around Saturn is fairly predictable; happily, even for lumpy moons, the center of mass follows a smooth path. But the orientation of Hyperion, it turns out, is chaotic — the moon tumbles unpredictably as it orbits, as measured by Voyager 2 as well as Earth-based telescopes. Its orbit is highly elliptical and resonates with the orbit of Titan, which exerts a torque on the moon’s spin axis. If you knew Hyperion’s orientation fairly precisely at some time, it would become completely unpredictable within a month or so (the Lyapunov time is about 40 days). More poetically, if you lived there, you wouldn’t be able to predict when the Sun would next rise.

So — is Hyperion oriented when nobody looks? Zurek and Paz calculate (not recently — this is fun, not breaking news) that if Hyperion were isolated from the rest of the universe, it would evolve into a non-localized quantum state over a period of about 20 years. It’s an impressive example of quantum uncertainty on a macroscopic scale.
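
(Here is a back-of-envelope version of that kind of estimate: chaos stretches an initially ħ-sized angular uncertainty by a factor of e every Lyapunov time, so the time to reach order-one uncertainty is the Lyapunov time multiplied by the logarithm of the classical angular momentum in units of ħ. The figures below are rough approximations for Hyperion, used purely for illustration; this is a sketch of the logic, not the actual Zurek-Paz calculation.)

```python
import math

hbar = 1.05e-34                      # J*s
mass = 5.6e18                        # kg (Hyperion, approximate)
radius = 1.35e5                      # m (mean radius, approximate)
I = 0.4 * mass * radius**2           # moment of inertia, treated as a sphere
omega = 2 * math.pi / (21 * 86400)   # rad/s, ~3-week tumble (rough)
L = I * omega                        # classical angular momentum, J*s

t_lyap = 40 * 86400                  # s, the ~40-day Lyapunov time

# delta(t) ~ (hbar/L) * exp(t / t_lyap); solve for delta ~ 1 radian:
t = t_lyap * math.log(L / hbar)
print(t / (365.25 * 86400), "years") # ~15 years: the right ballpark
```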

Except that Hyperion is not isolated from the rest of the universe. If nothing else, it’s constantly bombarded by photons from the Sun, as well as from the rest of the universe. And those photons have their own quantum states, and when they bounce off Hyperion the states become entangled. But there’s no way to keep track of the states of all those photons after they interact and go their merry way. So when you speak about “the quantum state of Hyperion,” you really mean the state we would get by averaging over all the possible states of the photons we didn’t keep track of. And that averaging process — considering the state of a certain quantum system when we haven’t kept track of the states of the many other systems with which it is entangled — leads to decoherence. Roughly speaking, the photons bouncing off of Hyperion act like a series of many little “observations of the wavefunction,” collapsing it into a state of definite orientation.
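
(A two-state toy model makes the effect easy to see: write a superposition of two orientations as a density matrix, and let the untracked photons exponentially suppress the off-diagonal interference terms. The decoherence rate below is made up; only the qualitative behavior matters.)

```python
import numpy as np

rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # equal superposition
gamma = 5.0   # decoherence rate, arbitrary units (invented for illustration)

for t in (0.0, 0.5, 1.0, 2.0):
    rho_t = rho.copy()
    decay = np.exp(-gamma * t)
    rho_t[0, 1] *= decay          # interference terms die off...
    rho_t[1, 0] *= decay
    print(t)
    print(np.round(rho_t, 4))     # ...while the diagonal probabilities stay put
```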

So, in the real world, not only does this particular moon (of Saturn) exist when we’re not looking, it’s also in a pretty well-defined orientation — even if, in a simple model that excludes the rest of the universe, its wave function would be all spread out after only 20 years of evolution. As Zurek and Paz conclude, “Decoherence caused by the environment … is not a subterfuge of a theorist, but a fact of life.” (As if one could sensibly distinguish between the two.)

Update: Scientific American has been nice enough to publicly post a feature by Martin Gutzwiller on quantum chaos. Thanks due to George Musser.


95 thoughts on “Quantum Hyperion”

  1. Lawrence, it is still your brain that detects the ping, and it must therefore be the case that there exists a quantum state of your brain that corresponds to having heard the ping. Now, you never find yourself in a superposition between two possible experimental outcomes like hearing a ping or not hearing a ping. We then have to conclude that while your brain can be in the state

    |ping heard) + |no ping heard)

    You always find yourself in either |no ping heard) or in |ping heard).

    So, there exist basis states in Hilbert space for your brain that correspond to well-defined mental experiences.
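
    In toy form: sampling the Born rule for this state, with arbitrarily chosen amplitudes, always yields one definite outcome per run, never a blend:

    ```python
    import random

    # Toy Born-rule sampling for a|ping heard) + b|no ping heard).
    # The amplitudes are arbitrary illustrative numbers with a^2 + b^2 = 1.
    a, b = 0.6, 0.8
    p_ping = a ** 2

    outcomes = ["ping heard" if random.random() < p_ping else "no ping heard"
                for _ in range(10)]
    print(outcomes)   # ten definite outcomes, ~36% of them "ping heard"
    ```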

  2. How come the “detector” D is what collapses a wave function (or whatever it is), and not other things that particles come in contact with? (For LBC and anyone interested.) For example, consider a photon reaching a beam splitter. We normally imagine the wave is “split” by the BS instead of the photon deciding at the BS to go one way or the other – because we can get interference, e.g. in a M-Z interferometer (like all A-channel hits). But if the photon localizes at a D, why wouldn’t it do so at the half-silvered interface? It can’t be because the BS has no chance of absorbing the photon. Splitters are imperfect, and we can even design one for, say, 20-40-40 performance (absorption and split). So some photons are absorbed inside the silver, but others continue on as split waves – yet if they later hit a fully absorbing detector surface similar to the BS silver, they “hit” at some exclusive spot. Why? And like I said, the detectors might be many km apart; no one has ever dealt with the implications regarding “interaction” theories.
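
    For concreteness, the standard amplitude bookkeeping for a balanced Mach-Zehnder does reproduce the “all A-channel hits” behavior (a minimal sketch; sign and phase conventions vary, and the two arms are assumed exactly equal):

    ```python
    import numpy as np

    BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # one 50/50 beam splitter

    photon = np.array([1, 0])   # amplitude 1 in input port A, 0 in port B
    out = BS @ (BS @ photon)    # two splitters, equal path lengths in between

    print(np.abs(out) ** 2)     # -> [0., 1.]: every photon exits one port
    ```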

    I am suspicious of solution schemes like decoherence for such reasons, as well as the problems I brought up before: e.g., when the emission time itself (like radioactive decay) is uncertain during a long period; if we arrange distant separation and isolation of mutually exclusive detection events, why do the waves ever even knot up, so to speak, at all if they stay waves; etc. (none of them, IMHO, adequately addressed in comments here or anywhere else I have looked). Just consider for example the issue of coherent superpositions versus mixtures: mathematically, wave amplitudes have a given unique value (are not “vague”, at least in any math we have) and just add up directly. There is no actual distinction between superpositions and mixtures; it’s just a way of talking about combining unknown WFs or the time-averaged sort of outcome we get, etc. I mean, you can’t write the distinction as a wave description because that requires simply adding amplitudes. So, we can’t really “have” a mixture of x and y linear polarized versus a CS; the “mixture” is just a shorthand for our not knowing how the two are combined, what their phase is, etc. We just don’t understand, period, why and how systems of waves (which ought to just interact in a common space, and stay waves forever, just like classical waves on water) end up suffering “collapse” episodes during “measurement” – and neither term in quotes can even be defined in terms of the maths of waves.

    Furthermore, I repeat: you can’t just blithely refer to “probabilities” connected to the waves; that is just something each of us observes in our unique outcome that we actually experience. That is the very thing needing explanation in the first place; it is a special interruption of pure wave evolution that nature (for real) and we (in the math) just “put in by hand”. It can’t be used as part of a circular argument to explain itself later in (IMHO) phony decoherence mumbo jumbo. Really, if the wave interactions on their own were enough to explain what happened, why didn’t earlier quantum physicists realize that? Why didn’t/don’t runs of wave evolution, even with all the extra influences, directly exhibit the collapse to specific points using the math applying to “wave evolution” itself? Instead, they need special convoluted machinations, contrived arguments with unexplained and undiscovered other “worlds” for things to happen in, and suspect phrases like “collapse appears to happen” (sophistry alert!).

    If you think the WF is just a way we calculate etc. then what do you think is the nature of what really goes through space from place to place, as “particles/waves” are generated and absorbed? Well?

    PS: Lawrence, did you work on the Wikipedia “Decoherence” article? I’m curious.

  3. Lawrence B. Crowell

    Count Iblis on Oct 31st, 2008 at 9:47 am wrote:
    Lawrence, it is still your brain that detects the ping and it must therefore be the case that there exists a quantum state of your brain that corresponds to having heard the ping.
    ——————–

    As far as is known, the brain is a purely macroscopic or classical system. The temperature of the brain is too high for quantum coherence, and action potentials and the activities of molecules, such as ATP → ADP conversion by kinases, are purely thermodynamic. Until demonstrated otherwise I don’t think there are quantum events in brains which correspond to quantum measurements.
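
    To put a rough number on “too high for quantum coherence”: a minimal sketch of the thermal de Broglie wavelength at body temperature, taking a ~100 kDa protein as an assumed example mass:

    ```python
    import math

    h = 6.626e-34   # J*s
    k = 1.381e-23   # J/K
    T = 310.0       # K, roughly body temperature
    m = 1.7e-22     # kg, ~100 kDa protein (an assumed round number)

    lam = h / math.sqrt(2 * math.pi * m * k * T)   # thermal de Broglie wavelength
    print(lam, "m")   # ~1e-13 m: thousands of times smaller than the molecule
    ```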

    Lawrence B. Crowell

  4. Wow. To those of you writing civilly about the meat and gristle you work with professionally (and putting up with people’s editing and thinking mistakes that go with typing into this sort of little box here), thanks!

    I like Lawrence B. Crowell’s take on things in this thread; in particular at comment 75 I think he is saying that we use formal tools to say as much as we can, as confidently as we reasonably can.

    I have a few more stupid questions.

    Count Iblis re 76 – I have trouble with the idea of being *certain* that a brain is/was in one state or the other. If we look retrospectively at |no ping heard> can we really be sure that the experimenter wasn’t momentarily distracted from the machine that goes ping? Or by her tinnitus? If she writes down the observation or non observation of ping in a log book, is that log book really reflective of reality at that time? Isn’t writing down “|ping heard>” really saying “I am reasonably sure I heard a ping moments ago”?

    I think adding in ears, brains and hands just introduces sources of error and uncertainty, and am tempted to think they sometimes get added into conversations about interpretation in part because their intrinsic physical complexity is distracting. Am I missing a good and obvious reason why they should be involved in recording the detector state? Are brains still part of collapse/decoherence/einselection when reading a more reliable, mechanical recording of the detector output at some point hours, days or decades after the experiment?

    Part of Neil B @ 77 seems to be asking about something similar, and somehow in my head this fits with the “we might instead ask why there is a classical world at all” in Otis @ 40 and LBC @ 59. Can one account in the same way for both an experiment which involves small numbers of particles in a small spatial volume, with careful attention to gravitational potential energies and the like AND observations or experiments involving large spatial distances and media that may range from the lithosphere to inter-galactic space? I am thinking less of quantum cosmology than of something like a distant (>> kpc) X-Ray source that reliably produces a small fraction of a Crab that (thanks to a detector and some audio gear) produces a lot of “pings” within range of our hearing when the X-Ray source is above our detector. At a given time we expect a “ping” and write down |ping heard> if we hear it and |ping not heard> if we don’t. The recorded result obviously represents something real that happened inside the body of the person writing it down, but can we say anything stronger than that it *probably* represents information about a distant particle event and the geodesic the event’s photon (may have) followed?

  5. Lawrence B. Crowell

    Neil B., no I didn’t write the Wikipedia entry on this topic. I have done a little editing here and there on Wiki-p, but nothing extensive.

    There were some ideas about “quantum brains” 10 years ago or more. Penrose sort of got this idea going, and I think the idea is probably flawed. We might suppose that if the brain were quantum mechanical, then since our eyes are really extensions of the brain we would be looking at the world through an optical interferometer. The effective aperture of the eyes would be equal to the distance between them. So in effect we would see the world through a telescope of sorts. We could see the rings of Saturn with our bare eyes. Of course that is an extreme case, but it makes a point. The brain is a warm system which is too messy for coherent wave functions to be running around.

    Lawrence B. Crowell

  6. The link to the article, “Is the Moon There when Nobody Looks?”, requires us to log in. Is there any way I can access it?

  7. Lawrence, I agree that the brain is typically not in a coherent state. But quantum mechanics applies perfectly well to non-coherent states. Since the world is described by quantum mechanics and not classical mechanics, we should not use classical physics when formulating fundamental concepts.

    Of course, the brain will in practice behave in a classical way. But since I am whatever my brain is computing and that computation must go into an entangled superposition with the environment like the “ping” or “no ping” case mentioned above, we must address the issue of the superposition.

    This simply leads to preferred basis states of the form:

    |my exact mental experience)

    These basis states form an incomplete orthogonal set of basis vectors for the brain. You would expect that there are many different brain states corresponding to exactly the same mental experience.

    Brody: I agree that mental experiences do not always give a perfect description of reality. All I’m saying is that since mental experiences exist and since the world is quantum mechanical, the mental experiences correspond to vectors in Hilbert space, even if you see something in a dream. 🙂

    To make my point in a different way: The fact that the system you are observing has already decohered before you make your observation does not mean quantum mechanics does not apply to it.

    You are part of the environment too, so you will decohere too, i.e. you will also go into the entangled superposition. Now, as long as you don’t know the experimental result, your mental state has NOT decohered with the rest of the universe, even though your brain will have decohered.

    If your mental state were to decohere before you are aware of the measurement, then you could not exclude having psychic powers: when standing in front of the apparatus and blindfolded, you could perhaps correctly guess the result.

    If we dismiss such psychic powers, then we must assume that in the complicated entangled superposition of apparatus, your brain, and the rest of the universe, your mental state factors out before you take a look.

  8. If I understand Coleman-Mandula correctly, then an information field would be viewed as a trivial connection between “space-time and internal symmetries”.

    This is what Lisi was thinking about, I believe. However, what he failed to realize is that it is trivial because there are likely to be infinite numbers of ways one could make those connections, and it would be highly dependent on the coordinate system you chose to use (it would be relative).

    When we reach a certain threshold energy density it is likely that things that are trivially connected in our vacuum state are non-trivially connected, i.e., supersymmetric.

  9. Iblis @ 82:

    Now, as long as you don’t know the experimental result, your mental state has NOT decohered with the rest of the universe, even though your brain will have decohered.

    Woah, what? How is one’s mental state physically separable from one’s brain?

    Also, what about those brains who simply aren’t likely to encounter the result — perhaps they aren’t even aware (in the conventional human-scale sense) that the experiment has even taken place? They don’t know the experimental results because they have not read about them, they weren’t there, etc. Okay, they may become “aware” in a physical (as opposed to common terminology) sense if they are in the forward light cone of the experiment, but is that awareness meaningful in a physical sense other than with respect to bounds on the earliest possible time at which that awareness could take place? (I think quantization implies that not all observers in the future light cone of a finite-scale event will receive photons or equivalents from that event, right?) If the information carrier goes from being fast moving particles (a detector with a blinking light across a room) to slow moving particles (a detector that sends a “ping” wave through a room of air) to slow moving macroscopic objects (journals, or even textbooks), is it useful to talk about the superpositions of the brains of the people who may or may not eventually receive that information?

    Since the world is described by quantum mechanics and not classical mechanics

    All I’m saying is that since mental experiences exist and since the world is quantum mechanical

    These are very strong statements.

    the mental experiences correspond to vectors in Hilbert space, even if you see something in a dream

    The fact that the system you are observing has already decohered before you make your observation does not mean quantum mechanics does not apply [to it].

    These are much less strong statements, and I would have no trouble with them if they were stated more like: “QM can be used to describe and analyse these systems formally in a useful way.”

    If in the two strong statements quoted above you meant: “QM can be used to describe and analyse these systems formally in a way that is more useful than classical mechanics”, I would feel more comfortable than with a blanket claim that classical mechanics should be considered invalid and fully obsolete. I still don’t think I’d agree, though — isn’t statistical mechanics useful here too?

    Do you really, literally, mean that QM is fully and/or uniquely in correspondence with nature? Or do you mean that QM is such an accurate lens with which to study natural processes that it should be used even when it seems more awkward than other formal models?

    One of the problems here again revolves around the nature of the human brain and sensory organs when introduced into this sort of discussion. I’m with Lawrence B Crowell on this one, and deleted a paragraph rant while writing my previous comment that was about the visual phototransduction system. That system starts with an unpredictable and information-lossy molecular conformational change (of a retinaldehyde molecule absorbing a photon of the right frequency) which then follows one of two processes. Which process is followed has a probability that highly depends on the recent activity of the cell, which is highly correlated with photon flux (which in turn is highly correlated with ambient lighting). Both processes are limited by the thermodynamic migration of molecules through the cytosol (intracellular solution), both processes are unreliable, and each process may take an arbitrary time to change the charge on the cellular membrane. And all that’s before we even hit the very first neuron in a chain of lossy neurons leading to the primary visual cortex.

    Quantum chemistry can be a useful — and sometimes necessary — tool in studying systems like this in a reductionist way, sure. That’s what parts of molbio and biochem and chemical biology are about in practice, and there is the field of quantum biology too. But is exclusively resorting to QM a good way of learning about a complex natural process like this? Lots of people studying small-scale life sciences think quantum effects are trivial in practice, even where they may be more interesting (as in vision, magnetoreception, photosynthesis and so forth).

    However, I would be more inclined to argue that with respect to things as complex as human vision we can only feasibly compute probabilities for small systems with current tools, and must resort to probabilistic or classical mechanics approximations in practice, with a cutoff at or just below molecular scale. These tools produce useful results, even if they don’t do anything close to quantum mechanical accounting.

    It’s true that uncertainty intervals also tend to increase as we look at larger pieces of the system — in particular, the brain is so noisy (and dense and large) that we have nowhere near the ability to say definite — or even useful — things about the brain at scales much smaller than mm^3 * ms (for scalar data like volumetric flow, emissions, Doppler data, density and so on; at that resolution, brain 4D imaging is hugely expensive). It’s also true that reductionist approaches to studying individual brain cells and the molecules within them lead to useful results, and not treating quantum effects as trivial may offer some analytical or explanatory power that is otherwise inaccessible to biologists. (Quantum biologists studying work/energy transferases argue that, and I think so did Penrose & Hameroff in their arguments about quantum effects in nanofilamentary structures in neural dendrites and in the cytoskeleton, at least.) That said, it’s hard to see QM win on a cost/benefit comparison with non-quantum models, independently of whether QM is “real” or “more real” or even just “more accurate” than the non-quantum models.

    Abstraction and information hiding are useful. They enable computational scalability. It is often useful to identify flaws in abstractions for a variety of reasons, but abandoning abstraction and dealing with all the raw data in a maximally-unhidden way seems like much harder work. (For amusement, David Madore, a French mathematician, explores mathematical abstraction elimination in a fun way here: http://www.madore.org/~david/programs/unlambda/#lambda_elim ).

    I don’t mean to discourage you from thinking of Hyperion->brains information flow quantum mechanically. (I have been enjoying and learning from this entire discussion!)

    In fact, I want to flip this whole comment on its head and ask you outright: are you using QM against the Hyperion (or whatever source)-detector-loudspeaker (or lamp)-sensory organ-brain chain of events as a way of thinking about QM itself, rather than about the system? I’m cool with that. I think that’s what I’m doing. 🙂

  10. Brody said
    “They don’t know the experimental results because they have not read about them, they weren’t there, etc. ”

    The “collapse is caused by consciousness” idea is a bit misleading.

    The underlying reality is that the collapse is a real effect. You build some machine that is capable of observing particles. The machine itself is interacting with the particle stream in such a way that there is a change in particle distribution when the machine cycles between on/off states.

    The output of the machine is a stream of bits. Pieces of information that only mean something based on the correlation of the bit with the presence of a particle. If the correlation is weak, the diffraction pattern will appear more “quantum”; if it is strong, then the pattern will appear more “classical”.

    There is no need for consciousness for this to occur, you could have a robot programmed to turn the machine on and off and be fully confident that the effect would be the same, even without viewing the output.

    Human brains are quantum mechanical in nature as was observed by Ebbinghaus in 1885. Our memories follow a natural decay function which can be affected by various other actions, but fundamentally underscores the random QM nature of the brain.

    http://en.wikipedia.org/wiki/Forgetting_curve

  11. Brody @ 79: “I like Lawrence B. Crowell’s take on things in this thread; in particular at comment 75 I think he is saying that we use formal tools to say as much as we can, as confidently as we reasonably can.”

    Yes, using formal tools to say things as confidently as we reasonably can: that’s the general spirit of scientific research! But formal tools apply as well to classical physics as to quantum physics. So if we reject reification for quantum physics, we should also reject it for classical physics. I’m always dubious when people avoid thinking in terms of ordinary objects in quantum physics while using them intensively in classical physics.

  12. I write some long comments so maybe I can just zero in on a couple of issues with concise questions.

    First, to reiterate: How come a conventional “detector” D is what collapses a wave function (or whatever it is) and not other things that particles come in contact with? For example, the beamsplitter in an interferometer. We know the photon wave splits there and does not collapse because it can interfere later.

    Second: Lawrence, don’t confuse “real” per existence, about wave functions, with “real” number value versus imaginary. Our being able to assign complex values to the WF is just a procedure; it doesn’t mean nature can’t really hold such a thing. Remember that the complex value is used to show phase differences, which could be represented some other way. Indeed, one can use the analogous complex system to represent the relative phase of electrical currents (phasors in that context); that doesn’t keep currents from being “real” per existence. The question still is: what is it that goes through space, and how can it condense into a small space even when available detectors are miles apart, with no chance of whatever “interference” the decoherence sophistry attempts to imply?

  13. Lawrence B. Crowell on Oct 31st, 2008 at 5:54 pm wrote:

    vvvvvvvvvv
    … There were some ideas about “quantum brains” 10 years ago or more. Penrose sort of got this idea going, and I think the idea is probably flawed … The brain is a warm system which is too messy for coherent wave functions to be running around.
    ^^^^^^^^^^

    I agree heartily with almost all of this. Penrose’s microtubules are very small, to be sure, but they are also quite massive in comparison to the scale of systems in which quantum effects plausibly apply at room temperature.

    The “almost all” qualifier is due to this: While matter is a very poor candidate for room temperature quantum, the same statement cannot safely be made for quasiparticles.

    (Brief background: Quasiparticles are energetic phenomena that are quantized “on top” of the ordinary matter. They are for the most part composed of energy, and so have very low masses, far less than those of electrons. This means conversely that quasiparticles can participate in quantum phenomena such as Bose-Einstein condensation at temperatures for which even a light-weight electron would behave classically.)

    Phonons are a good example. These quasiparticles are the quanta of sound, just as photons are the quanta of light, and they are fully capable of combining into coherent states within ordinary room temperature matter. If this were not the case, the well-known Mossbauer Effect could not exist.

    (Brief background: In Mossbauer, room temperature matter supports an extraordinarily precise matching up of gamma frequencies between nuclear emitters and receivers. The gamma ray emissions and detections used require exceedingly precise frequency matches, so much so that a relative velocity of just a few centimeters per second is enough to squelch reception (rough numbers in the sketch after this note). Such precision is impossible in a fully classical room temperature system, since atoms and their nuclei move so quickly that the Doppler effect would blur the relative gamma frequencies of the emitter and receiver far beyond what is detectable.

    How then can the Mossbauer Effect even exist? One way to look at the situation is to picture the motions of individual atoms as being controlled by a spectrum of “quanta of vibrations.” These quanta range from no motion at all to very rapid vibration.

    Like most pure energy phenomena, phonons are bosons — that is, they obey the “let’s all get together” statistics of Bose-Einstein. Thus not only are the motions of atoms controlled by these phonons, but the phonons themselves can group together to create “super phonons” that all behave in exactly the same way.

    Of particular interest in this case is the ground energy set of such phonons for which motion is zero. This condensate in effect “freezes” a certain percentage of atoms in a material, even in one that is otherwise at room temperature. These non-classically motionless atoms are the ones capable of participating in the extremely motion-sensitive emission and receipt of gamma rays.)
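
    The velocity sensitivity mentioned above is easy to check with first-order Doppler arithmetic; the numbers below are for the textbook Fe-57 14.4 keV line, assumed here purely for illustration:

    ```python
    c = 3.0e8            # m/s
    v = 0.01             # m/s: a 1 cm/s relative velocity
    E_gamma = 14.4e3     # eV, the Fe-57 Mossbauer line
    linewidth = 4.7e-9   # eV, natural linewidth of that transition

    shift = (v / c) * E_gamma               # first-order Doppler shift
    print(shift / linewidth, "linewidths")  # ~100: resonance is destroyed
    ```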

    While I agree that existing models readily eliminate direct quantum behavior for objects as large as microtubules, this is not the same as proving that no quantum effects of any type are possible. A full proof must also show that there are no configurations of quasiparticles that could transfer information from a quantum state back into the classical matter component of the system.

    The first problem with creating such a proof is the existence of the Mossbauer Effect, which shows that quantum-enabled point-to-point data transfers exist at room temperature. In the case of Mossbauer, such data transfers are enabled by the ability of very lightweight quasiparticles to form Bose-Einstein condensates at room temperature.

    At first it would seem easy to eliminate the relevance of Mossbauer. It does after all rely on nuclear isotopes and gamma rays. Caution is needed, however. The problem is that there is nothing in the physics of the Mossbauer Effect that requires use of gamma rays. The gamma rays of the Mossbauer Effect instead provide a convenient way to detect such effects due to the exceptionally sharp detection lines they produce.

    The hypothesis to be disproven, then, is whether there exist Mossbauer-like non-classical transfers of data that follow the same mathematical model as Mossbauer, but which substitute phonon condensates of larger molecules for nuclei, and lower-frequency mechanical (heat) or electromagnetic vibrations in place of gamma rays to transfer data. The possibility of such effects would need to be disproved explicitly to eliminate the possibility of non-classical point-to-point data transfers in room temperature systems.

    A full proof of the irrelevance of quasiparticle-mediated quantum effects in room temperature organic systems would also require a proof that quasiparticles cannot be used to construct qubits, or at least that any qubits constructed in such a fashion cannot then be linked back to the classical components of the system.

    If the possibility of molecular-level non-classical data transfers can be eliminated, I suspect that qubits would trivially fall as a direct consequence. On the other hand, if quantum enabled molecule-to-molecule data transfers can be shown experimentally to exist, disproving the relevance of room temperature qubits to organic systems becomes much more difficult. I suspect that if molecular non-classical data transfers and quasiparticle condensates exist, such components could also be configured to build qubits.

    In short: To complete the assertion that room temperature systems cannot include quantum behaviors, a rigorous analysis of the quasiparticle issue is required. Since non-classical data transfers via the Mossbauer Effect are part of accepted physics, such a proof would need explicitly to eliminate the possibility of translating the Mossbauer model to larger (molecular) units and lower frequency mechanical or electromagnetic phenomena. If such non-classical transfers of data are in fact possible, the proof would have to show that such transfers are irrelevant to the specific case of room-temperature organic systems such as the brain. Finally, if non-classical molecular data transfers are possible, a further proof would be needed that they cannot also be used to construct qubits, or alternatively that any qubits constructed from quasiparticles will be unable to transfer data back into the classical components of the system.

    Cheers,
    Terry Bollinger

  14. Lawrence B. Crowell

    Clearly overcomplete coherent laser states are a room-temperature example of where many particles (photons) enter into the same state with the same phase. Of course since photons are massless this is possible. The Mossbauer effect, where the recoil response to the emission of a photon is from the whole lattice, is certainly an aspect of how a low mass particle can exhibit entangled or coherent behavior at a high temperature.

    Yet with neurons there are a number of problems. First off, the idea that tubulins are quantum signal conduits is doubtful. These are the scaffolding of a cell, and kinesin and dynein polypeptides walk up and down them. These are literally nano-bots of sorts which mechanically walk! They transport various compounds through a eukaryotic cell. Cells conduct their energetics through ion pumps across the membrane. Mitochondria pump protons across their membranes, and the ion pump is the cell’s energy source. Similarly with neurons, an action potential is the offset and reset of the roughly 70 mV potential difference across a cell membrane by the opening of Na and K ion channel gates. These gates are receptors for certain chemicals such as acetylcholine, serotonin, dopamine etc. The action potential is a sort of wave, but it is one which is constantly pumped in a sense. So the action potential propagating down an axon or dendrite is not a conservative wave, but is more like a wave in the bobbing motion of a bucket being passed in a bucket brigade.

    Of course quantum mechanics has some role in biology, such as the hydrogen bond between purines and pyrimidines in the DNA double helix. The action of a photon on a rhodopsin molecule in a retinal cell has some quantum mechanical interpretations, and so forth. Yet there is not much evidence for any quantization on the large. Of course this blog page was on the quantum properties of Hyperion, a large moon of Saturn, so quantum properties might percolate through quantum systems in certain ways we are not as yet aware of.

    Lawrence B. Crowell

  15. > Of course this blog page was on the quantum properties of
    > Hyperion, a large moon of Saturn, so quantum properties might
    > percolate through quantum systems in certain ways we are not
    > as yet aware of.

    Well, yes, I must confess right here to a bad case of off-topic-drifty-thoughtalism!… 🙂

    > First off the idea that tubulins are quantum signal conduits
    > is doubtful.

    I would be blunter: Tubulins are complex structural components that are flatly irrelevant to any serious discussion of whether quantum effects exist in organic systems. Focusing on them has held back for decades any serious analysis of whether or not quantum effects can matter in organic systems.

    Tubulins are irrelevant because they are too large and contain too much distinctive state information to participate in quantum effects. I suppose one could propose that quantum-capable quasiparticle waves exist within tubulins, but why in the world would one bother? They are the movable scaffolding of the cell, with well-defined purposes that require no other explanations, especially ones so far afield from their primary purpose. I have remained baffled for decades as to why a mathematical physicist as sharp as Roger Penrose has stayed locked in so adamantly to this very poor candidate for room temperature quantum effects.

    Let me be more specific about what I did mean:

    I am proposing that small molecules, such as ordinary water, have sufficiently small state spaces that even within a quite small volume plausible numbers of them could be assumed to participate in ground-state phonon condensates, in the same sense that nuclei do in ordinary Mossbauer. In the remainder of this entry I refer to this idea as low-energy Mossbauer, since it is not so much a proposal of new physics — the math does not change in any fundamental way, for example — as it is a translation of existing physics from the energetic domain of nuclei to the lower-energy domain of molecules, with identical use of phonon condensates. While I’ve never seen (and never looked for) the idea in the literature, such extrapolations of scale are sufficiently straightforward that I do think some care is needed to eliminate the possibility.

    The second component of low-energy Mossbauer is another translation of scale: Instead of having the phonon-immobilized molecules exchange gamma rays, why not ask whether they might exchange lower-energy photons such as microwave or heat? Or for that matter, higher-order (non-condensate) phonons? This again is not so much new physics as it is a rescaling of the existing Mossbauer model to a lower-energy domain.

    The measurable effect of this low-energy Mossbauer Effect would be anomalously high rates of transfer of exact rotational or vibrational molecular energy between molecules at distances (e.g., inches) that are vastly larger than could be explained using a fully classical model. The fully classical model would in contrast predict only noise and locally mediated (molecule-to-molecule) energy transfers in such situations.

    In contrast, low-energy Mossbauer predicts the existence of a fairly large class of “impossible” transfers of energy between identical molecules at large distances from each other. The transfers would only occur for the primary rotational and vibrational modes of each class of molecules involved. If the transfers occur at all, the frequencies involved would be very precise, just as in Mossbauer. Finally, the distances involved in the transfers would be far larger than the radii of local thermal (and thus fully classical) effects.

    Low-energy Mossbauer Effects would typically be masked by thermal noise, since unlike the classical Mossbauer Effect the photons exchanged would be comparable to those of the noisy environment in which they exist. This means that some non-trivial experimental care would be required to detect them. Still, I suspect a clever experimentalist could come up with a good (and probably even cheap) way to look for such effects, since in particular the frequencies involved would be both well-known and would necessarily have sharply defined peaks, like Mossbauer.

    Where I wonder about the plausibility of low-energy Mossbauer, though, is that it seems unavoidably to imply some pretty odd constraints on some very well-studied systems. Take water, for example. The existence of low-energy Mossbauer in ordinary water would unavoidably imply that a glass of drinking water contains molecules that are “stitched together” by networks of photon and possibly phonon exchanges that cannot be modeled classically.

    At the very least, the existence of such networks in water would have entropic implications, since information would constantly be exchanged in non-local ways that would be better modeled using a collection of simultaneous and intermixed Bose-Einstein condensates. The decay of such structures would necessarily take longer than is possible with a purely classical model, and so would give the water a sort of “memory effect” that should not be there. Also, since different condensates could exist at the same time, a new range of variables would be introduced in which one glass of water is no longer the “same” as another that has a different condensate configuration.

    I would think that such effects would have been noticed by experimentalists, at least peripherally. If no such effects have ever been seen, this would argue against the existence of low-energy Mossbauer Effect.

    Regarding Hyperion: Covered that in my first entry, seriously I did. I just prefer Dr Feynman’s terminology and perspective. Decoherence is fine, but I think it’s fair to say that it really is just another way of describing how information emerges from a quantum system.

    Regarding the excellent question of how a photon “decides” whether to be absorbed by one atom (an information-creating event that destroys coherence) or reflected from a huge array of atoms (coherence is maintained):

    If you have not already, be sure to pick up a copy of Feynman’s “QED: The Strange Theory of Light and Matter.” Snip out Zee’s intro (just kidding… no, actually, I’m not) and settle in for a good read. Not only will this book _not_ answer your question, it will leave you more frustrated than before. This is what is so great about it! Feynman pulls no punches in describing how difficult and deep your question truly is. Yet bizarrely, by the end of his book he will nonetheless have given you the ability to calculate, in principle at least, exactly how many photons will “decide” to do one or the other, for any imaginable experimental setup.

    Feynman also points out that even Isaac Newton pondered your question and realized how profound it is — a remarkable achievement for someone who lived hundreds of years before quantum mechanics came into being.

    And if you want to know how Newton managed to contemplate such absorption probabilities for a particle that was not known to exist until Einstein postulated it (his Nobel Prize was for that work, not relativity)… why, then, read QED! (And no, I don’t get a cut, I just like the book a lot.)

    Cheers,
    Terry Bollinger

  16. Lawrence B. Crowell

    Terry Bollinger: “imply that a glass of drinking water contains molecules that are “stitched together” by networks of photon and possibly phonon exchanges… ”

    That might happen with ice.

    The problem is that for this sort of physics to take place it would have to involve the quasi-crystalline structure of polypeptides. Of course biology is not compatible with gamma rays. There is phonon physics associated with how replicase moves on a DNA strand. The ATP to ADP energy exchange with each step causes a quantum of vibration to move along the 5′-3′ strand, which by recoil bumps the replicase to the next nucleotide. Okazaki fragments for the 3′-5′ strand replication are put together by more standard chemical processes.

    Polypeptides are quasi-crystalline (or crystal-like) structures. In fact DNA has 10 nucleotides per 2π twist in the A and B conformational forms, and this is also mirrored in dihedral angles in some polypeptides. I think this has some connections with fractal geometry and chaos theory, which if there are quantum aspects to their physics leads to a huge area largely not well known. I could go on about this at considerable length, but work and time (and this is election day) preclude that possibility for now.

    Lawrence B. Crowell

  17. Quick comments: Your ice idea is interesting! It is also closer to traditional Mossbauer, which also uses solids.

    If you are suggesting that liquid-to-solid phase transitions in general could be interpretable as including coherent Bose-Einstein phonon condensation components… that would be an interesting alternative on how to view crystallization, and certainly not one I ever recall bumping into.

    Here’s a bit of elaboration on that idea, using brainstorming mode (by which I mean exploring the concept space, but not yet attempting to quantify or disprove the theorem): Crystal faces are macroscopic results of nanoscale assembly, with ratios of emergent sizes to creation component sizes that are truly astronomical. Could these emergent features be coordinated in part by the unrecognized existence of large-scale phonon Bose-Einstein condensates during the crystallization process?

    For example, the assembly components that generate natural beryl crystals are in the Angstrom range, consisting of beryllium, aluminum, silicon, and oxygen, yet large natural crystals of beryl can have very flat faces on the order of a meter across. That means a highly parallelized atomic-level crystallization process can easily generate well-defined emergent structures with features 10 orders of magnitude larger.

    A comparison: This is roughly the same as 1 millimeter ants paving all of Asia, Europe, and Africa with a platform that remains level over that entire area during the construction process. Pretty decent group coordination, that!

    (Quick critique: Extremely high relative reaction rates at the layer-addition shelf edges versus the flat surfaces may be sufficient to explain the planar face. Thus the quick phonon idea could possibly be whacked away using Occam’s Razor and some good reaction rate data.)

    (Quick counter: The reaction data may inadvertently _include_ hidden coherent phonon effects that have never been recognized, and thus never adequately analyzed. Familiarity and an unexamined assumption that “this is all well known stuff” could be hiding an interesting phenomenon that has not been adequately examined or quantified.)

    Brainstorming mode again: Water should be largely transparent to microwaves, since when it is hot it gives off radiation that is mostly in the much higher infrared range (the heat one feels when you put your hand near, but not directly over (that’s steam), a hot cup of coffee). Why do microwaves then heat water so well? To put the issue in terms of an analogy, microwaves heating water to the point where it gives off infrared radiation is a bit like shining an infrared heat lamp on a piece of coal and causing the coal to give off blue light. There’s a definite “upping of frequencies” that at a first approximation is a bit hard to explain.

    Theorem: The microwaves are actually interacting with a patchy network of Bose-Einstein phonon condensates. Many of these phonon condensates include sufficiently large total masses of water molecules that they resonate easily with the comparatively low frequency microwaves. The heating simultaneously causes the condensates to break down, resulting in molecular-level “pieces” (water molecules) whose vibration frequencies are much higher, in the infrared range.

    Your comments on long-range order: 1D and 2D constrained systems should encourage Bose-Einstein condensation. Some of Peierls’s early work (he actually got a lot of that from a German fellow whose name escapes me at the moment) on 1D effects that lead to alternating single-like and double-like bonds in long polymer chains (they are actually quasiparticle bonds composed of Fermi sea waves, and are _not_ really localized electrons) comes to mind, although that is in the fermion domain mostly.

    Cheers,
    Terry

  18. Lawrence B. Crowell

    Microwaves heat water because they are resonant with lots of tightly spaced vibrational modes of the molecule’s dipole moment. The H_2O molecule appears as a “Mickey Mouse” head-like structure, where the hydrogens form the “ears.” There are two filled p orbitals that stick out in the opposing directions, giving rise to a tetrahedral-like structure. In this case two of the vertices are positively charged, where the H-atoms are, and the other two vertices (p-orbitals) are negatively charged. The oxygen sits near the barycenter of the tetrahedron. A microwave field will then interact with this system as two dipoles or a net quadrupole, which causes the vertices to oscillate and the tetrahedron is deformed by being periodically squashed and distended in resonance with the microwave field. So each atom is vibrating in response to the field and they collide with each other, converting this vibrational energy to translational energy in the motion of the molecules. Statistically this then heats up water.

    It is one reason that ice is harder to heat up in microwaves. Since the molecules are bound in a crystalline lattice, the conversion of vibrational energy to translational energy is less efficient. As a result the H_2O molecules saturate quickly with vibrational energy and do not absorb as much microwave energy. For this reason the defrost cycle on microwave ovens is a lower setting, turning the magnetron on and off so the fields in the cavity don’t feed back too much. The magnetron is feathered to give the ice more time to thermalize its vibrational energy.

    The idea for the microwave oven came with radar during WWII. The large antennas tended to collect lots of dead birds. This messy problem was traced to the birds sitting on the antenna and getting cooked.

    I am a bit of a maven for polytopic geometry. I highly recommend Coxeter’s book on regular polytopes. Then if you want to really grab this business by the horns, Conway & Sloane’s “Sphere Packings, Lattices and Groups” is recommended. This book gives a decent account of lattice systems, such as E_8 and the Leech lattice, and the systems of quaternions these imply. This then leads up to the Conway & Fischer-Griess group called the “Monster.”

    I think these structures are involved with quantum gravity, or in the lattice tessellation of spacetime and AdS space. The vertices of the lattice system are roots of the gauge group. It is a sort of solid state physics analogue to gauge theory and gravity. I will leave things at this point, lest I be accused of “theory mongering.”

    The occurrence of large crystals is largely a matter of energetics. In an adiabatic situation atoms will align into crystals because that is most energetically favorable. It is interesting to note that selenite crystals of truly astounding proportions were found in a mine in Mexico. There are pictures of the cave explorers literally crawling on them and rappelling off them.

    Lawrence B. Crowell

  19. This would be a great place to repost an adaptation of my comment from Uncertain Principles. Chad Orzel brought up the issue of Warnock’s Dilemma in a recent thread, http://scienceblogs.com/principles/2008/11/links_for_20081110.php. As Wikipedia defines it:
    “[T]he problem of interpreting a lack of response to a posting on a mailing list, Usenet newsgroup, or Web forum. It occurs because a lack of response does not necessarily imply that no one is interested in the topic, and could have any one of several different implications, some of which are contradictory. Commonly used in the context of trying to determine why a post has not been replied to, or to refer to a post that has not been replied to.”

    My response is below, adapted to the current thread. BTW the discussion in the thread “What’s the Matter with Making Universes?” is directly pertinent to the one here, why not get some word in there too?

    I propose Bates’ Corollary to Warnock’s Dilemma: the problem of interpreting a lack of response to a comment in a thread and not just a post. I also propose Bates’ Ancillary Dilemma: why do respondents (repliers? sorry) address some of the key points made in a post or comment, and not others – even if the poster/commenter pleads or insists, and even repeatedly, that the unanswered points are relevant or even more relevant? I have in mind, in the thread http://scienceblogs.com/principles/2008/11/whats_the_matter_with_making_u.php#commentsArea, that [no one here AFAICT] would address my concern about why “collapse” (or whatever) happens so far downchain in the interaction of say a photon, instead of earlier. In particular, why doesn’t the interaction with an initial beamsplitter cause a photon to just collapse and go one way or the other, instead of indeed “splitting” the single photon wave to enable subsequent interference. But then, at the detectors at the far end of the MZ interferometer etc., there is a “hit” at one or the other detector. Er, maybe if [anyone, such as LBC, Terry B?] is reading this comment, you could reply to that question? I thank you in advance for your cooperation 😉 .

    BTW Lawrence I can be hard on people putting forth what I consider contrived and rationalized attempts to solve problems, which I still think fairly characterizes “decoherence” as a putative explanation of collapse in general or even “apparent collapse” (whatever that means). But I do not think you or others are arguing in bad faith or anything like that. I think you just feel too attached to a false hope that is alluring because it seems to resolve a vexing issue, and because the vagaries of meaning in talking about wave amplitudes, probabilities, etc., lend themselves to misdirection and contrivance. BTW, the whole idea of “entanglement” is that the mingled photon states literally don’t have a definite polarization in any individual sense, but the polarization is only established as a correlation upon later measurement. Hence I don’t see how entanglement can become a model or metaphor for collapse in general, which usually involves definite wavefunctions (such as 20 degree linear polarization as produced) collapsing into x or y, etc.

  20. =========================================================

    A Quick Visual Intro to QED
    Terry Bollinger – 2008-11-23

    — Part 1 of 2 —

    1. The Question: Why Waves Here, and Particles There?

    On November 10, Neil Bates asked a difficult question as part of the Discover Magazine “Quantum Hyperion” physics thread. My paraphrasing of his question is this:

    “Why does a photon behave like a wave when it encounters a beam splitter, but like a particle when it encounters a particle detector?”

    Below is my attempt to answer this question. Since this will be a bit long, I’ll break it up into two parts.

    The first part (this one) deals with the mystery of the coupling constants, or what I refer to as the roulette wheels down at the bottom of quantum mechanics.

    In the second part I’ll discuss the clockwork photon. This is my adaptation, with a few visualization updates, of Feynman’s explanation of QED. My goal in Part 2 is to show how geometry transforms the simple probabilities of coupling constants into the richness of the physical world that we see all around us.

    To make Part 2 more specific, I’ll include a thought experiment in which a single material, silver, both reflects a photon as if it were a wave, and in another part of the apparatus absorbs it as if it were a particle. Using a single material for both components emphasizes the critical role that geometry plays in understanding quantum mechanics.

    2. The Roulette Wheel at the Bottom

    The best non-mathematical reference to the question of why quantum mechanics sometimes gives wave-like results and sometimes particle-like results is, without qualification, Richard Feynman’s “QED: The Strange Theory of Light and Matter.” I recommend it highly for anyone interested in the more mysterious aspects of how quantum mechanics works.

    In his book, Feynman quickly informs the reader without apology that he will not try to explain why reality is ultimately probabilistic. His reason is simple: Although quantum mechanics enables very accurate predictions of how particles such as electrons and photons will interact when in large groups, there is no accepted theoretical explanation for the ultimate source of the probabilities that are intrinsic to such models.

    An analogy is that there is a sort of roulette wheel at the very bottom of the physics of electrons and photons. This wheel is spun every time we ask a question about a specific electron and a specific photon, but the details of its construction remain a complete mystery to us to this day.

    (To be complete, I should mention that there are actually several such roulette wheels in physics, which are collectively known as “coupling constants.” Only the coupling constant for electrons and photons has much impact on everyday physics, however, so that is the only constant I will discuss here.)

    Spinning the roulette wheel for an electron and a photon results in one of two outcomes: “interact” or “ignore.” (A warning: There are some complications in how these values are used. I’ll describe those complications later, in Part 2.)

    For photons and electrons, an accident of physics history saddled the corresponding roulette wheel with the highly uninformative name of “fine structure constant.” Fortunately, there is another name for it that is much more intuitive: It is the charge of an electron, expressed in certain universal units.

    Specifying how much electrical charge an electron has thus is just another way of describing the odds that the electron will interact with a passing photon. A point particle with no charge would ignore such a photon entirely, since its roulette wheel would be rigged only with slots marked “ignore.” Such a particle does exist. It is called the neutrino, and it is rigged in just this way. Because a neutrino cannot see photons, it passes through ordinary matter pretty much as if it wasn’t even there.

    The roulette wheel that corresponds to the charge of an electron has a surprisingly small number of “interact” slots. The number is about 1 in 137, or less than 1%. This small probability is nonetheless just the right size to give rise to all of the remarkable complexity that we see and interpret as non-nuclear physics and chemistry.
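
    In code, the “roulette wheel” is literally just a weighted random draw; here is a toy spin of it, with the probability as the only physical input:

    ```python
    import random

    ALPHA = 1 / 137.036   # fine structure constant, approximately

    trials = 1_000_000
    hits = sum(random.random() < ALPHA for _ in range(trials))
    print(hits / trials)  # ~0.0073: about one "interact" per 137 spins
    ```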

    Finally, I cannot emphasize enough that the underlying design of these roulette wheels — these coupling constants of standard physics — is unknown. Attempts to postulate “gears and wheels” to explain these probabilities always seem instead to end up adding complexity without adding any new insights — the sure sign of a bad theory. This is one of those interesting cases in physics where the most mathematically abstract model, in this case a simple probability function, stubbornly remains the best one available. This is true both in terms of overall simplicity, and in terms of its ability to produce verifiable experimental predictions. The probabilistic nature of coupling constants thus remains a true mystery, one into which the physics of Feynman’s time (and I would argue ours also) produced no significant insights.

    3. Charge and the Anthropic Principle

    I should mention that this seemingly arbitrary setting of the photon-electron roulette wheel at 1 in 137 is quite special in some unexpected ways. For example, if you increased it to 1 in 135 or lowered it to 1 in 138, it’s a pretty good bet we would not be having this dialog. The problem is that the ability of carbon to form indefinitely long chains is closely linked to this number, and if you changed it even slightly, organic chemistry would likely stop working well enough to support the existence of constructions such as the proteins necessary for life.

    As it turns out, pretty much all of the fundamental constants of physics seem to work that way. That is, if you make these seemingly arbitrary numbers just a little larger or a little smaller, you still get a universe of some sort, but one that no longer supports organic life as we know it. Or to put it a bit more graphically, nudging fundamental constants is a lot like kicking the foot of a juggler who has ten plates and twelve hoops all spinning at once: Everything comes tumbling down.

    This curious link between fundamental physics and life-supporting organic chemistry is called the anthropic principle, and it is one of the most fascinating mysteries of current physics. It is a topic for another time, however. I just did not want to leave an incorrect impression that the value of the electron charge could have been set arbitrarily to almost any value. It is instead fine-tuned in ways that are unexpected and deeply interwoven with the other fundamental constants of physics. Developing a full and convincing explanation of this fine-tuning constitutes one of the great ongoing challenges of fundamental physics.

    — End of Part 1 —

    =========================================================

