Avignon Day 3: Reductionism

Every academic who attends conferences knows that the best parts are not the formal presentations, but the informal interactions in between. Roughly speaking, the perfect conference would consist of about 10% talks and 90% coffee breaks; an explanation for why the ratio is reversed for almost every real conference is left as an exercise for the reader.

Yesterday’s talks here in Avignon constituted a great overview of issues in cosmological structure formation. But my favorite part was the conversation at our table at the conference banquet, fueled by a pretty darn good Côtes du Rhône. After a long day of hardcore data-driven science, our attention wandered to deep issues about fundamental physics: is the entire history of the universe determined by the exact physical state at any one moment in time?

The answer, by the way, is “yes.” At least I think so. This certainly would be the case in classical Newtonian physics, and it’s also the case in the many-worlds interpretation of quantum mechanics, which is how we got onto the topic. In MWI, the entirety of dynamics is encapsulated in the Schrödinger equation, a first-order differential equation that uniquely determines the quantum state in the past and future from the state at the present time. If you believe that wave functions really collapse, determinism is obviously lost; prediction is necessarily probabilistic, and retrodiction is effectively impossible.
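To make that concrete (this is just the standard statement, nothing beyond the textbook): the Schrödinger equation is first order in time,

\[ i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle , \]

so for a closed system with time-independent Hamiltonian \(\hat{H}\) the state at any one moment fixes the state at every other moment, past or future, through unitary evolution:

\[ |\psi(t)\rangle = e^{-i\hat{H}(t-t_0)/\hbar}\,|\psi(t_0)\rangle . \]

Run the exponent with \(t < t_0\) and you retrodict exactly as uniquely as you predict.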

But there was a contingent of physicists at our table who were willing to believe in MWI, but nevertheless didn’t believe that the laws of microscopic quantum mechanics were sufficient to describe the evolution of the universe. They were taking an anti-reductionist line: complex systems like people and proteins and planets couldn’t be described simply by the Standard Model of particle physics applied to a large number of particles, but instead called for some sort of autonomous description appropriate at macroscopic scales.

No one denies that in practice we can never describe human beings as collections of electrons, protons, and neutrons obeying the Schrodinger equation. But many of us think that this is clearly an issue of practice vs. principle; the ability of our finite minds to collect the relevant data and solve the relevant equations shouldn’t be taken as evidence that the universe isn’t fully capable of doing so.

Yet, that is what they were arguing — that there was no useful sense in which something as complicated as a person could, even in principle, be described as a collection of elementary particles obeying the laws of microscopic physics. This is an extremely dramatic ontological claim, and I have almost no doubt whatsoever that it’s incorrect — but I have to admit that I can’t put my objections into a compact and persuasive form. I’m trying to rise above responding with a blank stare and “you can’t be serious.”

So, that’s a shortcoming on my part, and I need to clean up my act. Why shouldn’t we expect truly new laws of behavior at different scales? (Note: not just that we can’t derive the higher-level laws from the lower-level ones, but that the higher-level laws aren’t even necessarily consistent with the lower-level ones.) My best argument is simply that: (1) that’s an incredibly complicated and inelegant way to run a universe, and (2) there’s absolutely no evidence for it. (Either argument separately wouldn’t be that persuasive, but together they carry some weight.) Of course it’s difficult to describe people using Schrodinger’s equation, but that’s not evidence that our behavior is actually incompatible with a reductionist description. To believe otherwise you have to believe that somewhere along the progression from particles to atoms to molecules to proteins to cells to organisms, physical systems begin to violate the microscopic laws of physics. At what point is that supposed to happen? And what evidence is there supposed to be?

But I don’t think my incredulity will suffice to sway the opinion of anyone who is otherwise inclined, so I have to polish up the justification for my side of the argument. My banquet table was full of particle physicists and cosmologists — pretty much the most sympathetic audience for reductionism one can possibly imagine. If I can’t convince them, there’s not much hope for the rest of the world.

92 thoughts on “Avignon Day 3: Reductionism”

  1. I agree entirely with Sean. Those at that table advocating ‘something else’ or ‘something more’ for macroscopic complexity have the burden of showing that some microscopic laws of physics are violated, or otherwise surrender their sufficient role in emergent complexity to ‘whatever else it is’ – in other words, WHY the microscopic laws of physics (known and as yet unknown by us) aren’t enough. AND they have to identify – or at least describe – what ‘it’ is supposed to be.

    The notion is nothing but speculation. Worse, it has the strong odor of intelligent design about it. Just another fantasy-trip that postulates the existence of a phenomenon or principle simply because we DON’T understand exactly how it all works in every particular.

  2. To misquote Rutherford, “Reductionism is the only real science. The rest are just stamp collecting.”

  3. I am not sure about emergent laws that are inconsistent with the Standard Model, but consider the following: In QFT the propagator of a massive scalar particle decays exponentially for spacelike separations (see the sketch after this comment). While it is thus technically never zero (and therefore would naively allow some form of superluminal signalling), we are OK with that because the decay is exponential and any finite-size signal quickly requires unmanageable amounts of resources. It is the exponential scaling that makes this palatable.

    Similarly, the explanation of an emergent law in terms of the Standard Model requires some algorithm (how else would you characterize an explanation with sufficient abstraction?). To me, it is not sufficient to say that the Standard Model explains emergent law X if all we know is that such an algorithm exists. We must also be able to follow the algorithm step by step, and this is where the computational complexity of the algorithm comes in. Just as in the case of the propagator between spacelike separated events, if the algorithm has exponential complexity the explanation is intractable, and therefore not really an explanation at all. In that sense I consider myself not a reductionist.
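For concreteness, the textbook behaviour invoked in the comment above (schematic, up to power-law prefactors): the propagator of a free scalar of mass \(m\), evaluated at spacelike separation \(r\), falls off exponentially,

\[ \Delta(r) \;\sim\; e^{-m r} \qquad \text{for } r \gg 1/m , \]

which is the exponential suppression being compared to an explanation whose algorithm has exponential computational complexity.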

  4. Great to hear of this behind-the-scenes discussion, thanks Sean. It turns out I was in the Palais des Papes just two weeks ago for the first time. I wouldn’t have thought of holding a scientific conference there; with hindsight, the PONT organisers have a point (hope they restored the “high kitchen” for the occasion). Two points:

    1. “an anti-reductionist line: complex systems like people and proteins and planets couldn’t be described simply by the Standard Model of particle physics applied to a large number of particles, but instead called for some sort of autonomous description appropriate at macroscopic scales.”

    I thought planets indeed had such an autonomous description, the evolution of which was governed precisely by a set of macroscopic-scale rules called general relativity.
    Would you say that reconciling quantum physics with GR would put a nail in the intellectual coffins of your anti-reductionist colleagues? If yes, then on the other hand the fact that there are apparently deep contradictions between these theories might count as evidence for anti-reductionism…

    2. “If I can’t convince them, there’s not much hope for the rest of the world.” That’s exaggerated, I think: this crowd presumably also has higher standards of argumentation/skepticism. Maybe there’s no other way to know but to write the book.

    Anecdote: at the entrance of the palais, just after the cashiers, one can see “e pluribus unum” inscribed on the stone ceiling. Capitol Hill appears to have borrowed something from Avignon.

  5. I also agree with Sean’s reductionist view, but the part that really fascinates me is how we connect (even in principle) the microscopic laws to our experience of free will. I’m no expert in this area, but I enjoy turning it over in my mind – can one reconcile this reductionist view with free will, or must we believe in a fully deterministic Universe, in which all of our decisions are predetermined by the microscopic laws?

  6. I think your position is inconsistent with the holographic principle and/or information rules. You are saying that all the information in the entire universe for all time is inside a space-time surface (“now”, defined somehow). To me, if that is true then the holographic principle must be false – there is more information inside that surface than the HP says there can be (the usual statement of the bound is sketched after this comment). I also don’t know what you mean by “one moment in time” related to the universe. How are you defining that given GR? The wave function of the universe does not make sense in a GR context, does it? That is one reason they are inconsistent, right? What you say to me isn’t a statement that could be right or wrong without some more definition.

    On the information side, isn’t what you are saying a hidden variable argument? In many worlds we never know which world – i.e. which experimental result – we will get, even in principle, and so this is information which neither we nor anyone (or anything) anywhere can have.

    Those are my initial thoughts.
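For reference, the bound being appealed to above is the standard holographic one (written here in its usual schematic form): the entropy, and hence the information, inside a region is limited by the area \(A\) of its boundary,

\[ S \;\le\; \frac{k_B\,A}{4\,\ell_P^{2}} , \qquad \ell_P^{2} = \frac{G\hbar}{c^{3}} , \]

so the disagreement is over whether “the state on one slice determines all of history” really requires storing extra information beyond that bound, or whether the later history is redundant with (because computable from) the state on the slice.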

  7. I think one can adhere to strict Copenhagen-ism and still be anti-reductionist. Saying that a superposed wave/particle has a wavefunction that collapses 50% of the time to A and 50% of the time to B is just as reductionist as saying that 50% of the worlds are A and 50% of the worlds are B. The problem is that people think that reductionism answers the “Why?” to determinism’s “How?” I think that’s an entirely too teleological approach to the questions. Reductionism simply says that all processes are reducible to physical laws. You don’t get to complain that the physical law lacks a degree of teleology simply because you want there to be some. If you observe the wave/particle to be B and demand to know “Why not A?”, that’s a question that is totally independent of reductionism.

  8. I don’t think there’s a problem between accepting the MWI and not accepting Reductionism as the whole answer. Here is a quote from David Deutsch, surely one of today’s strongest proponents of the MWI, regarding Reductionism:

    “A reductionist thinks that science is about analyzing things into components. An instrumentalist thinks that it is about predicting things. To either of them, the existence of high-level sciences is merely a matter of convenience. Complexity prevents us from using fundamental physics to make high-level predictions, so instead we guess what those predictions would be if we could make them– emergence gives us a chance of doing that successfully– and supposedly that is what the higher-level sciences are about. Thus to reductionists and instrumentalists, who disregard both the real structure and the real purpose of scientific knowledge, the base of the predictive hierarchy of physics is by definition the ‘theory of everything.’ But to everyone else scientific knowledge consists of explanations, and the structure of scientific explanations does not reflect the reductionist hierarchy. There are explanations at every level of hierarchy. Many of them are autonomous, referring only to concepts at that particular level (for instance, ‘the bear ate the honey because it was hungry’). Many involve deductions in the opposite direction to that of reductive explanation. That is, they explain things not by analyzing them into smaller, simpler things but by regarding them as components of larger, more complex things– about which we nevertheless have explanatory theories.

    For example, consider one particular copper atom at the tip of the nose of the statue of Sir Winston Churchill that stands in Parliament Square in London. Let me try to explain why that copper atom is there. It is because Churchill served as prime minister in the House of Commons nearby; and because his ideas and leadership contributed to the Allied victory in the Second World War; and because it is customary to honor such people by putting up statues of them; and because bronze, a traditional material for such statues, contains copper, and so on. Thus we explain a low-level physical observation – the presence of a copper atom at a particular location – through extremely high-level theories about emergent phenomena such as ideas, leadership, war and tradition.

    There is no reason why there should exist, even in principle, any lower-level explanation of the presence of that copper atom than the one I have just given. Presumably an active ‘theory of everything’ would in principle make a low-level prediction of the probability that such a statue will exist, given the condition of (say) the solar system at some earlier date. It would also in principle describe how the statue probably got there. But such descriptions and predictions (wildly infeasible, of course) would explain nothing. They would merely describe the trajectory that each copper atom followed from the copper mine, through the smelter and the sculptor’s studio, and so on. They could also state how those trajectories were influenced by forces exerted on surrounding atoms, such as those comprising the miners’ and the sculptor’s bodies, and so predict the existence and shape of the statue. In fact such a prediction would have to refer to atoms all over the planet, engaged in the complex motion we call the Second World War, among other things. But even if you had the superhuman capacity to follow such lengthy predictions of the copper atom’s being there, you would still not be able to say, ‘Ah yes, now I understand why it is there.’ You would merely know that its arrival there in that way was inevitable (or likely, or whatever), given all the atoms’ initial configurations and the laws of physics. If you wanted to understand why, you would still have no option but to take a further step. You would have to inquire into what it is about that configuration of atoms, and those trajectories, that gave them the propensity to deposit a copper atom at this location. Pursuing this inquiry would be a creative task, as discovering new explanations always is. You would have to discover that certain atomic configurations support emergent phenomena such as leadership and war, which are related to one another by high-level explanatory theories. Only when you knew those theories could you understand fully why that copper atom is where it is.

    In the reductionist world-view, the laws governing subatomic particle interactions are of paramount importance, as they are the base of the hierarchy of all knowledge. But in the
    real structure of scientific knowledge, and in the structure of our knowledge generally, such laws have a much more humble role.”

  9. Yes, to me, Sean, this is positively insane. Here is my short argument as to why this anti-reductionism is insane:

    If macroscopic systems behave in some manner that cannot (even in principle) be derived from the microscopic laws of physics, then that means that at least some small subsets of that macroscopic system must not be following the microscopic laws of physics. Because if all of the particles that make us up are, individually, behaving based upon the microscopic laws of physics, then by definition the total behavior is described by those same microscopic laws applied to a larger system.

    So what they are asking for is pretty absurd on its face: microscopic laws of physics that behave differently (and not just slightly) depending upon whether an atom is part of a larger configuration or not. This predicts that we should be able to examine progressively larger and more complex systems, and at some point find one where the individual parts of the larger system no longer follow microscopic laws.

    But here’s the kicker: if this happens, then it means that the microscopic behavior is being affected in a way that the interactions in the microscopic laws of physics that we know do not describe. Thus this is a statement that the microscopic laws of physics are wrong, and we aren’t taking into account the effect of some interaction or other with other particles. So we could, if we had perfect knowledge, write down new microscopic laws of physics that take these interactions into account, and have the same laws that describe our universe at all levels.

    Thus, fundamentally, what they are arguing for is a Standard Model that is wrong, one in which extra long-distance interaction terms have to be incorporated to correct it (schematically, see the sketch after this comment). This seems rather absurd to me.
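Schematically (generic many-body notation, not tied to any particular proposal): if the microscopic theory is a sum of single-particle terms plus the known few-body interactions, the position being criticised amounts to adding a term that switches on only for certain macroscopic configurations,

\[ H \;=\; \sum_i H_i + \sum_{i<j} V_{ij} \;\;\longrightarrow\;\; H' \;=\; \sum_i H_i + \sum_{i<j} V_{ij} + V_{\text{macro}} , \]

and any such \(V_{\text{macro}}\) is itself just a (so far unobserved) modification of the microscopic laws, which is exactly the point of the comment above.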

  10. Yeah, I’m definitely confused. How do you know which world you are going to end up in the MWI?

  11. “My best argument is simply that: (1) that’s an incredibly complicated and inelegant way to run a universe, and (2) there’s absolutely no evidence for it.”

    Your second claim is arguably incorrect. In “More really is different” (Physica D: Nonlinear Phenomena, Volume 238, Issues 9-10, 15 May 2009, Pages 835-839, or here on the arXiv: http://arxiv.org/abs/0809.0151), Gu et al. demonstrate that there indeed exist systems for which macroscopic observables cannot be computed, even in principle, from the microscopic state of the system. In particular, they study the infinite periodic Ising lattice (of the generic form written out after this comment), and show that in the ground state even quantities such as magnetization, correlation length, finite-range correlations or the zero-temperature partition function cannot be computed from knowledge of the microscopic Hamiltonian.

    There is the caveat that this is an infinite system, but much of our theoretical understanding of Nature comes from studying infinite systems or the continuum limit. Our theoretical understanding of phase transitions relies on studying infinite systems, for instance.

    So, this result provides some evidence for the claim that it may not be possible, even in principle, to compute all macroscopic properties of a system from knowledge of the microscopic properties.
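For orientation, the class of systems in question has the generic Ising form (generic notation only; the specific translation-invariant couplings chosen in the paper are what carry the result):

\[ H \;=\; \sum_{\langle i,j\rangle} J_{ij}\, s_i s_j \;+\; \sum_i h_i\, s_i , \qquad s_i = \pm 1 , \]

and the claim is that for suitably chosen, perfectly definite, finitely specified couplings, ground-state macro-observables such as the bulk magnetization are not computable from that microscopic specification.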

  12. Jason,

    I agree with your comment. However, the question of whether macroscopic systems behave in some manner that cannot (even in principle) be derived from the microscopic laws of physics is only one level of explanation. To my way of thinking, the other important question is whether, once the behavior of physical systems is (even in principle) understood in this way, we are left with a complete explanation of what has occurred and why. I think that was the point that Deutsch was getting at.

  13. Dave,

    “How do you know which world you are going to end up in the MWI?”

    If one accepts the MWI, the short answer is: in all of those worlds initially correlated with you, and in fewer of them over time.

  14. Accepting MWI is already a stretch…. I have to admit myself that while scientifically it’s in the “I dunno” category, I believe (suspect?) that the Universe is fundamentally not deterministic, but stochastic. I don’t know if that makes me a Copenhagenist or not, but that’s my suspicion.

    However, that’s a minor quibble on the larger issue, which is whether emergent behaviors at larger scales (which indubitably exist) are not even **in principle** derivable from the laws of physics at smaller scales.

    There’s a deeper issue about the philosophy of what science is, though. Science aims at describing physical reality. But, if we’re to be honest, what we’re doing is making models that allow us to make predictions about physical reality. Do we really know that we’re right? In fact, we know that we’re not right, because for our fundamental theories, we can find regimes where they don’t work. Does that mean that in principle we can’t come up with a theory that does work everywhere? No… but at the moment, we certainly don’t have one.

    Given that, the MWI vs. Copenhagen thing becomes something of a red herring. The real result of quantum mechanics is that we can predict the probabilities for the results of (say) an electron spin experiment, but not what that spin will be measured to be. Whether that’s because there’s wavefunction collapse and the Universe is stochastic, or because the Universe splits, in some sense doesn’t matter. Each “you” (if there is more than one) measures a given spin, and there’s no way to figure out ahead of time which one that “you” is going to measure. Whether the Universe is MWI and exploring all possible outcomes, or whether it’s Copenhagen and performing an ongoing Monte Carlo experiment (a toy version of which is sketched after this comment), in the end quantum mechanics really is just a mathematical model that does a wonderful job for us of calculating probabilities for the results of experiments (where “experiments” include any physical interaction, not just things done in a lab by people in white coats).

    Given that our theories are mathematical models, and that they all admittedly have a range of application, one could argue that it’s pure philosophical bias to assume that one will always be able to derive the theories that describe the behavior of macroscopic systems from the theories that describe the behavior of microscopic systems. Unless you really believe that your theory is Truth, instead of an extremely useful mathematical model, there’s no reason to suppose that that even *should* be possible.
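A toy version of that “ongoing Monte Carlo” picture (the Born-rule probability \(\cos^2(\theta/2)\) for measuring spin-up along an axis tilted by \(\theta\) from the preparation axis is the standard result; everything else here is purely illustrative):

import math
import random

def measure_spin(theta, rng=random):
    """Simulate one spin-1/2 measurement along an axis tilted by theta
    (radians) from the preparation axis. Born rule: P(up) = cos^2(theta/2).
    Returns +1 for up, -1 for down."""
    p_up = math.cos(theta / 2.0) ** 2
    return +1 if rng.random() < p_up else -1

# The theory hands us p_up; whether each run "collapses" (Copenhagen) or
# "branches" (MWI), the only checkable output is the frequency of outcomes.
theta = math.pi / 3
n_trials = 100_000
ups = sum(1 for _ in range(n_trials) if measure_spin(theta) == +1)
print(f"measured fraction of spin-up: {ups / n_trials:.3f}")
print(f"Born-rule prediction:         {math.cos(theta / 2) ** 2:.3f}")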

  15. Rob,

    Although you’re clearly not an MWI fan, here are a few words from Deutsch on prediction and instrumentalism:

    “Our best theory of planetary motions is Einstein’s general theory of relativity, which, in the early twentieth century, superseded Newton’s theories of gravity and motion. It correctly predicts, in principle, not only all planetary motions but also all other effects of gravity to the limits of accuracy of our best measurements. For a theory to predict something “in principle” means that as a matter of logic the predictions follow from the theory, even if in practice the amount of computation that would be needed to generate some of the predictions is too large to be technologically feasible, or even too large to be physically possible in the universe as we find it.

    Being able to predict things, or to describe them, however accurately, is not at all the same thing as understanding them. Predictions and descriptions in physics are often expressed as mathematical formulae. Suppose that I memorise the formula from which I could, if I had the time and inclination, calculate any planetary position that has been recorded in the astronomical archives. What exactly have I gained, compared with memorising those archives directly? The formula is easier to remember – but then, looking a number up in the archives may be even easier than calculating it from the formula. The real advantage of the formula is that it can be used in an infinity of cases beyond the archived data, for instance to predict the results of future observations. It may also state the historical positions of the planets more accurately, because the archives contain observational errors. Yet, even though the formula summarises infinitely more facts than the archives do, it expresses no more understanding of the motions of the planets. Facts cannot be understood just by being summarised in a formula, any more than by being listed on paper or memorised in a brain. They can be understood only by being explained. Fortunately, our best theories contain deep explanations as well as accurate predictions. For example, the general theory of relativity explains gravity in terms of a new, four-dimensional geometry of curved space and time. It explains how, precisely and in complete generality, this geometry affects and is affected by matter. That explanation is the entire content of the theory. Predictions about planetary motions are merely some of the consequences that we can deduce from the explanation.

    Moreover, what makes the general theory of relativity so important is not that it can predict planetary motions a shade more accurately than Newton’s theory can. It is that it reveals and explains previously unsuspected aspects of reality, such as the curvature of space and time. This is typical of scientific explanation. Scientific theories explain the objects and phenomena of our experience in terms of an underlying reality which we do not experience directly. But the ability of a theory to explain what we experience is not its most valuable attribute. Its most valuable attribute is that it explains the fabric of reality itself. As we shall see, one of the most valuable, significant and also useful attributes of human thought generally, is its ability to reveal and explain the fabric of reality.

    Yet some philosophers, and even some scientists, disparage the role of explanation in science. To them, the basic purpose of a scientific theory is not to explain anything, but to predict the outcomes of experiments: its entire content lies in its predictive formulae. They consider any consistent explanation that a theory may give for its predictions to be as good as any other, or as good as no explanation at all, so long as the predictions are true. This view is called instrumentalism (because it says that a theory is no more than an “instrument” for making predictions). To instrumentalists, the idea that science can enable us to understand the underlying reality that accounts for our observations, is a fallacy and a conceit. They do not see how anything that a scientific theory may say beyond predicting the outcomes of experiments can be more than empty words. Explanations, in particular, they regard as mere psychological props: a sort of fiction which we incorporate in theories to make them more memorable and entertaining. The Nobel prize-winning physicist Steven Weinberg was in an instrumentalist mood when he made the following extraordinary comment about Einstein’s explanation of gravity:

    “The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effects of gravitational fields on the motion of planets and photons [as in pre-Einsteinian physics] or to a curvature of space and time.”

    Weinberg and the other instrumentalists are mistaken. It does matter what we ascribe the images on astronomers’ photographic plates to. And it matters not only to theoretical physicists like myself, whose very motivation for formulating and studying theories is the desire to understand the world better. (I am sure that this is Weinberg’s motivation too: he is not really driven by an urge to predict images and spectra!) For even in purely practical applications, the explanatory power of a theory is paramount, and its predictive power only supplementary. If this seems surprising, imagine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology “oracle” which can predict the outcome of any possible experiment but provides no explanations. According to the instrumentalists, once we had that oracle we should have no further use for scientific theories, except as a means of entertaining ourselves. But is that true? How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one? Or to build another oracle of the same kind? Or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all, we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And if it predicted that the spaceship we had designed would explode on takeoff, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. Only then could we have any chance of discovering what might cause an explosion on takeoff. Prediction – even perfect, universal prediction – is simply no substitute for explanation.”

  16. …but can you be sure that that 4-dimensional curved spacetime is “real” on a deep fundamental level as opposed to being a model, an approximation to reality? Indeed, the inconsistency with QFT tells us that GR probably can’t be completely right.

    I fully appreciate and understand the value of explanation as opposed to mere prediction. A good theory gives you understanding about how things work, gives you intuition about what sorts of things can happen. However, we should not mistake the elegance and depth of our theories for evidence that they’re fundamental truth. They are far more than black boxes that tell us the results of experiments… but at the end of the day, they may just be useful models, and indeed with the theories we have today, we can be sure that some of them are exactly that. It may be that we’re all Kepler, just making better and better models.

  17. Rob,

    Our best theories will always be an approximation (or “model” if you wish) of reality. As we address new problems and find new solutions, this will lead to new questions — not some final destination where all is revealed. But no one was arguing that point — nothing will ever be “completely right.” What we can achieve is an ever-improving stream of explanations with infinite reach, subject only to the laws of physics, which impose no upper bound on what we can eventually understand, control, and achieve.

  18. Doesn’t there need to be a distinction between a higher-level explanation being “in principle derivable from” a lower-level explanation, and a higher-level explanation being “consistent with” a lower level explanation? The first demands that the lower-level or fundamental explanation (or laws) be complete: that there are no other laws or causes not incorporated into the low-level explanation necessary to explain the higher-level phenomena. The second only demands that, whatever explanation is actually required for the high-level phenomena, it cannot be in contradiction with the lower-level laws. The consistency criterion seems to me to be much less demanding, and perhaps less controversial than the completeness criterion, though Braden B points to a paper that may provide an example challenging the viability of the consistency criterion. If you don’t keep these two different ways of thinking about what reductionism demands straight, I’m not surprised that the conversation seemed muddled and unsatisfactory.

  19. Although my head tells me to be a reductionist, I think that there is one way in which the anti-reductionist argument can be made logically sound *WITHOUT* appealing either to supernatural intervention from above (which includes, given our current state of knowledge, the ill-defined “free will”), or to a miracle from below in which the macroscopic laws contravene or contradict the microscopic ones. The loophole is this:

    Consider whether it is possible that the macroscopic laws are completely consistent with the microscopic ones… but that the microscopic laws permit more than one macroscopic history to emerge from the same microscopic conditions. Why is this odd-sounding idea even worth considering? Let me offer a couple of reasons, and even a possible mechanism.

    First, we know it does happen at the microscopic level: every quantum event is inherently indeterministic (even if you subscribe to MWI, you have no way of predicting which universe you will end up in). Of course, conventional wisdom is that quantum indeterminacy washes out by the time you get to the macroscopic, but it’s worth reminding ourselves that we don’t actually have a good explanation for how — or at what scale — this actually happens, i.e. the infamous Measurement Problem (and MWI is no less flawed here than is Copenhagen).

    Second, when you stop to think about it, multiverse theories of cosmology are already saying this, in more than one way. Whether it’s symmetry breaking in the early universe, or inflation blowing up quantum fluctuations to macroscopic scale, modern cosmology implicitly depends on more than one macroscopic outcome emerging from the same microscopic initial conditions.

    So how could this work in conditions less extreme than the birth of a new universe? One admittedly speculative mechanism could be the interaction of strong mixing and quantum events including random fluctuations. I suggest that under certain very specific circumstances, a quantum fluctuation can be “captured” by strong mixing, and be blown up to the classical level. This is essentially what happens with inflation, and it could happen on a much smaller scale too. Now in nearly all cases the quantum event won’t interact with strong mixing; and in those that do, in nearly all cases the strong mixing won’t be unstable and will die out; but if the quantum event happens close enough to the chaotic boundary of a complex system, it could push the system into a different macroscopic outcome. (As an analogy, consider what happens with Hawking radiation when a pair of virtual particles is created just at the event horizon of a black hole). A toy numerical illustration of this kind of amplification is sketched after this comment.

    So to summarize, if this is feasible then we have:
    – Macroscopic and microscopic laws that are completely consistent
    – Macroscopic laws that can, in principle, be derived from the microscopic
    – Macroscopic *outcomes* that can, in certain very specific instances, at best be predicted only probabilistically from the microscopic laws
    – Paradoxically, macroscopic outcomes that can be deterministically predicted by macroscopic laws, at least so long as another mini-inflation event doesn’t kick them from one semi-stable state to another

    …and best of all, we even have the largest possible working example in the form of inflation, where the classical evolution of the universe we find ourselves in is the deterministic outcome of quantum fluctuations blown up to cosmic scale.
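A toy numerical illustration of the amplification step only (purely schematic: the chaotic logistic map stands in for “strong mixing”, and the tiny perturbation stands in for a quantum fluctuation):

# Two copies of a chaotic system ("strong mixing"), differing only by a
# perturbation far smaller than anything macroscopically resolvable.
def logistic_trajectory(x0, r=3.9, steps=80):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

x0 = 0.4
epsilon = 1e-12   # stand-in for a quantum-scale fluctuation
unperturbed = logistic_trajectory(x0)
perturbed = logistic_trajectory(x0 + epsilon)

# The two histories track each other for a while, then diverge to
# completely different macroscopic values.
for step in (0, 20, 40, 60, 80):
    print(f"step {step:2d}: {unperturbed[step]:.6f}  vs  {perturbed[step]:.6f}")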

  20. I agree with Rob. Also, I don’t understand Mike: My consciousness only observes one reality at a time, not “all of them”. What scientific theory tells me which one I am going to be in next (other than giving me a probability distribution)?

  21. Low Math, Meekly Interacting

    I think the only justifiable position is an agnostic one, since, obviously, there’s no hard data either way. But, since we have to have provisional biases to even choose an angle of attack, so be it. Let the reductionists and the holists (or whatever) duke it out and see who’s right.

    What makes me a tad uncomfortable is the confidence displayed on either side. I just think it would be really interesting if there truly were “new laws” that drive emergence, rather than the more prosaic perspective of the reductionists. I happen to think the reductionists are probably right. However, the problem of emergence does lead to the concern that being right in this context does one little good, i.e. if the computer that can realistically model a cell starting from Schrödinger’s equation must be as complex as the cell itself, such an approach is rather pointless, and you’re stuck having to just observe the cell directly. Even if there were no “fundamental” laws of complexity, but the holists came up with better models of complexity that were more general and predictive than what we have now in their pursuit of such laws, being “wrong” doesn’t matter.

    What’s great is that all of this is testable. For that very reason alone, it’s a worthwhile debate.

  22. Interesting. I certainly don’t believe in wave function collapse in the traditional sense. It seems to me it’s an old-fashioned way of talking about decoherence. What determines the manner of decoherence is a bunch of accidents, and it seems to me these historical circumstances can be viewed as probabilistic events in the manner described by Bohr.

  23. Low Math, Meekly Interacting

    I guess anyone hoping to be persuasive in a debate like this must play devil’s advocate in the mirror and try to answer this question for themselves: how is it that there isn’t even a hint of the simplest self-organizing system evident in Schrödinger’s equation? Nor in Newton’s laws of motion, for that matter. How do you get from Schrödinger’s equation to ANY microscopic system that displays self-organizing behavior? Why, really, is this so hard? Saying it’s too complex is a tautology. So what’s the answer?

    Maybe it’s just due to lots of accidents which are hard to keep track of, but not at all profound. But how do we KNOW that?

  24. Dave,

    “My consciousness only observes one reality at a time, not “all of them”. What scientific theory tells me which one I am going to be in next (other than giving me a probability distribution)?”

    Generally, the MWI takes the view that you can’t predict with certainty which world you’ll end up in, since there was no reason “why” you ended up in this world rather than another – you end up in a vast number of quantum worlds. It is an artifact of your brain and consciousness being differentiated that makes your experiences seem singular and random at the same time. The randomness apparent in nature is a consequence of the continual differentiation into mutually unobservable worlds. And, since reality is quantum, and not classical, even if you knew the initial positions of everything in “your world” from some arbitrary “beginning”, the randomness emerging as a consequence of such differentiation would still exist.
