Physicalist Anti-Reductionism

In a philosophical mood at the moment, because I’m about to head to Montreal for the Philosophy of Science Association biennial meeting. Say hi if you’re in the neighborhood! I’m on a panel Thursday morning with Nick Huggett, Chris Wüthrich, and Tim Maudlin, talking about the emergence of spacetime in quantum gravity. My angle: space is obviously not fundamental, though time might be.

Here’s a Philosophy TV dialogue between John Dupré (left) and Alex Rosenberg (right). They are both physicalists — they believe that the world is made of material things (or fermions and bosons, if you want to be more specific) and nothing else. But Dupré is an anti-reductionist, which is apparently the majority view among philosophers these days. Rosenberg holds out for reductionism, and seems to me to do a pretty good job at it.

John and Alex from Philosophy TV on Vimeo.

To be honest, even though this was an interesting conversation and I can’t help but be drawn into very similar discussions, I always come away thinking this is the most boring argument in all of philosophy of science. Try as I may, I can’t come up with a non-straw-man version of what it is the anti-reductionists are actually objecting to. You could object to the claim that “the best way to understand complex systems is to analyze their component parts, ignoring higher-level structures” but only if you can find someone who actually makes that claim. You can learn something about a biological organism by studying its genome, but nobody sensible thinks that’s the only way to study it, and nobody thinks that the right approach is to break a giraffe down to quarks and leptons and start cranking out the Feynman diagrams. (If such people can be identified, I’d happily join in the condemnations.)

A sensible reductionist perspective would be something like “objects are completely defined by the states of their components.” The dialogue uses elephants as examples of complex objects, so Rosenberg imagines that we know the state (position and momentum etc.) of every single particle in an elephant. Now we consider another collection of particles, far away, in exactly the same state as the ones in the elephant. Is there any sense in which that new collection is not precisely the same kind of elephant as the original?

Dupré doesn’t give a very convincing answer, except to suggest that you would also need to know the conditions of the environment in which the elephant found itself, to know how it would react. That’s fine, just give the states of all the particles making up the environment. I’m not sure why this is really an objection.

This is purely a philosophical stance, of course; it means next to nothing for practical questions. Nor does the word “fundamental” act in this context as a synonym for
“important” or “interesting.” If I want to describe an elephant, the last thing I would imagine doing is listing the positions and momenta of all its atoms. But it’s worth getting the philosophy right. I could imagine hypothetical worlds in which reductionism failed — worlds where different substances were simply different, rather than being different combinations of the same underlying particles. It’s just not our world.

33 thoughts on “Physicalist Anti-Reductionism”

  1. in the draft of a paper to be:
    “…The space-expansion model privileges the atomic units, the only ones where physical laws are known to hold, but a fundamental question has not been answered yet: how can we distinguish between a space expansion and a matter contraction, when both can only appear to us as a space expansion?”
    (to be answered later this year, by a friend. The answer has a physical meaning, not a philosophical one.)
    Time does not exist by itself; it is relative to matter «size». Wow… is time relative? And is matter size not absolute?

  2. Amos Zeeberg (Discover Web Editor)

    Sean, isn’t there some randomness introduced by quantum mechanics, based on Heisenberg uncertainty? So you can never really have “another collection of particles, far away, in exactly the same state as the ones in the elephant,” right?

    And suppose that the tiny quantum randomness could be exaggerated by the complexity of the system of particles in the elephant, so that the behavior of the entire elephant system would not be predictable based on what you know about its initial conditions? Basically, quantum mechanics creates a tiny sliver of unpredictability, and chaos magnifies that tiny difference into a large difference at bigger scales and later times. If two elephants start out exactly the same, a quantum flip of spin in one electron could lead to one elephant dying of a heart attack, while the other lives a long, happy life.

    If that’s the case, then maybe some higher-level analysis might be able to provide more accurate predictions than the reductionist approach.

  3. Amos, to get it right you should really replace “positions and momenta of all the particles” by “the quantum state of all the particles.” That’s a very precisely defined thing. Uncertainty only comes into the game when you are trying to measure some observable that doesn’t have a definite value in the quantum state. All very interesting and crucial, but not directly important for this reductionist/antireductionist debate, I think. If you got the quantum states right, the elephants would be indistinguishable.

  4. Fools rush in. Be warned that I don’t know what I’m talking about.

    However, I think part of the objection to reductionist thinking is that some areas of study are non-deterministic, and the behaviour of groups is one such area.

    For instance, I’ve heard objections to studies that incorporate Monte Carlo simulations. The objection was something to the effect of: you had to run 1000 simulations to get statistically significant data, yet how do you know that 1000 runs are enough? How do you prove that more simulations would not be better? Many of these studies use empirical measures and findings, so they find that actually 100 are enough, but then they perform a whole bunch more just to “make sure”. However, the measures are not fundamental, and the study itself cannot prove that it was “enough”, or even too much for that matter.

    Another thing. There is a well-known hierarchy of sciences, from most fundamental (physics) to chemistry, to biology, etc., with some thinking that the social sciences are the most “systems” based and complex to study. There is some sort of linkage to the scale of the phenomena being studied too although I don’t want to make too much of that.

    Well, I read E.O. Wilson’s Consilience. It contained a powerful criticism of the social sciences as being self-referential and not sufficiently “scientific.” This is pretty strong stuff. Some would take away the message that the social sciences are not scientific at all (I don’t think this was Wilson’s intent). Nor do I think that the attitude that the social sciences are second-class is rare. The term “soft sciences” can be an indictment, however subtle.

    Perhaps some of those opposed to reductionist thinking are reacting in some way against criticisms of the social sciences? I think that some or most anti-reductionist thinkers believe the social sciences have gotten a bad rap.

  5. Your “sensible reductionist perspective” is typically what is meant by “physicalism”: everything supervenes on the physical.

    The anti-reductionism comes in many flavors, but usually what is meant is (at least) a rejection of the old logical positivist picture that higher level theories (e.g. biology) can be derived from physics.

  6. Re: #6:
    But that view, regardless of its philosophical legitimacy, simply isn’t factually supported. Look at the computer simulations that accurately predict protein folding structure from numerical evaluations and approximations of physics equations and principles. It’s become increasingly clear over the past few decades that any system can be derived from physics, if you have a big enough computer.

  7. @Brian: “Yet how do you know that 1000 runs are enough?”

    The same way scientists doing calculations always figure out how much is enough: convergence. (Comparison to observation, which you mention, is another way.) This is why cosmological numerical simulation papers, for example, always talk about a few runs they did with higher resolution, or a larger box size. If the results change a lot, then you don’t have enough resolution/volume, and the results can’t be trusted. If they converge, then you’re doing okay. (The model could still be wrong, but that’s a different question.) This isn’t new, and has nothing to do with Monte Carlo in particular. You have to do the same thing when deciding how far out to Taylor expand something, where to truncate any asymptotic approximation, how many powers of the fine structure constant to use in your QED calculation…

    All our calculations are model-dependent. There’s nothing special about Monte Carlo, except that it’s a much easier way of dealing with many complex situations/probability density functions. If you’re worried about technical things like errors in the coverage of your confidence intervals, there has been plenty of work by statisticians to figure out how that scales with N, so yes, you can tell how much is enough.
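    To make the convergence criterion concrete, here is a minimal sketch (my illustration, not the commenter’s; estimating π by random sampling is just a stand-in target): keep doubling the number of Monte Carlo samples until two successive estimates agree to within a tolerance.

```python
import random

def mc_estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that fall inside the quarter circle, times 4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def converged_estimate(tol=0.01, n0=1000, max_n=10**7):
    """Double the sample size until successive estimates agree to
    within `tol` (a simple convergence check), or a safety cap is hit."""
    n, prev = n0, mc_estimate_pi(n0)
    while n < max_n:
        n *= 2
        cur = mc_estimate_pi(n, seed=n)  # independent run at larger n
        if abs(cur - prev) < tol:
            return cur, n
        prev = cur
    return prev, n

est, n_used = converged_estimate()
```

    The same pattern — rerun at higher “resolution” and compare — is what the simulation papers mentioned above are doing; for Monte Carlo the run-to-run scatter shrinks like 1/√N, so the check eventually passes.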

  8. My “sensible reductionist perspective” is a lot stronger than “physicalism.” I said “objects are completely defined by the states of their components,” not “objects are completely defined by their physical states.”

    It seems clear, under reasonable construals of “completely defined,” that if objects are completely defined by the states of their components, then any accurate higher-level picture would be completely dependent on what was happening with the components. That doesn’t mean that you can “derive” every interesting higher-level theory in practice, but it means that every higher-level theory is simply a useful repackaging of what’s going on at the lower level.

  9. I’m reminded of P. W. Anderson’s article “More is Different”. He takes as a given the “reductionist hypothesis”, which Anderson defines in the article as the idea that the workings of all things large and small are controlled by the same fundamental laws of physics. But Anderson argues against the “constructionist hypothesis”, which he defines as the idea that one could start from the fundamental laws and reconstruct the universe. In particular, Anderson emphasizes that larger scale structures may not obey the symmetries that occur in the fundamental laws, because of spontaneously broken symmetry. It’s a great article that I can’t do justice to in this summary; for anyone who hasn’t read it, it’s definitely worth looking up.

    Although Anderson characterizes his position as “reductionist” but not “constructionist”, it sounds a bit like the sort of “anti-reductionist physicalism” described by commenter #6 above.

  10. Sean, with regard to “objects are completely defined by the states of their components”, what about entanglement? In some cases you *can’t* separate the state of a multi-particle system into the states of its components.

    Although maybe I’m nitpicking, since we aren’t exactly running into macroscopic entangled states in our daily lives (hypothetical half-dead cats notwithstanding.)
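    The standard example behind this point (textbook quantum mechanics, not from the video) is a two-spin entangled state, which cannot be factored into separate states for the two particles:

```latex
\[
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl( |{\uparrow}\rangle_1 |{\uparrow}\rangle_2
  \;+\; |{\downarrow}\rangle_1 |{\downarrow}\rangle_2 \Bigr)
\;\neq\; |\phi\rangle_1 \otimes |\chi\rangle_2
\quad \text{for any single-particle states } |\phi\rangle_1,\, |\chi\rangle_2 .
\]
```

    Neither particle has a definite state of its own here; only the pair does, which is exactly the failure of “defined by the states of the components” that the comment points to.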

  11. @ Sean (#9), you did specify components, but in these debates it is usually implicit that the physical states in question are “micro-physical” states, which means that one is speaking of component “particles” (since the debates almost never address field theory). So I took your claim to be essentially the same as a commitment to supervenience on the micro-physical.

    Although I should admit that I don’t really understand what you mean when you say that “objects are completely defined.” I take it you’re not thinking of a conceptual (or linguistic) definition here. I presumed that you were speaking of the state of some complex macroscopic system, and requiring that such macroscopic states be fixed by the micro-physical states of the components (a claim I would agree with), but perhaps I misunderstand you.

    However, you and I know (though most philosophers involved in these reduction debates don’t) that this supervenience of composite states on the states of the components fails in the context of quantum mechanics (i.e., when we have entangled states). So interestingly, this sort of physicalism doesn’t strictly hold, but this fact is largely irrelevant for higher levels like biology and psychology (because the states of these systems are, for all practical purposes, determined by the states of their components).

    Most of the debate over reductionism comes in when we try to make clearer what counts as a “useful repackaging” and what counts as something genuinely novel. I tend to be on your side here, but I can see the force of the claim that oftentimes it is precisely the useful repackaging that’s doing the real explanatory work — and for this reason we should reject claims of explanatory reduction (i.e., that all explanations could in principle be eliminated in favor of physical explanations).

    @ Kevin (#7): No one denies that we can sometimes derive some things (at least in principle) from the underlying physics. The question is whether everything can be derived (or explained). Here’s a standard example to give you a sense of the worry (perhaps Dupré gives it in the video — I haven’t had time to watch it, which is why I tried to keep my earlier comment brief — but he does discuss it elsewhere):

    Suppose that some particular rabbit gets eaten by some particular fox. Someone studying the populations of these organisms might explain this by pointing to the fact that the fox population is particularly high, which makes it likely that any given rabbit will get eaten. A micro-physical account would be able to predict that that particular rabbit would be eaten by that particular fox. However, the population account tells us that even if the rabbit survived that particular encounter, it would be unlikely to continue to survive for long. The micro-physical account by itself doesn’t tell us that. Indeed, the micro-physics would be exactly the same if the fox population were low, but this one rabbit just happened to get very unlucky.

    The upshot is that it’s often very difficult (Dupré would say impossible) to say exactly what needs to be derived, or to give an account of how such derivations would go. Indeed, a careful look at the case of reducing thermodynamics to statistical mechanics (which is usually taken to be the paradigm case of a successful reduction) highlights the sort of difficulties involved.

    I should probably mention that I’m more on Rosenberg’s side than Dupré’s in this debate; I’m a much stronger defender of (some forms of) reduction than most other philosophers. But even I don’t think we get full-blown derivability or explanatory reduction.

  12. Neither elephants nor any other macro “objects are completely defined by the states of their components.” The components that “define” an elephant (if that concept even makes sense) change from moment to moment. Yet the elephant’s overall structure and behavior remain relatively constant.

    This is not to imply that there is some mysterious elephantness force that keeps an elephant together. But it is to say that to provide a reasonable scientific explanation/description of how elephants behave one must talk about more than an elephant’s components.

    You seem to be granting that. But in granting that you are admitting into your ontology higher level entities. The higher level entities are the entities that the explanations/descriptions of elephants refer to. Doing so is a rejection of pure reductionism. In other words, you are not a reductionist. That’s fine. But I wish you would acknowledge that instead of complaining about it.

    Here are some more examples. How would a reductionist explain/describe evolution? How would a reductionist explain/describe the election we held yesterday? How would a reductionist explain/describe our current economic situation?

    It’s not just a matter of saying it could be done but it’s too complex. The fact is that there are no concepts at the level of elementary particles that can be used in those explanations/descriptions.

    To talk about these phenomena in any meaningful way one must talk about entities whose behavior is best explained/described in terms of them as primitives rather than in terms of the behavior of their components. The components of a dollar bill really have nothing to do with how people treat it–although they have everything to do with how it deteriorates over time, i.e., how its physical environment treats it. These are very different things. Why do you find it so hard to acknowledge that?

  13. Sean why do you think that “space is obviously not fundamental”?

    @7:
    I don’t know where you got the idea that computers accurately simulate protein folding but it is completely false. The most sophisticated modeling programs struggle with even simple proteins and the results are very crude and unreliable.

    Furthermore, those programs are mostly based on empirical measurements and huge libraries of already empirically determined protein structures; actual physics plays a relatively minor role, so they certainly fail as examples of “biology derived from physics.”

    Now, I am not saying that it is impossible in principle, only that it hasn’t been done so far and won’t be done in the near future.

  14. Sean, I am curious whether philosophers have considered dualities, the bootstrap, and similar ideas. If you have a dual pair of theories, the roles of what is fundamental and what is composite switch between descriptions, and generically no one description is better than another. Seems to me the best way to sidestep this somewhat tedious issue.

  15. Al, just read Moshe’s comment — that’s most of the point of my talk. I think that some philosophers have thought along those lines, but it’s not common. A good number of them believe strongly that space is fundamental.

  16. I had in mind something even simpler: QFT in flat spacetime, where you can have solitons and fundamental quanta which presumably “make up” those solitons. But, which object is fundamental and which composite depends on the description. Different descriptions are more convenient in different situations, but none of them is more correct than the other. Emergence of space is related, but not precisely the same thing.

    (To get an intuitive picture of this, one has to first realize that the fundamental object of QFT is a quantum field, and point-like particles are a derived object, which is not always all that useful. But, this is a conversation for another time.)

  17. Sure, there are some great examples of soliton/particle dualities, which illustrate the basic point nicely. So I’ll begin my talk with the statement that “What is or is not fundamental is not fundamental.” Honestly I’m not sure what is fundamental, outside of maybe the Schrodinger equation (and there are plenty of equivalent formulations for that).

    But the most direct example is something like AdS/CFT, which makes the “space is not fundamental” point about as directly as you can imagine.

  18. Sounds like an excellent point to make, and not that easy to establish for people who are not used to it. Good luck!

  19. In agreement with TimG, I also think that the statement “objects are completely defined by the states of their components” does not account for quantum-mechanical phenomena such as entanglement!

    I think that quantum mechanics is evidence that good science does not have to be single-mindedly reductionist in its approach, and I have occasionally wondered how one might go looking for new high-level phenomena occurring in large collections of particles.

  20. Sean said:

    “I think that some philosophers have thought along those lines, but it’s not common. A good number of them believe strongly that space is fundamental.”

    The idea that space is not fundamental is common in Indian Buddhist philosophy. I’m not going to inventory the opinions of all schools of Indian philosophy but what comes to mind, off the top of my head…

    The first systematic form of Buddhist philosophy is called Abhidharma. It came into existence in India sometime BCE but the exact chronology is difficult to establish. The Abhidharmists agreed on general lines of method but disagreed on details. One major group, the Sarvāstivādins posited two theoretical entities which relate to what we normally talk about as “space”:

    – space (ākāśa)

    – the space-element (ākāśadhātu)

    They held that things like tables, people, houses, and cows are all composed of obstructive atoms of matter. The space-element was their explanation for any opening, expanse, or empty region between the solid material things. So in a room, the walls, roof, and floor are all made of obstructive atoms, while the middle of the room, the space, is composed of the space-element, which is also atomic but non-obstructive. (By the way, I’m not aware of any discussion of the space-element being specifically a gas.)

    Now, space (ākāśa) is not the same as the space-element (ākāśadhātu). The space-element is matter so it is displaced by other matter but space is immaterial. The Sarvāstivādins hold that space pervades all entities which enter into any spatial relationship. It is the necessary element which allows for any kind of spatial relationship to occur. It is equivalent to the idea of a container space in which things happen.

    Now, two groups of Abhidharmists reacted to the Sarvāstivādins: the Dārṣṭāntikas (which existed by at least the 2nd cent CE, probably earlier) and the Sautrāntikas (their major comprehensive work composed in the 5th century CE). Both held that space (ākāśa) is not an actual element of reality but just a way of speaking. They did not deny the space-element (ākāśadhātu), which is matter which can be obstructed by other matter but does not itself obstruct other matter. However, space, which is wholly immaterial, does not obstruct and is not obstructed, has no reality for them. So they denied the idea of a container space.

    To briefly talk of other groups, the Madhyamakas also deny that space is a fundamental element of reality. Same for the Yogācārins, who based some of their ontology on the Sautrāntikas. Overall, the idea that space is not fundamental is common in Indian Buddhist philosophy.

    (And time is fundamental to only a few Buddhist philosophers. The majority opinion is that it is not fundamental. It does not appear as an element of reality for any of the groups mentioned above.)

  21. As just a little aside, imagine an infinite collection of identical pairs of socks, say P[1], P[2], P[3], etc. Bertrand Russell is famous for, among other things, pointing out that without the Axiom of Choice there is no way to prove that one can select exactly one sock from each pair; i.e., that there is a function F on the collection of pairs such that for every n, F(P[n]) is an element of P[n].

    Now imagine a world whose micro-states are grouped into disjoint macro-states P[1], P[2], P[3] etc. Suppose one of the laws of the macro-world is: if the world is in macro-state P[n] it will proceed to macro-state P[n+1]. Reductionism requires this law should emerge from a deeper law; i.e. there should be a transition function T on the micro-states such that if the world is currently in a micro-state S that’s an element of P[n], then it will proceed to the micro-state T(S) which is in P[n+1]. If there were such a transition function from which the macro-law emerges, then one could use it to recursively define a choice function F on the family of macro-states; i.e. let F(P[1]) be any member of P[1], and let F(P[n+1]) = T(F(P[n])). In other words: the Axiom of Choice (at least this version of it) is a consequence of reductionism!

    Without the Axiom of Choice reductionism might fail.
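    In the commenter’s notation, the implication can be stated compactly (a restatement of the argument above, same symbols): reductionism supplies a micro-level transition function T, from which a choice function is defined by recursion,

```latex
\[
T(S) \in P[n+1] \quad \text{whenever } S \in P[n],
\]
\[
F(P[1]) := S_1 \ \text{(any fixed element of } P[1]\text{)}, \qquad
F(P[n+1]) := T\bigl(F(P[n])\bigr),
\]
```

    and induction gives F(P[n]) ∈ P[n] for every n — a choice function obtained without invoking the Axiom of Choice.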

    “Try as I may, I can’t come up with a non-straw-man version of what it is the anti-reductionists are actually objecting to.”

    Phil Anderson, one of the most prominent anti-reductionists, had something very concrete to object to. He testified before the Congress against funding the SSC.
