242 | David Krakauer on Complexity, Agency, and Information

Complexity scientists have been able to make an impressive amount of progress despite the fact that there is not universal agreement about what "complexity" actually is. We know it when we see it, perhaps, but there are a number of aspects to the phenomenon, and different researchers will naturally focus on their favorites. Today's guest, David Krakauer, is president of the Santa Fe Institute and a longtime researcher in complexity. He points the finger at the concept of agency. A ball rolling down a hill just mindlessly obeys equations of motion, but a complex system gathers information and uses it to adapt. We talk about what that means and how to think about the current state of complexity science.

David Krakauer

Support Mindscape on Patreon.

David Krakauer received his D.Phil. in evolutionary biology from Oxford University. He is currently President and William H. Miller Professor of Complex Systems at the Santa Fe Institute. Previously he was at the University of Wisconsin, Madison, where he was the founding director of the Wisconsin Institute for Discovery and the Co-director of the Center for Complexity and Collective Computation. He was included in Wired magazine's list of "50 People Who Will Change the World."

9 thoughts on “242 | David Krakauer on Complexity, Agency, and Information”

  1. Re: Maximum feature emergence between low entropy and low Kolmogorov complexity

    Sean,

    I noticed from this podcast with David Krakauer and from a Gifford (?) lecture you gave some time ago that the question of how to analyse the arc of the complexity spectrum is something that interests/bugs you. I have written a paper which does that.

    In a nutshell, if a model or classification is defined as a partition of the Cartesian space of the underlying variables, then the optimal model is one where the sample data points are concentrated in small components and rare in large components, in a way that can be defined explicitly in an entropy expression similar to relative entropy. Essentially the expression is optimised where the derived variable entropy is low (between components) and the underlying variable entropy is high (within each component). The paper shows that artificial neural networks, for example, maximise this expression, and that is why they work. I have implemented a model search algorithm, based on a statistic similar to mutual entropy, which also maximises the entropy expression.
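    A minimal sketch of one way to read that criterion, assuming a relative-entropy-style score over component probabilities and volumes (the function name and the exact formula below are illustrative guesses; the paper may define its expression differently):

        # Hypothetical illustration: score a partition by how strongly data
        # concentrate in components that occupy little of the variable space.
        # This is an assumed formalization, not necessarily the paper's.
        import numpy as np

        def partition_score(counts, volumes):
            """Return sum_c p(c) * log(p(c) / v(c)) for a partition.

            counts  : sample points falling in each component
            volumes : fraction of the underlying space each component occupies
            The score grows when points are concentrated in small components
            and rare in large ones, in the spirit of relative entropy.
            """
            counts = np.asarray(counts, dtype=float)
            volumes = np.asarray(volumes, dtype=float)
            p = counts / counts.sum()
            mask = p > 0  # treat 0 * log(0) as 0
            return float(np.sum(p[mask] * np.log(p[mask] / volumes[mask])))

        # Most points land in a component covering only 10% of the space:
        print(partition_score([90, 10], [0.1, 0.9]))  # ~1.76, strong concentration
        # Point density matches component volume, so no concentration is detected:
        print(partition_score([10, 90], [0.1, 0.9]))  # 0.0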

    The paper and the implementations are at my (non-commercial) website greenlake.co.uk.

    Cliff

  2. Pingback: Sean Carroll's Mindscape Podcast: David Krakauer on Complexity, Agency, and Information - 3 Quarks Daily

  3. I greatly enjoy the Mindscape podcast in general, and I think Sean did his best to make this conversation fruitful, but unfortunately, Krakauer is a terrible interviewee. Yes, there is a difference between the study of things that do not encode information about the world and the study of things that do, but beyond that, what are the similarities across complex systems such that there could be anything called “complexity theory”?

    Sean asked several good questions — about emergence, downward causation, and so on — and Krakauer’s responses were little more than dodges. Consider Krakauer’s response to Sean’s question about strong emergence and new laws of physics at the macro level that influence what is happening at the micro level:
    “I’m against it and I’ll explain why I’m against it …. it’s greedy. Those new laws of physics are called English literature, [laughter] or musical composition or metaphysics or carpentry. And it’s not a new law of physics, it’s a new theory. And it might have laws in it, I’m not sure, may have rules in it, might be a little bit more modest….”

    Just no. You can say we don’t really know the answer to the question, that is, we don’t know how macro states such as the mental state a poem puts me in might lead to physical changes in my brain and body, but to say, “well, it’s greedy to try to answer the question, we have English literature and what not!” is empty rhetoric, and rhetoric that won’t work on anyone paying attention.

    Another frustrating tendency Krakauer has is to hide behind names of other authors. When you are asked a question on a podcast, you have to answer it, not bring up 17 people who may or may not have had anything useful to say, without ever explaining what it is that they actually said. The name-dropping really came off as smoke and mirrors.

  4. Ms Levy — Your comment about name-dropping reminds me of a remark by Mark Solms (cofounder of neuropsychoanalysis) in a podcast – something along the lines of
    “If I were Robert Sapolsky, I’d not only give you some great answers to your question, I’d also refer to a dozen experts and tell you what THEY’d say.
    “I don’t have enough time for that, so I’ll just tell you what I think…”

  5. Thanks for reminding us of your Gifford Lectures! Diving into them now.
    One of my favorite (and most re-read) metaphysical inspirations is Alfred North Whitehead’s GLs from a century ago, the series painfully re-edited into the 1978 corrected edition of “Process and Reality.”

    I must rework my ideas of complexity in light of Prof. Krakauer’s dialogue with Sean! Whatever is most paradoxical, most surprising, and most broad in scope is (to my taste) exactly the right place to start in ‘scoping out’ any field. And that’s just what I see in Krakauer’s analysis, branching out specifically from his insistent and colorful rejection of “strong emergentism.”

    GREEDY is exactly the right word for the aggression of intellectual imperialists! Perhaps to be supplemented by “arrogant,” “bigoted,” “infantile,” “pathetically condescending…”
    Rafael Núñez splendidly points to the intellectual crimes of Kronecker against Cantor in “Where Mathematics Comes From.” We could easily add the crimes of Heisenberg/Bohr against Schrödinger and Einstein (as illuminated by Mara Beller’s merciless dissection of Bohr’s public bullying in “Quantum Dialogue”).

  6. (A little tongue in cheek here, but I think it should be mentioned.)

    Some galaxies think; therefore, they are!

    Complexity theory should (also) in some way bring greater clarity to the apparent ontological and epistemological disconnects:

    Das Ding an sich (the thing-in-itself), quantum uncertainty, incompleteness, More is Different…

    The ineffable aspects of reality, i.e., those beyond representational access, foreclose complete before-the-fact prediction by any reduced model (digital or otherwise).

    The complexity discussion seems to focus on biological complexity and on emergent representational calculators able to make better and better predictions.

    Nevertheless, it seems a reasonable speculation that biological complexity is in service to what appear to be fixed overarching physical principles, and to the locally emergent manifestations of those principles.

    That said, thanks very much for the discussion; I really appreciate the erudition and effort. More is better!

  7. The discussion was very enjoyable and fascinating, even though I am certainly not a mathematician or theoretical physicist! However, as an Earth scientist I am equally allured by complexity and emergent patterns. I am also a fan of the concept of SFI and the kind of work done there. As a suggestion for a future guest, Sean might invite Eric Smith, who was there a while ago and who wrote a tome with Harold Morowitz on the origins of life on Earth. That would entail discussions for more than one podcast, I suppose!

