242 | David Krakauer on Complexity, Agency, and Information

Complexity scientists have been able to make an impressive amount of progress despite the fact that there is not universal agreement about what "complexity" actually is. We know it when we see it, perhaps, but there are a number of aspects to the phenomenon, and different researchers will naturally focus on their favorites. Today's guest, David Krakauer, is president of the Santa Fe Institute and a longtime researcher in complexity. He points the finger at the concept of agency. A ball rolling down a hill just mindlessly obeys equations of motion, but a complex system gathers information and uses it to adapt. We talk about what that means and how to think about the current state of complexity science.


Support Mindscape on Patreon.

David Krakauer received his D.Phil. in evolutionary biology from Oxford University. He is currently President and William H. Miller Professor of Complex Systems at the Santa Fe Institute. Previously he was at the University of Wisconsin, Madison, where he was the founding director of the Wisconsin Institute for Discovery and the Co-director of the Center for Complexity and Collective Computation. He was included in Wired magazine's list of "50 People Who Will Change the World."

0:00:00.0 Sean Carroll: Hello everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. Whenever we talk about complexity, which we do very often here at Mindscape, whenever we talk about complexity as a concept, there's a question that is being begged, which is, is there enough coherence and consistency and commonality between different manifestations of complex systems to legitimately talk about a field called complexity? I mean, there are things in the universe that are complex, but do they share enough ideas between them or features between them, that it makes sense to abstract from the individual things to talk about complexity as its own field of study? So today our guest is David Krakauer, who is president of the Santa Fe Institute. Santa Fe is, as you probably know, the world's leading research institute into complex systems. You'll not be surprised to hear that when faced with this question, David says, yes, there is enough commonality between complex systems to study complexity.

0:01:01.8 SC: But after that, it was very interesting because I've known David for a while. I'm a part-time faculty member at Santa Fe, but his definition of complexity and how he thinks about it was a little bit different than what I expected. Not to give away too much, but he really puts the ability of complex systems to in some sense reflect the world around them, to carry some information inside them about the rest of the world, and adapt to it as a central defining feature of complexity, which is fascinating to me because many discussions of complexity start with purely physical systems that don't do that. Hurricanes are supposed to be paradigmatic complex systems, and David is very clear. He says, nope, I don't count that. And that has very interesting implications for all sorts of things down the line.

0:01:52.6 SC: So it's a wide ranging conversation with a lot of name dropping. David knows the history of this field in and out, so you'll hear a lot of names. I encourage you to Google them either while listening to the podcast or afterward. But the field does have a fascinating history, and thinking about the questions that our predecessors faced is often very helpful in thinking about the questions we face right now.

0:02:18.4 SC: David started as an evolutionary biologist, and he still does that, but he is someone who absolutely walks the walk in terms of being interdisciplinary in the best possible way. This is something that makes sense if you have this complexity lens to look at the world through. David has made very interesting contributions, not just to biology, but to social systems, to purely mathematical logical computational puzzles, to questions like, how do we think about COVID and other pandemics from a complex systems perspective?

0:02:52.5 SC: What is the origin of life? What is the nature of intelligence? How does artificial intelligence fit into all these things? Far more topics than we could possibly cover in just a single short podcast. But David has all sorts of presence on podcasts elsewhere. So you can follow him if you're not already familiar with his work. Let me throw in the occasional reminder that here at Mindscape you can become a Patreon supporter by going to patreon.com/SeanMCarroll. Doing that both makes me feel good and makes you feel good, and you get the ability to listen to the podcast ad free and also ask questions at the monthly Ask Me Anything episodes. So it's a good thing to do; we're trying to build a community of people who can talk to each other over at Patreon. So look into that if you want to support Mindscape just a little bit. And with that, let's go.

[music]

0:03:51.3 SC: David Krakauer, welcome to the Mindscape Podcast.

0:04:00.7 David Krakauer: Great to be with you.

0:04:02.1 SC: I guess the natural first question to ask here would be, What is complexity? I'm sure you've had that question before, but instead, let me ask, Is there such a thing as complexity? There are obviously complex things, but do those things that are complex possess enough traits in common that it's worth having a field called complex systems studies?

0:04:27.2 DK: Yes, yes, yes. [laughter] And in a nutshell, we study teleonomic matter. We study matter with purpose, and that's what distinguishes it from physics. The way I like to say it, just briefly, and we'll get into this, I hope, is: if the origins of modern physics, or at least mathematical natural science, are the scientific revolution of the 17th century, the origins of complexity science are the industrial revolution, the era of design, the era of machines both made by humans, looms, clocks, steam engines, but also evolved. And all of the ideas that were embryonic in that period have been developed up through our own time.

0:05:23.2 SC: So somewhere between statistical mechanics and evolution and democracy you get complexity emerging out of that.

0:05:33.0 DK: Yeah. Well, interesting. I hadn't had the social dimension, but you easily could. For me the four legs of the table are statistical mechanics and entropy, evolution, control in other words... and that's sort of worth talking about, and computation. And all of that emerges essentially in a period between 1840 and 1870. And I have in mind people like Boole and Babbage and Maxwell and Russel Wallace and Darwin and so forth. And all of those ideas that were there in embryo are what we're now fleshing out.

0:06:23.4 SC: As you know I'm interested in the social dimension as well as other things. But that's for yet another podcast. It is interesting. Well, so I wanna get into the history, but given that I knew you were gonna say yes when I asked, is there something called complexity? Okay, now I can ask what is it, what are the features that we have in mind when we think about complex systems?

0:06:45.9 DK: Yeah, so the important point is to recognize that we need a fundamentally new set of ideas where the world we're studying is a world with endogenous ideas. We have to theorize about theorizers and that makes all the difference. And so notions of agency or reflexivity, these kinds of words we use to denote self-awareness or what does a mathematical theory look like when that's an unavoidable component of the theory. Feynman and Murray both made that point. Imagine how hard physics would be if particles could think. That is essentially the essence of complexity. And whether it's individual minds or collectives or societies, it doesn't really matter. And we'll get into why it doesn't matter, but for me at least, that's what complexity is. The study of teleonomic matter. That's the ontological domain. And of course that has implications for the methods we use. And we can use arithmetic but we can also use agent-based models, right? In other words, I'm not particularly restrictive in my ideas about epistemology, but there's no doubt that we need new epistemology for theorizers. I think that's quite clear.

0:08:12.9 SC: You mentioned Murray, that's of course Murray Gell-Mann who played a huge role in founding the Santa Fe Institute after a long career of kind of poo-pooing anyone who was not doing elementary particle physics. [laughter]

0:08:25.3 DK: Yeah. But you see there were... Yes, as you know, we both knew him well, there were two Murrays. There was the Murray of Physics, the phenomenologist, and then there was the Murray, hoarder, collector, taxonomist, natural historian of coins, ties and birds. And that's where his complexity interest started to emerge in his obsession with the profusion of diversity in those cultural and biological domains.

0:09:01.8 SC: It's interesting though, that you are emphasizing this teleological aspect of things. I would've thought that something like the Milky Way galaxy could be thought of as a complex system. Are you carving things out so that that doesn't count, or are you just pointing out that one of the most salient features of complexity is that there's this sort of reflectiveness, that the system we're studying is as complex as we are?

0:09:28.3 DK: No, I don't think it counts. I think it's not useful. There was in the early days at SFI this desire to distinguish between complex systems and complex adaptive systems. And I think that's just become sort of irrelevant. And in order for the field to stand on its own, I think we have to recognize that there is a shared very particular characteristic of all complex systems. And that is they internally encode the world in which they live. And whether that's a computer or a genome in a microbe, or neurons in a brain, that's the coherent common denominator, not self-organizing patterns that you might find, for example, in a hurricane or a vortex. Those are very important elements, but they're not sufficient.

0:10:26.4 SC: Okay. That's actually very interesting. And I did not know that you would say that because self-organization obviously plays a huge role, is one of the first phrases that comes to mind when you ask many people about complexity. So is the stance that you just described heterodox within the field, or is this the emerging consensus?

0:10:47.2 DK: Well again, it's worth talking about the history because, where does the word come from in the first place? And I don't mean the etymology of the word complexity, which would be slightly tiresome, but its use in the sense that we deploy it. And the original paper that was influential on us is the 1948 Warren Weaver paper, which is called Science and Complexity. And in that paper, Weaver makes this interesting distinction between the sciences of simplicity, which is what you have studied most of your career, Sean, that is the physical world; the sciences of what he called disorganized complexity, statistical mechanics; and then organized complexity, which is life. And he actually goes so far as to point out that he thinks that the appropriate methodologies there will be computation. It's quite prescient in '48 given that no one had a computer unless you were a large government.

0:11:46.7 DK: So there's that, and there's another paper written in '62 by Herb Simon, The Architecture of Complexity, and that you might know a bit better. And in that paper, it's the systems view of complexity, quite unlike the Weaver view, which has to do with in some sense frozen accidents and this balance between order and disorder. In the Simon perspective, it's about systems of partially decomposable hierarchies of functional units. And that for him was complexity. So between Simon and Weaver, if you just add Kolmogorov '68, [chuckle] which was algorithmic complexity...

0:12:35.1 SC: What is that?

0:12:37.0 DK: Which is that the Simon and Weaver world is incompressible and requires algorithms with long description lengths. Then you get, essentially in a nutshell, what we now think of as the complex domain.

0:12:49.5 SC: Okay. That's very, very helpful. The history is something that I need to learn more about. When I was preparing for this podcast, I did stumble across a map of the timeline of complex systems research. Are you familiar with this map?

0:13:04.7 DK: I am. I don't much like it, but yeah, I am familiar with it. Yeah. [laughter]

0:13:08.2 SC: It was helpful. There were a lot of names there 'cause when we were emailing back and forth, you listed some names about half of which I recognized, so I had to look up to see where the other names fit in. But the Weaver thing, let's dwell on that a bit 'cause it's one of my favorite talking points. The idea that if you have a system... I just think about a cup of coffee, cream and coffee mixing together, and thinking about the entropy of it. If it's very, very low entropy and organized, it can't be complex. There's just not enough room to move around. And if it's very, very high entropy and disorganized, it also can't be complex because it's already smeared out in equilibrium, and complexity lives in between. Is that a point first made by Weaver in the '40s?

0:13:54.2 DK: Yeah, as far as I know. I don't know if he was the first, but he certainly made that point very clearly. And of course it was taken up by the sort of Brussels school in Prigogine's work on dissipative systems, as you say, these long-lived transients that seem to defy Boltzmann's intuitions about the second law. And I think it was Phil Anderson in '72 who told us why that wasn't complexity. And the point being, he wrote extensively on this topic actually, which was that you need to somehow balance diversity, chaos, structure with stability. And if you're gonna do something adaptive, computational, inferential or functional, you need a memory, you need to store information somehow. And that means you have to capture it from the environment and store it. And I think one of the limitations, as you say, of these interesting transient structures that you observe as your cortado [laughter] cools down is that they're not very good at storing information. And they want to go the way they want to go. They have their own preferred structures and you'd have to establish very fancy boundary conditions to produce anything of really lasting interest. So Phil's point was somehow that has to be condensed into some kind of equilibrium structure, so as to store information reliably. And so yes, but it's, if you like, a kind of fancy initial condition before you get to complexity.
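
To make Sean's coffee-cup picture something you can poke at, here is a minimal Python sketch of a "cream and coffee" grid that starts perfectly separated and is randomly stirred. As a crude stand-in for the complexity that "lives in between," it reports the zlib-compressed size of a coarse-grained snapshot, an "apparent complexity" proxy in the spirit of work Sean has described elsewhere. The grid size, mixing rule, block size, and quantization levels are all invented for illustration, and whether the number actually rises and then falls depends on the dynamics you plug in.

```python
# Toy "coffee and cream" mixer. Hypothetical illustration only: we coarse-grain
# the grid and use the zlib-compressed size of the coarse-grained picture as a
# rough "apparent complexity" proxy.

import random
import zlib

import numpy as np

rng = np.random.default_rng(0)

N = 64                       # grid is N x N
grid = np.zeros((N, N), dtype=np.uint8)
grid[: N // 2, :] = 1        # "cream" on top, "coffee" below


def mix(grid, n_swaps=20000):
    """Randomly swap adjacent cells: a crude stand-in for diffusion."""
    n = grid.shape[0]
    for _ in range(n_swaps):
        i, j = rng.integers(0, n, size=2)
        di, dj = random.choice([(0, 1), (1, 0), (0, -1), (-1, 0)])
        i2, j2 = (i + di) % n, (j + dj) % n
        grid[i, j], grid[i2, j2] = grid[i2, j2], grid[i, j]


def apparent_complexity(grid, block=4, levels=4):
    """Compressed size (bytes) of a coarse-grained, quantized snapshot."""
    n = grid.shape[0]
    coarse = grid.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    quantized = np.floor(coarse * (levels - 1) + 0.5).astype(np.uint8)
    return len(zlib.compress(quantized.tobytes()))


for step in range(15):
    print(step, apparent_complexity(grid))
    mix(grid)
```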

0:15:32.1 SC: So I was going to ask about the word adaptive in the phrase complex adaptive systems because I have the feeling that in let's say the '90s, in the early days of SFI that was just part of the phrase complex adaptive systems. But it seems to have dropped out a little bit. And I was wondering whether or not there was less emphasis on that. But maybe what you're saying is that now we just take it for granted that of course we're studying adaptive systems.

0:15:58.2 DK: Yeah, I think it's the latter. I think it's the latter. I mean, one thing for us to chat about I think is really important. It's actually a paper I mentioned to you that I'm writing, which is from action principles in physics to adaptation in complex systems through to agency. And I think that's quite a natural progression. And it solves this problem that has driven me absolutely batty over the years where people say, well, why is a ball rolling down a hill not adapting? And it's so frustrating [laughter] that we have to just put that to bed. And we all know right [chuckle] from physics about the relationship between differential equations of motion and extremizing some functional to derive the fixed points of our action. And this is the whole basis of the beauty of physics, the path of light and the path of objects through space and so on. From Maupertuis, Lagrange, Hamilton, Schwinger.

0:17:04.2 DK: Adaptation is not an action, [chuckle] and one of the things that happened in the '40s which is worth reflecting on is the landscape metaphor was introduced to the study of organized complexity, as we call it in the Weaver language, by Sewall Wright and Waddington, the adaptive landscape, which I'm sure everyone who's listening knows. And this is his idea: instead of having a ball that rolls down a hill, you have a ball that rolls up one. [chuckle] And each point in that landscape defines a fitness, and the maximum fitness is what natural selection takes you to. And it's rubbish and a wrong picture of a complex system, because it's not a ball rolling on a landscape, it's a map that's being drawn of a landscape. And one of the characteristics of complex systems is if you open them up they have a memory; it's not a ball. You can read off from the internal states of the system where it's sitting. And that adaptation is, if we think about it in terms of mutual information, for example, the information extracted from the world, so as to allow an organism to navigate some configuration space. And this ball rolling on a hill metaphor, which came from physics, has been very unfortunate because it's led to this confusion between action and adaptation.

0:18:37.0 SC: But I kind of like the fitness landscape. I never thought of it as a ball rolling on it because obviously whatever was wandering through the landscape gets to the top of the hill and stops there. Am I allowed to still have the landscape even though I get rid of the balls?

0:18:48.0 DK: Yeah, you should have the landscape, but a better version of it would be shown with a map rather than a ball. And as you are navigating through the landscape, you are improving your map; that's what adaptation is. In every case that we've ever studied that information is acquired and stored, hence Murray's obsession, and John Holland's, with schema, which were these internal encodings of the world in which an adaptive agent lives; there's no avoiding it. And I think that the abstraction into this variational picture has cost us quite a bit actually [laughter] and led to a lot of confusion.
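
Since the claim is that adaptation can literally be scored as mutual information between an agent's internal map and the world, here is a minimal sketch of that scoring. Everything in it, the eight "sites," the noise level, and the agent that just keeps the most frequently observed site as its map, is a made-up toy, not a model from the episode; the point is only that as the agent gathers more observations, the measured I(world; map) climbs toward its maximum.

```python
# Toy demonstration: an agent's internal estimate ("map") of a hidden world
# state carries more mutual information about the world as it adapts.

from collections import Counter
import math
import random

K = 8            # number of possible world states (e.g., food sites) -- assumed
P_CORRECT = 0.6  # probability a single observation reports the true site -- assumed


def observe(true_site):
    """One noisy observation of the world."""
    if random.random() < P_CORRECT:
        return true_site
    return random.choice([s for s in range(K) if s != true_site])


def agent_estimate(true_site, n_obs):
    """The agent's 'map': simply the most frequently observed site."""
    counts = Counter(observe(true_site) for _ in range(n_obs))
    return counts.most_common(1)[0][0]


def mutual_information(pairs):
    """Empirical mutual information (bits) between world and estimate."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi


random.seed(1)
for n_obs in (1, 2, 5, 10, 30):
    pairs = []
    for _ in range(5000):
        true_site = random.randrange(K)
        pairs.append((true_site, agent_estimate(true_site, n_obs)))
    print(f"{n_obs:3d} observations: I(world; map) ~ {mutual_information(pairs):.2f} bits "
          f"(max {math.log2(K):.2f})")
```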

0:19:29.5 SC: I would've guessed that somewhere in the first few minutes of this discussion we would've had words like power laws and networks and hierarchies. You haven't quite used those terms yet?

0:19:42.0 DK: Well, I don't use them very much. [laughter] It's their... Well, okay, so there's a lot to say here. Let's just be clear that we... And see whether you agree. There is a domain which I think we have to accept, which is this domain of agents that either evolved or we made them in one way or another. And they're hard to theorize about because they theorize. And it turns out that they have some characteristic architectural features. Hence the value of networks for studying them. So neurons in the brain [chuckle] are connected in large networks, as are individuals in societies who trade in markets and so forth.

0:20:40.3 DK: So networks feel like a quite natural way of capturing higher order correlations in the domain of complex systems, but they're just one mathematical method. And there are many others. And power laws are all over physics, [chuckle] they're also all over complex systems. They're all over the universe. [laughter] So there is this fetish, I think, sometimes with methodologies. And there's nothing wrong with that, because we all love mathematics and it's powerful, but we shouldn't somehow confuse the map with the territory. We shouldn't belabour our tools to the exclusion of the richness of the phenomenology. Because of course, as the science develops, Sean, we'll develop new tools.

0:21:33.1 SC: Of course. Yes.

0:21:34.5 DK: And so I don't... There is a tendency, I think, in the complex systems world to list things.

0:21:40.1 SC: Oh yes.

0:21:42.4 DK: And these are networks and they're [laughter] airdrops and they're noisy and they're distributed. That's true [laughter] But there are many things in the universe that are, and I feel that as long as all of that is in the service of trying to understand these complicated little computational elements that we're theorizing about, and that for me would be my priority rather than the methodologies.

0:22:13.0 SC: That's completely fair. Let me back up a little bit to ask a pre-question here, which is, would you say that complexity science is pre-paradigmatic in Thomas Kuhn's sense? Like we're more like Galileo than Newton. We have not agreed on a central set of results and methods, an exemplar.

0:22:32.5 DK: Well, I don't know. That's an interesting question. I think that I don't think that's true. I think that going back to our history, let's just talk about that because I think it does establish the paradigm. When the steam engine was built we got very concerned about its efficiency. Hence thermodynamics.

0:23:02.2 SC: That's a story I know. Yeah.

0:23:04.0 DK: Right. But not just thermodynamics, because Watt had stolen the centrifugal governor from Huygens and installed it on the steam engine to regulate the speed of the engine, or its consumption of energy sources. And Maxwell invented control theory in his famous 1868 paper on regulators and governors, which is the original beautiful treatment of the stability of the integral controller and the instability of the differential controller. So we have control theory and we have thermodynamics and statistical mechanics. Okay. Carnot, Clausius, Boltzmann, Maxwell. At the same time you have Wallace and Darwin developing the theory of natural selection, and what do they use to illustrate the principle? Watt's governor. There's a line at the end of their paper where they say natural selection, and I think verbatim they say, "is exactly equivalent to a governor," which is, okay, so this is surprising.

0:24:17.6 DK: He drops that, Darwin, by the way, in the Origin of Species; this is the presentation to the Linnean Society. At the same time you have Boole writing The Laws of Thought, right? Developing what we now think of as Boolean logic, it's not quite what we now think of as Boolean logic. And you have Babbage essentially conceiving of the Difference and Analytical Engines, that is, the calculator and the computer. All of them are in correspondence, right. So they all know each other. So it's that conjunction of ideas that pertain to purposeful machines that is the paradigm. And so it comes out of engineering as I see it at least. Now a hundred years later, [chuckle] those ideas are really consolidated, and I mean by people like Wiener, right? Wiener who discovers Maxwell. Maxwell's paper was considered too complicated.

0:25:23.0 DK: It came out of his early work, by the way, on the stability of Saturn's rings. So he was interested in these general issues. But that paper is quite complicated, but beautiful. You've got Shannon, you've got Turing, I mean, all of them now really doubling down on what we mean by computation, what we mean by information, what we mean by feedback, control, and the development of the mathematical theory of evolution by Wright, Fisher, Haldane and so forth. So I do consider it a paradigm. What's odd about it actually, Sean, is it wasn't really until the late '60s and '70s that the four legs were recognised as being part of one table. And that was people like Phil Anderson to my mind who really pointed that out. So yes, it's a paradigm.

0:26:15.6 SC: Okay. So I mean, the reason why I'm asking is because this distinction that you're drawing, the putting of... What is a single word description for what we're putting central here that describes carrying around a little model of the world in you?

0:26:42.2 DK: I call it teleonomic matter.

0:26:42.4 SC: Teleonomic, that I couldn't carry around a little model of the world without having a goal. Right?

0:26:43.0 DK: You could have a record or a simple archive of the world, but I don't think so. You have to be purposeful in complex reality, but okay. But...

0:26:54.9 SC: Okay. Anyway, but I get it. I guess, so you're carving that out and maybe the physicist in me wants to say, yeah, but all this stuff about power laws appearing everywhere and preferential attachment and galaxies and hurricanes and their relationship to things like the second law and chaos theory and things like that is still all really important and should count as complexity. And maybe this is just two sub domains within complexity. That's why I'm asking if it's...

0:27:23.2 DK: Yes.

0:27:24.6 SC: Pre-paradigmatic or not. I grew up as a physicist. You grew up as a biologist, and we're still, our history's met.

0:27:31.5 DK: I would say it's interesting. I think that maybe Thomas Kuhn was a little bit too simple-minded.

[laughter]

0:27:42.8 DK: Maybe we don't really have paradigms replacing paradigms, but nested paradigms, because I agree with you. I think just look at the one beautiful example that I know we're both interested in, which is the development of the idea of the second law as a statistical law that can be locally violated. And Maxwell's demon, you know, through Lord Kelvin describing it as an intelligent demon by the way, which is an articulation of natural selection, through to Szilard's paper in the 1920s saying it's all okay, through Landauer, through Bennett. And I think if you look at that history of the development of statistical mechanics to being a kind of statistical mechanics of computation, at least in the way that Landauer and Bennett describe it, right? Erasing bits and all that, then I think you're right. I think there are really interesting bridges between modern statistical mechanics and complexity science. 'Cause they already have that sort of weird purposeful character by virtue of the demon. So I don't deny that, but I just want to clarify this point, that I don't consider having an action principle [laughter] equivalent to...

0:29:05.0 SC: [0:29:05.1] ____.

0:29:06.1 DK: Having an agent.

0:29:07.6 SC: Good. I mean, just to back up for the audience members who don't fall asleep thinking about these things, 'cause I am writing about them in the books that I'm writing right now. You know, we have the laws of physics, which in the Newtonian or Laplacian way of putting things, say you give me the state at one moment in time, I can chug forward in time using equations. But we also have these principles that say if I consider all the possible histories and future paths that a system could take, the ones that they physically do take minimize some quantity. And that sounds very global and very different, but in fact it's exactly mathematically equivalent as you're saying. And that I guess neither one of these moves really works for a complex system as long as we have incomplete information. Right?
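
For anyone who wants the equivalence Sean is gesturing at written down, it is the standard textbook calculation, not anything specific to this conversation: demanding that the action be stationary over all paths gives back the local, step-forward-in-time equation of motion.

```latex
% The standard derivation, stated for a single coordinate q(t):
S[q] = \int_{t_1}^{t_2} L(q,\dot q)\,dt , \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0 .
% For L = \tfrac{1}{2} m \dot q^2 - V(q) this is just Newton's law, m \ddot q = -V'(q):
% the global "extremize over histories" statement and the local "evolve the state
% forward" statement pick out exactly the same trajectories.
```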

0:29:53.3 DK: Yeah, that's right. I mean, no one has really succeeded in writing down an action functional. You can try, and there are efforts; if you think about different formalizations of what's called the action-perception loop, the most recent popular version being the free energy principle, they do have that kind of variational character. Even some of my work, Sean, individuality maximizes something, but it's not clear that it's the stationary solution to something. So in a way, for me, the reason why they really fundamentally should not be considered equivalent is because one of them is essentially leveraging ideas of conservation and symmetry.

0:30:42.3 SC: Exactly. Right.

0:30:42.6 DK: Right. Whereas the world that we live in, you know, the complex world we live in, is all about broken symmetry and frozen accidents. And because of that, at best it would be a sort of weak metaphor.

0:30:52.9 SC: I found it very frustrating to kind of go to the internet and look up principles of maximum entropy and minimum entropy. 'Cause there's all sorts of principles that say sometimes entropy is maximized, sometimes it's minimized. We're not exactly sure when, I'm not quite sure what the usefulness of these ideas are, but I still feel there's probably something there. I just haven't put my finger on it yet.

0:31:14.0 DK: Yeah, I completely agree. And I think that it's like everything, right, that you have to operationalize the concept.

[laughter]

0:31:23.3 DK: And once you've done that, you are tethered to reality in a way that it's useful. If you talk in these very generic terms, as I think many people do, I'm not exactly sure either what is being said.

0:31:36.2 SC: Well, I'm really glad that you mentioned Maxwell's demon though, because that is another one of my favorite talking points. And I'm not sure if the listeners know how it took many, many, many decades to come to some viewpoint on what the right explanation is. I'm gonna assume people know what Maxwell's demon is: a little demon that can seemingly violate the second law by separating hot atoms and cold atoms. It took us a long while to figure out how maybe that's not really violating the second law. And even now people don't agree. I'm not quite sure that we're completely done, but one way or another, I wouldn't call it violating the second law, but it is a paradigm for many kinds of complex systems, right? Something that uses free energy to keep things more organized than they otherwise would be. Do you think that's fair?

0:32:28.2 DK: Oh, absolutely. I do. And as you know, this is something I've worked on quite a bit and I, just to extend your explanation, what's quite clear is that if you think about Maxwell's demons as a mechanism of sorting, as you said, between hot and cold particles. If you turn that little thought experiment on its head, you can think about natural selection as a demon that sorts between alternative variants in an environment. And what is added to the complication of this story in complex systems is the origin of the demon itself.

0:33:12.0 SC: Exactly. Good.

0:33:13.6 DK: See, when, right when Maxwell and Lord Kelvin were thinking about it, it was a sort of Gedankenexperiment. It was like, okay, here's how you can violate this statistical law. And then it turns out, well, [0:33:23.7] ____ like no [laughter], the demon is actually dissipating heat too, and so on and so forth. But in the biological domain, if you're gonna play with demons, which I do and as many people do, and they get called different things, then you have to account for their origin. And natural selection is a very interesting collective demon. I mean, certainly there's a physical dimension to natural selection. If you're a bird and you are flying, it's not another bird that's keeping you in the air. But for most of us, and for most of the structure that we care about, it came about through competition with other living things, demons. So we're all mutual demons to each other. And developing that theory of nested hierarchical demons has been an interest of mine. And it turns out to be difficult. Not unlike the difficulty, incidentally, of your coffee metaphor, because now let me see. I'm gonna be maybe a little bit more technical now and you tell me, Sean, whether this...

0:34:26.8 SC: I will tell you. Please be technical first and we'll fix it later.

0:34:29.7 DK: Yeah. Which is the following. Right. Which, if you imagine a string, a binary string, let's call each of those bits an information bearing degree of freedom. It does something. The first bit tells you to go left, the second bit tells you which door to open, that kind of thing. And we know because of the second law that if you leave that string alone for long enough, it'll just be all shuffled. It's randomized, thermalized. So think about a genome, it's like a string. And each of the bits is an information bearing degree of freedom, it encodes an amino acid of a protein or binding site or, okay. So for every bit that is transmitted over many generations reliably, something has to inspect it and say, stay that way. That's natural selection. Right. But it's not one demon, it's a tree, it's an ant.

0:35:25.9 DK: It's the amount of water in your environment. It's an incredibly complicated composite of forces inspecting each bit. And it turns out you can prove this, that the complication or the description length of the target sequence cannot be greater than the description length of natural selection itself. 'Cause natural selection has to inspect each bit. Unless you can compress it. If there's redundancies, you can. And so you get this really weird result, which is that organismal complexity, agent complexity, is upper bounded by the complexity of natural selection itself. And that's this problem of the origin of the demon.

0:36:14.5 SC: Well, I...

0:36:15.8 DK: Because you have to build this damn thing.

0:36:17.5 SC: Right. But you're gonna need to explain to me what we mean by the complexity of natural selection itself. I mean the idea of natural selection can be written in haiku form, right? I mean, you must mean the, are you including the whole environment in which the selection is happening?

0:36:32.9 DK: Yes.

0:36:33.2 SC: Okay.

0:36:33.6 DK: Well, that is natural selection, right, Sean? That's one of the slightly misleading things about using the word natural selection, because it's basically every force in the universe that intrudes upon each bit that's information bearing. So it's a very misleading idea. And in fact, that's partly one of the limitations of the mathematics. A little bit like the limitation of the fitness landscape. 'Cause you can just write down this thing and call it F sub I [laughter] No problem...

0:37:00.8 SC: It looks so simple.

0:37:01.9 DK: But what is that thing? And that thing is everything. So another way to make this clearer is if I gave you a random string and I said to you, Sean, I would like you to flip the bits to achieve this target string. And if that target was also random, you'd have to go through that sequence bit by bit, flipping accordingly. You're natural selection. I'm the organism. So you can see there's this kind of interesting problem, hence the development of ideas like ecosystem engineering and niche construction, which are efforts, partial beyond belief, to build natural selection itself. The origin of selection, not the origin of species.
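
The you-flip-my-bits game David describes fits in a few lines of Python, so here is a toy version of it. The target string, mutation rate, and selection rule below are all invented for illustration; note that the "demon" function literally has to contain the target string, which is the cartoon version of the bound David mentions (the description length of what selection can maintain cannot exceed the description length of selection itself, unless the target is compressible). Nothing here proves that bound; it just shows the inspect-each-bit picture in action.

```python
# Toy demon: random mutation keeps flipping bits; per-bit selection keeps only
# the flips that move the genome toward a fixed target, then holds it there.

import random

random.seed(0)

L = 64
target = [random.randrange(2) for _ in range(L)]   # the "environment's" spec
genome = [random.randrange(2) for _ in range(L)]   # starts unrelated to target


def selection_accepts(i, new_bit):
    """The 'demon': inspects bit i and approves it only if it matches the target.
    To write this function down you need the target -- that's the point."""
    return new_bit == target[i]


def generation(genome, mutation_rate=0.05):
    child = genome[:]
    for i in range(L):
        if random.random() < mutation_rate:
            flipped = 1 - child[i]
            if selection_accepts(i, flipped):   # beneficial flips are kept,
                child[i] = flipped              # deleterious ones are purged
    return child


for gen in range(201):
    if gen % 25 == 0:
        matches = sum(g == t for g, t in zip(genome, target))
        print(f"generation {gen:3d}: {matches}/{L} bits match the target")
    genome = generation(genome)
```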

0:37:50.4 SC: So somewhere on my computer, I have a list of my future research projects. I don't know about you, but I always, I conceptualize my future research goals in terms of the titles of papers that I will someday try to write.

0:38:02.7 DK: Oh yeah [laughter]

0:38:03.2 SC: And one of them is, some of them are very well fleshed out and some of them are completely speculative. In the latter category we have how information comes to life. I want to know, I think this is exactly the question you're asking. What is the first moment in the evolution of the universe and the life within it where one part of the universe was using information about another part for some purpose? Is that something, do we know the answer to that one already?

0:38:31.7 DK: No.

0:38:32.2 SC: Okay.

0:38:32.5 DK: No, that's really interesting. It's odd, Chris Kempes and I just wrote a paper called Life is Problem Solving Matter, and you know, this is a hole we can go down. But I think the way that you are thinking about it is correct, because the way we typically think about origin of life, which is what you're talking about, is the origin of certain kinds of chemistry which are correlated with life, [laughter] right? And so we often confound the chemistry with life itself, but life is doing the thing you are describing, which is some weird inferential representational thing. And when that first happened is, I think, genuinely mysterious. I do think we've been a little bit misled by an obsession with organic chemistry. And one thing to point out that helps us is I think we've built life so many times as non-chemical digital life. I think that if you write a little code on your computer, Sean, it could be a very simple form of life. But I think it qualifies, and life is this weird thing. Just to use a physics concept for the two things I kind of work on, life and intelligence: life I consider intensive, whereas intelligence I consider extensive. You are not more alive if you have a hundred cells than one [laughter] right?

0:40:00.8 SC: Right. Yes.

0:40:01.5 DK: An elephant is not more alive than a flea, that would be kind of silly. But an elephant might be more intelligent than a flea. And there's this very interesting connection between those two concepts. I don't think that you can treat them independently. I think once you develop life, you've developed intelligence and vice versa. And working out that difference is complicated.

0:40:24.8 SC: So I always think about these things in terms of entropy and the second law and coarse-graining and things like that. The centrality of information to everything that you've talked about is very clear. And to me, the Big Bang is the ultimate information resource in some sense. It was very, very low entropy, which is another way of saying that we have a lot of information about exactly what the state was and everything ever since then is just exploring the space of possibilities. And is it... The evolution of complex systems, I think about these simple inorganic ones. You want to think about the evolution of teleonomic agents. Is there a general understanding of why they come to be at all in that general working out of the second law?

0:41:17.7 DK: No. What there is, is a sort of tautological understanding that once a replicator comes into existence that can error correct, it will stay in existence. You know what I mean?

0:41:32.6 SC: Yeah.

0:41:33.4 DK: But that's not necessarily a sophisticated encoding of the world in which it lives. And we can... It's the sort of large language model paperclip nightmare. It's that sort of thing. You can build lots and lots of simple things and it's quite straightforward. But this move towards encoding something else that's encoding, there are stories. [laughter] Well, look, it's competitive, and if I can out-encode you, then... But there are stories, and I genuinely don't know of any theory. I know of models, but they're a little bit too fine tuned for my tastes. But I don't think there is a theory that says that more and more sophisticated teleonomic agents should come into existence. I don't think there is such a theory.

0:42:32.0 SC: Well, part of it is, and I talked a little bit with Michael Lachmann at SFI about this question. To me, one way of stating the puzzle is at the level of just statistical mechanics and thermo and entropy, there is no future boundary condition in the universe. The universe does whatever it wants to do. There's a past boundary condition, the low-entropy past hypothesis, the Big Bang, et cetera. But purposeful agents can be thought of maybe, I don't know, tell me if I'm wrong, as carrying a little mini future boundary condition with them. There is a state they want to be at in the future, and it almost doesn't matter how they get there. Like, if I want to go to the store, maybe I take the car or I walk, or I take this path or whatever. The point is that future state I want to be in. So how in the world does the big looming past boundary condition of low entropy get flipped around to little mini boundary conditions in agents that have purposes?

0:43:34.8 DK: Yeah. No, I think it's a deep question. I genuinely don't think we have good answers to it. I think that the... That to me is... Going back to our earlier point about action and adaptation to agents, I think agents introduce the concept of the policy, like a Markov policy, meaning a procedure or a route, which is what you are talking about. And I think that if you think about chemotaxis, a bacterium that navigates up some nutrient gradient to a target, it's this differential controller, saying instantaneously the scalar field tells me that I'm in the right place and I'm gonna wiggle about a bit and stop once I get to a higher concentration and so forth. But that's not what we do. We say, I wanna go to the shop and buy some orange juice. And there is no gradient. There is no information in some weird orange juice scalar field [laughter] telling me I've got closer to Whole Foods.

0:44:31.9 SC: No, sadly.

0:44:33.0 DK: So I have a map. And that transition from adaptation, simple adaptation to agency following a policy is very intriguing. No doubt we could build models where you, in some sense accumulate bits of information that allow you to encode a path. But, is there a theory for that transition? No.
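
The contrast David is drawing, a local gradient-climbing controller versus an agent following a policy off an internal map, can be caricatured in a few lines. Everything below (the one-dimensional world, the "nutrient" field, the goal coordinate) is invented for illustration; the point is just that the map-following agent never consults the field at all.

```python
# Gradient follower (chemotaxis) versus map/policy follower in a 1D world.

GOAL = 17          # where the "orange juice" is -- assumed for illustration
START = 3
STEPS = 30


def nutrient(x):
    """A field that rises toward the goal (the bacterium's world)."""
    return -abs(x - GOAL)


def gradient_agent(x):
    """Chemotaxis: sample left and right, step toward higher concentration."""
    return x + 1 if nutrient(x + 1) >= nutrient(x - 1) else x - 1


def map_agent(x, internal_goal=GOAL):
    """Policy from an internal map: step toward the remembered goal,
    consulting nothing about the local environment."""
    if x == internal_goal:
        return x
    return x + 1 if internal_goal > x else x - 1


x_grad, x_map = START, START
for _ in range(STEPS):
    x_grad, x_map = gradient_agent(x_grad), map_agent(x_map)

print("gradient follower ends at", x_grad)   # the field guides it to the goal
print("map follower ends at", x_map)         # also at the goal, gradient or not
# Remove the field (e.g. make nutrient() constant) and only the map agent
# still ends up at the goal -- there is no "orange juice scalar field" to climb.
```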

0:44:54.9 SC: Okay. [laughter] I think to ask more intelligent questions, we should get some stuff on the table about emergence because...

0:45:02.9 DK: Yeah.

0:45:03.3 SC: You've talked about it a lot. I've talked about it a lot. The word is fraught. So rather than trying to define it, let me ask how you think about the idea of emergence.

0:45:14.2 DK: Yeah. And I'm very indebted to Phil Anderson here and Bob Laughlin. Yeah, it's one of those terms that [laughter] for some reason attracts a lot of bullshitty commentary, and let me start simple and get more complicated.

0:45:37.7 SC: Perfect.

0:45:38.8 DK: I think the simplest place to start is where Phil begins, with symmetry breaking. And that is that the underlying fundamental laws of physics are symmetric. And so if you're trying to explain why one particular state is picked rather than another, where both would be equally probable under the laws, you have to invoke this idea that a symmetry is broken. Either it's driven or it's endogenously found by thermal stochastic fluctuation. And then there is some energy barrier that keeps you in that state. And the canonical example always given is the chirality of molecules, that is, whether they're left-handed or right-handed: amino acids are L-chiral, they're left-handed, and sugars are D-chiral, they're right-handed, and they always are. [chuckle] And there's no law of physics that tells you that should be true, 'cause they have enantiomers, and so you should find as many L as D; you don't. Now, so that's the first point. And of course, as molecules get bigger, the tunneling barriers get deeper. And so these broken symmetries accumulate, and that's what Murray liked to call frozen accidents. And the complex world is full of them and it's built up from them.

0:46:57.3 DK: So the first, I think, condition for emergence is broken symmetry, because it already tells you that if you want to understand the observable you can't use the physics; the physics doesn't tell you. And it's consistent with it. It obeys the physics, but it's not dictated by the physics, to use his language, which I think is a very, very important distinction. Obey versus dictate. [laughter] And then of course as you move up, it's natural selection that's breaking all the symmetries. And so it's obeying physics, but dictated by selection, by these weird demons. So, okay, point one. Point two, it turns out that in this hierarchy of frozen accidents you can write down effective theories, not fundamental theories, effective theories for the observables of interest, for the effective variables, that do as well as understanding all the microscopic constituents. And for example, you can write down a fluid dynamical equation as opposed to a very high dimensional description of all the particles' motions.

0:48:10.3 SC: Right.

0:48:11.7 DK: And for me, broken symmetry is the physical precondition for the possibility of writing down effective theories. And if that effective theory is dynamically sufficient, that is, you don't gain information by going down, even though it's clearly obeying those laws. That is what we mean by emergence. And it's not very complicated. [laughter] It's in the physical world and it's in the complex world. And what's fascinating is that teleonomic matter mobilizes emergent levels to understand itself. So we have a concept of ourselves that we mobilize our minds to understand it. It obeys brain dynamics. But I have no idea what my neurons are doing and neither do I care. And that extends up through the disciplines.

0:49:04.8 DK: And I like to give the example of course of mathematics that the proof of the correctness of a theorem, like Andrew Wiles's proof of the Fermat conjecture, does not depend on how much endorphin he's generating. It's expressed entirely in terms of mathematics itself. And when you can do that, why you are allowed to write down an effective theory is why emergence is interesting. 'Cause you can't always do it.

0:49:30.6 SC: Sure.

0:49:31.5 DK: Right, because the range of parametric variation under which that theory applies can be very limiting. And that's why I think it's an interesting scientific problem as opposed to just an inevitable one.

0:49:44.3 SC: So that version of emergence, I think maps onto the classic distinction of weak versus strong emergence as weak emergence. You're talking about coarse-graining, a system that could very well be described at a microscopic level, but you don't need to and there's no point in doing it, right?

0:50:01.8 DK: Yes.

0:50:02.7 SC: There are people out there, maybe you know some of them [laughter] who think that's not enough, who really think that we're gonna need new laws of physics purely at the macro levels that then influence via downward causation what's happening at the micro level. Are you against that or do you just not need it?

0:50:24.5 DK: I'm against it and I'll explain why I'm against it, for the reasons Phil was, which is that it's greedy. Those new laws of physics are called English literature, [laughter] or musical composition or metaphysics or carpentry. And it's not a new law of physics, it's a new theory. And it might have laws in it, I'm not sure, may have rules in it, might be a little bit more modest. But they're not physics anymore. I think physics ended once we moved into the domain of excessive complication. It's still there. It's never going away. Thank God, thank Newton, whatever. But it's not very useful. And this is what I mean by confusion. It's actually not a complicated thing, Sean. We can write down these effective theories. We would like to know when we can. Presumably there's some temperature range where they apply. At a certain point my mind isn't gonna work 'cause the proteins are gonna denature.

0:51:31.9 SC: Of course. Yeah.

0:51:33.2 DK: Then I need to know about proteins. So I have to be a reductionist in that sense. But so fine, it's a science of pluralism. It tells you why we need to have Schoenberg and Jimi Hendrix and not just Newton and Leibniz. So I find that reassuring in all sorts of ways. Now in terms of downward causation, I think Jessica's written about this quite well, and I quite like it, Jessica Flack, when she talks about the parts reading off the states of the whole, and that I think resolves elements of the paradox, which is, I can read one of your books and be influenced by it. And while my mind is reading it, so are my neurons. [chuckle] And there's no mechanical mystery in that anymore. And that's downward causation without mystery.

0:52:35.8 SC: So what I'm thinking about, one of the reasons why I brought up this question is, we live in a world where the space of possible arrangements of things is very, very large. The combinatorial set of possibilities is beyond our comprehension. And we live in a very specific place in it. There are specific animals, specific organisms, environments and so forth. There are those who would say that in order to account for why this particular place we live in right now is where we are, we can't just rely on microscopic physics plus some random numbers. We need some principles or something to stretch out over it. I'm not doing this position justice 'cause I don't have the slightest belief in it, but I'm not making it up either. There are people like this.

0:53:26.2 DK: No, I know. And I think that the legitimate part of their obscurantism is that we don't really understand, as we've already discussed... We don't even understand the origin of life, as we pointed out earlier. So there are genuine problems out there that need to be resolved, and we shouldn't pretend we have resolved them, but we don't wanna fill the gap with moonshine. I do think though that we may need a much more sophisticated theory of memory and of history, and so when you talk about fluctuations and so forth, that's the whole point about evolution. That it incrementally builds up a more and more refined encoding of reality, it builds up a memory. And that's, by the way, the connection to the IGUS and decoherent histories and all that. I mean, there's this... We essentially encode coarse-grained representations of particular trajectories in evolutionary history.

0:54:27.8 SC: You should explain what an IGUS is.

0:54:30.3 DK: Oh, well, this is a little bit of a tribute to our colleague and friend Jim Hartle who just passed away. In The Quark and the Jaguar, Murray Gell-Mann presents complex systems along lines that I share, in terms of schema. These little entities that encode histories that they use to behave and to predict. He doesn't go much beyond that actually in that book. We can talk... And John Holland talked about this, and in fact the first schema theorem was presented by Immanuel Kant in the Critique of Pure Reason. It's a whole chapter called the Schema, where Kant was trying to understand how you turn continuous sensation into propositions. It's fascinating actually. Another prehistoric contribution to complexity. And Murray then said, "Let's call the schema an IGUS." And this is an abbreviation, an information gathering and utilizing system.

0:55:35.5 DK: And I think part of Murray... And I'd be curious to hear what you think about this, Sean. 'Cause I think part of this was motivated by his feeling very disgruntled with Copenhagen and the role of the observer and all that weirdness around consciousness versus just a detector. And then Jim Hartle, in really delightful ways, I think extended the idea in two papers, which I really enjoyed. One of them was why is the universe comprehensible, getting at Einstein's question. And the second one was what he called The Physics of 'Now', which was why is there a present, a past and a future? These are actually evolutionary sequelae. They are not part of physics, they're a part of complex systems. And he used the IGUS. He put an IGUS in Minkowski space and said, this is how it would operate, and derived these three concepts from that. But anyway, that's just on Jim and the IGUS.

0:56:34.3 SC: Well, this is always dangerous to sort of speak extemporaneously in the middle of a podcast, but maybe I can see that there is a link between the questions that I care about, about increasing entropy and how the journey from low entropy to equilibrium can look complex to the points you raise about agency and information and teleonomics, because it is probably just a generic feature of interacting subsystems along the journey from low entropy to high entropy that they make an impression on each other. If I walk down the beach, I leave footprints on it. Now, the beach doesn't use those footprints to do anything, but it might not be that much of a leap to see how if both of the interacting systems have enough complexity, they could... They're more likely to persist if they can put that information to use.

0:57:31.8 DK: Yeah, I mean you can... I've worked on these models. It's not difficult to do if you take the kind of model that you worked on, which generates non-trivial transient patterns, and you add to that Gause's principle, the so-called exclusion principle. Which is that if the local order is maintained by some energy gradient, then I'm allowed to exclude you from that position in space so as to gain access to more of it. That does increase the frequency of these pattern states. So Darwin called it competition, Gause called it the exclusion principle. So you can add just a few tweaks to what would otherwise be a fairly simple dynamical system, and produce very long-term states of order. And now why they then ratchet up is somewhat unknown.

0:58:34.6 SC: Well, I guess this relates to what I was gonna ask next, which is about the... You already alluded to the fact that you don't always have an emergent theory lying around. When you have some collection of stuff, you may or may not be lucky enough to have a simple way of describing it just in terms of macroscopically observable features. But when you do, well... So number one, what do we know about when you do, like how generic are emergent descriptions? Are we very fortunate to have them at all? And number two, I know that you've written and spoken on the role of noise in maintaining things like that, which is just fascinating to me. I know that dissipation and friction are everywhere in the macroscopic world, but you're giving the sales pitch for us, making them positive contributions to our persistence rather than merely annoyances.

0:59:28.3 DK: Yeah. That was that... Yeah. That's... We lots of... Well, I won't mention my adversaries in that debate but yeah...

0:59:38.0 SC: Tell the story. Tell the story. You can tell it.

0:59:40.0 DK: No, but this was a debate that David Wolpert and I had with Danny Kahneman and Cass Sunstein, and they'd written this book called Noise and how terrible it is, I mean 'cause Danny's written of course at length about bias.

1:00:00.4 SC: Sure.

1:00:02.4 DK: And how we should correct it, and the sort of sequel was Noise, and how we should correct it. And coming from evolutionary theory, where the sort of sine qua non for the evolution of complex life is mutation [laughter], noise in other words, it seemed a little bit unfortunate. And then of course the more you look, from stochastic resonance, stochastic amplification, you know, equilibrium selection in games, noise is absolutely one of the most valuable characteristics of complex systems. And if you want you can reduce it to one statement, which is exploration: if you want to explore a space, noise is very handy. But once you want to exploit a solution you want to kind of turn it down a bit. And so it's this dialectic, right, between high and low temperature that is characteristic of the complex domain. So it's both constructive and destructive, but you want to be able to control it.
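
That explore-with-noise, then-turn-it-down recipe is, in miniature, what simulated annealing does, so here is a small sketch of it against a zero-noise greedy search. The two-well landscape, starting point, and cooling schedule are made up for illustration; on most runs the annealed search ends up in the deeper well, while the greedy search, which never accepts an uphill move, stays stuck in the shallow well nearest to where it started.

```python
# Exploration vs. exploitation: simulated annealing against greedy descent.

import math
import random

random.seed(7)


def energy(x):
    """Two wells: a shallow local minimum near x ~ 1.9, a deeper one near x ~ -2.1."""
    return x**4 - 8 * x**2 + 3 * x


def anneal(x, steps=20000, t_start=15.0, t_end=0.01, step_size=0.4):
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)     # geometric cooling
        candidate = x + random.gauss(0.0, step_size)
        delta = energy(candidate) - energy(x)
        # Downhill moves are always taken; uphill moves are taken with
        # Boltzmann probability exp(-delta / t), i.e. noise we slowly remove.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
    return x


def greedy(x, steps=20000, step_size=0.4):
    for _ in range(steps):
        candidate = x + random.gauss(0.0, step_size)
        if energy(candidate) < energy(x):                  # exploit only, no noise
            x = candidate
    return x


start = 3.0
xa, xg = anneal(start), greedy(start)
print(f"annealed search : x = {xa:+.2f}, energy = {energy(xa):.2f}")
print(f"greedy search   : x = {xg:+.2f}, energy = {energy(xg):.2f}")
```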

1:01:01.8 SC: And does it play a role in when we have an emergent description?

1:01:06.4 DK: Well, I think it must, because presumably the origin of those new levels is discovered through some random walk of one kind or another. And it's also the case, right, that mutation is macroscopic noise, I mean by physics standards, [laughter] by a biological standard it's microscopic noise. But this is not the noise in an atom, right? This is noise in a macromolecule. And so it's quite interesting how nature has built noise in at levels above what we'd normally think of as thermal noise. And this is not the noise induced by a slight increase in temperature, even though there are these [1:02:00.2] ____ arena effects. These are actually built, they're constructed dice that we've loaded complex systems with in order to generate variability.

1:02:13.7 SC: In this whole discussion of emergence, not just today but in my life, there's always this issue that concepts we kind of take for granted are now things that we're trying to explain and account for, like agency or purpose or whatever. I just recalled one that you've written about that I would love to hear more about, which is the existence of individuals. Like how do we get to carve up systems into this thing as a coherent whole, and that one as a separate thing. You're not gonna rely on some fundamental essence; you're gonna say that's an emergent phenomenon.

1:02:49.8 DK: Yeah. This interest in individuality comes from two different directions. One is when I was at Oxford having to listen to Richard Dawkins insist that the only unit of selection was a gene. And his argument being that it's the only temporally coherent structure, it's the thing that survives recombination intact. So when you shuffle the deck of cards it's not the hand that's preserved, it's the cards, so that's the gene. That was his view, and that seemed to me too simple-minded. And then on the other hand this desire that we all have to find the atomic building blocks of our field, of our domain. [chuckle] So it could be a quark, it could be an atom, right? It could be a molecule, it could be a cell. And if you're interested in the evolution of complexity, at least to me, one of them should be this weird thing that we all take for granted, which is that the complex domain comes in these packages that we call individuals, agents or organisms. It's hard not to find them.

[chuckle]

1:04:00.6 SC: [1:04:00.7] ____ Take it for granted.

1:04:00.9 DK: What's going on? Are we just being misled? Is it nominal? Is it a perceptual artifact? And all of that might be true, right? That might be true. So with some colleagues we started developing an information-theoretic formalism to hunt for them. Could we develop, if you like, lenses — like telescopes that work in different electromagnetic frequencies — that would detect different kinds of individuals, where the operational definition is something that can propagate adaptive information forward in time. An adaptive world line. That's what we are looking for. And the answer is yes, we could. We developed this theory, and you discover this kind of zoo of different kinds of agentic atoms — individuals. The one that we all know best is the organism, which is defined largely in terms of its own lineage, right?

1:05:08.8 DK: So to understand Sean, I should meet Sean's parents, and to understand them, their parents — not being excessively Freudian about it, but simply [1:05:15.9] ____ genotypically a lot of it comes from them and the environment, but a lot from them. Genotypically nearly all of it actually, all of it. Not epigenetically but genetically. And so you are this somewhat autonomous thing that is largely responsible for propagating your information forward in time. If you look at things like social insects, well, there the genetic information is shared across different physical units, right? So ants and bees — it might be that the queen bee propagates the genome and the workers help her. There, there's a different conception of individuality, and it's a collective one. And so what these information-theoretic devices do is they find them. They say, "Ah, that's the right level of aggregation at which information is in some sense being propagated forward."

1:06:16.2 DK: That's a coarse-graining which is sufficient. So now, having discovered them, let's go back to Richard Dawkins. What you realize, right, is that it's not true that it's the most minimal building block that is reliably transmitted forward in time. It turns out it's kind of periodic: the minimal things are, then the things a bit more inclusive are not, then the things that are more inclusive again are, right? You see what I mean? And so it's a little bit like a society. A society probably propagates culture forward quite reliably, if that's what you were measuring, but bits of it do not, and individuals might. And so for us it was just a much more grounded attempt to discover the causal units of complex systems. I guess that's how I would say it.
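
This is not the formalism of the information-theory-of-individuality work itself, just a crude, self-contained stand-in for the "lens" idea: estimate how much information a candidate aggregate carries about its own future, and prefer the coarse-graining that propagates the most. The two-unit dynamics and all numbers below are invented for the toy.

```python
import numpy as np
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Toy dynamics: two binary units that copy each other's state every step,
# with occasional bit-flip noise. Neither unit alone predicts its own future,
# but the pair does -- so the pair, not the part, is the better "individual".
rng = np.random.default_rng(1)
T = 50_000
a = rng.integers(0, 2, size=T + 1)   # entry 0 is the random initial condition;
b = rng.integers(0, 2, size=T + 1)   # entries 1..T are overwritten below
for t in range(T):
    a[t + 1] = b[t] ^ (rng.random() < 0.05)   # a copies b, occasionally flipped
    b[t + 1] = a[t] ^ (rng.random() < 0.05)   # b copies a, occasionally flipped

unit_alone = mutual_information(list(zip(a[:-1].tolist(), a[1:].tolist())))
whole_pair = mutual_information(list(zip(zip(a[:-1].tolist(), b[:-1].tolist()),
                                         zip(a[1:].tolist(), b[1:].tolist()))))
print(f"I(a_t ; a_t+1)       = {unit_alone:.3f} bits")   # close to zero
print(f"I(pair_t ; pair_t+1) = {whole_pair:.3f} bits")   # well above one bit
```

The published lenses decompose the information flow more carefully (separating what comes from the aggregate itself versus its environment), but the selection principle sketched here is the same in spirit: the "individual" is whatever level of aggregation reliably carries its own information forward.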

1:07:17.7 SC: So under some circumstances is it right to think of the ant colony as the individual rather than the individual ants?

1:07:26.7 DK: Exactly. Absolutely right. You know, I mean, a really good collaboration, right? So like Jim and Murray on their quantum cosmology — they are the individual, and cutting it in half you would lose it. And I think that's one of the things that Herb Simon was trying to talk about in his two papers, the Architecture of Complexity and the Organization of Complexity, published in the '70s, which was that you have these loosely bound things and these tightly bound aggregates and you move between them. And I think that's one of the really fascinating characteristics of complex systems, that you form these quite tightly bound aggregates that look physically loosely bound.

1:08:11.1 SC: Has SFI press ever published an anthology of the most important central papers in the history of complex systems?

1:08:18.2 DK: Well, that must be... I am publishing it now. [laughter]

1:08:21.0 SC: Very good.

1:08:22.8 DK: You could — you know this. So we are publishing, on the 40th anniversary of SFI, which is next year, Foundations of Complexity Science.

1:08:30.6 SC: Lovely.

1:08:32.2 DK: And these are papers — the first two papers are Lotka and Szilard from the 1920s — through to 2000. And so I don't do the pre-history, I do the history. And it's an extraordinarily coherent...

[laughter]

1:08:53.8 DK: Project. One of the things that we always laugh about at SFI, when we have meetings and we're trying to hire people or decide whatever, we think, oh my God, none of us agree on what complexity is. It's just going to fission. [laughter] But what's weird, you know, is that as you sit down, you think — well, very quickly it's about adaptation, it's about computation, it's about energy. All of these essential elements come into play very fast. And what we don't have, and maybe never will, is the kind of unified theory of all of those things that will justify it. And I do genuinely believe that the information-energy integration is on the horizon. The evolution thing I don't know. So yes, this is coming out, and part of my interest in this history is reading all of those papers [laughter] and seeing how they rhyme.

1:09:58.7 SC: Yeah. Yeah. I mean, that's why I suspect that it is pre-paradigmatic. I mean, the thing about having a paradigm is that maybe you're not sure until you're past it. Because we do seem — the royal we, certainly — to be on the verge of understanding some things. Complexity science is taken more seriously as a field than it was 40 years ago when SFI was founded. Not at every department at every university, but I think that it's not a novelty and we're really tying some threads together — but maybe I'm being overly optimistic.

1:10:36.7 DK: No, I think you're right. And one thing we haven't talked about much — I think you're right — is the social science dimension. Because the founding of SFI, one of its peculiar characteristics, and that's why I sometimes describe it as a midpoint between the Bauhaus and Bell Labs, [chuckle] is that we had social science as much in evidence as natural science. We didn't make that distinction. And that's because, coming out of the Austrian school, Schumpeter, this interest in information, computation, aggregation, and then of course the early work of Ken Arrow on learning and expertise, and then Brian Arthur's work, obviously, building on positive returns — they were all talking about the same kinds of things as the natural scientists and computer scientists: Stuart Kauffman, John Holland, Murray and others. All interested in schema-like phenomena, I guess is one way to put it — all interested in agents and their collectives.

1:11:39.3 DK: And so that put us in a really interesting position with respect to the modern world, because if you look at things like COVID, climate change, inequality — which are the existential crises of the 21st century, which we have to address — well, you'd have to be an absolute idiot to believe that you could solve climate just knowing geochemistry, right? [chuckle] It's clearly a human problem as well. It's a policy problem, it's a politics problem, it's an economics problem. So by virtue of our interest in the synthesis of these disciplines, we're almost in a unique position, actually, I think, to tackle problems that have that character — which is basically all complex problems as far as I can tell. And COVID was the great wake-up call for people. Because everyone thought, you know, we'd listen to Fauci and it was epidemiology and immunology, but what was really happening on the streets is, well, my kids need to go to school, or I need to make a living, I can't close my restaurant for two years, and so on. And so bringing all those together in some rigorous way is not just intellectually interesting, it's existentially urgent. [laughter] And I think it's your point: that's why the Eye of Sauron [laughter] is now looking at us and others like us to provide some answers.

1:13:10.7 SC: Well, this is, as you've emphasized correctly I think, one of the fun but challenging parts of complex systems: in physics we have collective behavior, but the individual pieces that are collecting are pretty simple themselves, whereas in society the individual pieces are themselves complex. I had a great podcast with Jane McGonigal, I don't know if you know her?

1:13:30.0 DK: Yes.

1:13:31.9 SC: Game designer.

1:13:32.9 DK: Games, right?

1:13:35.4 SC: Sorry.

1:13:36.4 DK: Computer games.

1:13:38.7 SC: Yes, that's right. But she's designed ways to use game-like things to basically war-game scenarios like pandemics, et cetera, and to discover features of human behavior that you might not have guessed in the model. And it's part of the development of tools. You know, we didn't even talk about agent-based modeling and other things, but you did mention that the idea that we would have to use computers to simulate things was there at the very beginning of this whole complex systems talk.
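
Since agent-based modeling only gets a passing mention here, a minimal sketch of what such a war game looks like in code may help. Everything below is invented for illustration (population size, transmission and recovery rates, the behavioral rule); the point is just that once agents carry their own behavioral rules, you learn the system-level outcome by running it, not by inspecting the rules.

```python
import random

def run_epidemic(n=2_000, beta=0.06, contacts=10, recovery=0.1,
                 behavioral_response=False, threshold=0.05, steps=400, seed=0):
    """Tiny agent-based SIR toy. If behavioral_response is True, everyone halves
    their contacts whenever the currently infected fraction exceeds `threshold`."""
    rng = random.Random(seed)
    state = ["S"] * n
    for i in rng.sample(range(n), 10):              # seed ten initial infections
        state[i] = "I"
    for _ in range(steps):
        infected = [i for i in range(n) if state[i] == "I"]
        if not infected:
            break
        k = contacts // 2 if (behavioral_response and len(infected) / n > threshold) else contacts
        newly_infected = set()
        for i in infected:
            for j in rng.choices(range(n), k=k):    # k random contacts per infected agent
                if state[j] == "S" and rng.random() < beta:
                    newly_infected.add(j)
        for i in infected:
            if rng.random() < recovery:
                state[i] = "R"
        for j in newly_infected:
            state[j] = "I"
    return sum(s != "S" for s in state) / n          # fraction ever infected

print("no behavioral feedback  :", run_epidemic())
print("with behavioral feedback:", run_epidemic(behavioral_response=True))
```

Swapping in richer rules — agents who have to keep a restaurant open, or get kids to school — is exactly the kind of thing the game-based war-gaming probes, and the aggregate outcome is typically not something you could have read off the individual rules.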

1:14:06.0 DK: I mean, one of the interesting things that's happening — and I'm curious — we had John Baez here recently to give a talk, right? And his talk was entitled something like The Future of Physics. And it was quite amusing, this talk. So he ends the first third of his talk by saying, "There's nothing really interesting that's happened in theoretical physics since 1980." Fundamental theory simply hasn't been tested. It's not that it's not being developed, but no new testable predictions have emerged. That's the first part of his talk. So let's give that up. The second part of his talk was that anything interesting happening now is in condensed matter. And he talked about excitons and phonons and, you know, effective particles. And that's fun. And then the third part of his talk was that what we really need to be working on as physicists is climate change. [1:15:01.4] ____ We hear the economy, right? [laughter] And it was sort of slides that he pulled off Wikipedia that he then presented back to us. It was kind of fun, in bringing [1:15:10.0] ____.

1:15:13.2 SC: Yeah I was gonna say.

1:15:13.7 DK: You're right. And I... But I thought that was telling, and I'd be curious to know how you feel about this, Sean — now that, you know, theoretical physicists have had a really good run, because it's so fascinating mathematically, logically, and so predictive, and that's sort of, kind of, over — where does physics move, and are you all going to become complexity scientists?

1:15:41.0 SC: Right. So it's not over. And it's not true that there's been no, the statement that there's been no major progress in theoretical physics, fundamental theoretical physics since 1980 is adjacent to a true statement, but it is not actually a true statement. I mean there's been no model building that correctly maps onto reality that we didn't have in 1980 you know still general relativity and the standard model of particle physics and the Big Bang theory, enormous insights into the workings of those theories and enormous insights into possible speculative models beyond those theories. I mean just the black hole information problem all by itself has been an enormous font of interesting ideas, holography and things like that. But this is the subject of an upcoming podcast. So I'm not gonna give away too much right now, but I completely sympathize with the claim that we've not made as much progress.

1:16:43.2 SC: Part of that I think is because the first half of the 20th century was absolutely unique in the amount of progress we made in fundamental physics, and we got spoiled. Like, it was so much that the whole second half of the 20th century could be used just fixing up what we learned in the first half. And now it's harder. But I absolutely take the point that it's harder to make progress in these areas. And I think — I've said this explicitly before, so I'll say it again — there's three things we can do. One is keep trying. You can keep proposing models. Maybe the dark matter is this, maybe the hierarchy problem is solved by that. And maybe you get lucky. You never know. Like when Weinberg did electroweak unification, he didn't know it was even very promising. He got a little bit lucky, but he was smart enough to do enough smart things.

1:17:34.2 SC: He was gonna get lucky eventually. The second thing you can do is move into complexity science or biophysics or econophysics or geophysics or whatever. You can move to other levels of the hierarchy, other than the fundamental physics. And I think that is part of what I'm doing, right? But the other part of what I'm doing, which I also want to plump for, is: you can take a step back and think about the foundations of your field. In that excitement of the early part of the 20th century, we rushed past some really big things — whether it's information theory or quantum mechanics or whatever — that I wanna be a little bit more careful and philosophical about. And that still gives us room for major breakthroughs.

1:18:14.3 DK: It's so interesting you say that, because this has always been one of those things that surprised me about the way science works. If you think about Darwin's theory — so Darwin's is by and large an empirical narrative account, and beautiful and amazing, right? But that's how it... Now parts of it get mathematized using dynamical systems. But now we have information theory. Now we have very interesting theories of computation. We could go back and just do it again, [laughter] instead of saying let's just add epicycles, which is what we tend to do — which is okay. So this idea that science should recreate itself historically from time to time, based on all sorts of progress, is really fascinating. And I don't think it's sufficiently done, so I'm very interested in that point.

1:19:11.0 SC: Well, it's not sufficiently done. I do wanna mention one little thing that I did, which is probably a paper you haven't read — it's not really in your list of things to do — but when you were talking about individuals and how they persist in some way, passing on information from generation to generation: I wrote the quantum mechanics version of that, [laughter] not about individuals, but just asking how you can carve up the world into subsystems. We do that in quantum mechanics — we treat the world as subsystems, then we glue them together. But we take the subsystems as given. We don't ask ourselves the inverse problem: why did we carve it up that way? And so Ashmeet Singh and I wrote a paper called Quantum Mereology, mereology being the relationship between wholes and parts. And guess what — they're exactly like you said for your information theory of individuality. There is a set of criteria. It has to do with entropy. And there's a thing you minimize, and that tells you where the subsystems are.
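
A toy version of that "minimize something and it tells you where the subsystems are" idea, with the caveat that this is not the construction in the Quantum Mereology paper itself (which concerns how entanglement grows under the Hamiltonian); it only scans the static entanglement of one four-qubit state across candidate bipartitions, as a minimal stand-in. The state and dimensions are chosen purely for illustration.

```python
import numpy as np
from itertools import combinations

def entanglement_entropy(psi, dims, part_a):
    """Von Neumann entropy (in nats) of subsystem A for the split part_a vs the rest."""
    rest = [i for i in range(len(dims)) if i not in part_a]
    tensor = np.transpose(psi.reshape(dims), list(part_a) + rest)  # one tensor leg per factor
    d_a = int(np.prod([dims[i] for i in part_a]))
    schmidt = np.linalg.svd(tensor.reshape(d_a, -1), compute_uv=False)
    p = schmidt**2
    p = p[p > 1e-12]                      # drop numerically zero Schmidt weights
    return float(-np.sum(p * np.log(p)))

# State: two Bell pairs, on qubits (0,1) and (2,3). The "right" carving into
# subsystems is the one with the least entanglement across the cut.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
psi = np.kron(bell, bell)
dims = [2, 2, 2, 2]

for part_a in combinations(range(4), 2):
    print(part_a, round(entanglement_entropy(psi, dims, part_a), 3))
# (0, 1) and (2, 3) come out with entropy 0 -- the natural subsystems --
# while any cut that splits a Bell pair comes out near 2 ln 2.
```

The published criterion is dynamical rather than static — roughly, pick the factorization in which subsystems generate entanglement as slowly as possible — but the shape of the answer is the same: an entropic quantity, minimized over ways of carving up the Hilbert space.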

1:20:13.1 DK: Yeah. No, that's right. But again, does this point towards a greater unification, and are there generalized theories of ordered states that transcend the living and non-living worlds? As there are in the second law — that's the other end. There clearly are. And so one of the things that has been fascinating to me in this, as I've learned about complexity science, is how this ouroboros turned back on itself: that the study of fundamentals turns out to be genuinely valuable in practice. And when I was growing up, probably you too, it was: look, if you're gonna do fundamental stuff, you are at the South Pole, but if you want to do applied work, you have to go to the North Pole. And it's a long trek, [laughter] it's a long distance and a long time. But what I've discovered, and this has been very reassuring I think, is that in the domains that we work in, it's not necessarily a very long trek. You can tunnel.

1:21:25.3 SC: Well, look, Sadi Carnot wanted to build a better steam engine.

1:21:29.2 DK: That's right. No, but that's exactly it, so that's... The industrial revolution is such a beautiful example, because all of this deliberation about the designed universe is the progenitor to all that we're discussing today.

1:21:44.0 SC: I do wanna give a last chance for sort of grand pronouncements on the future of complexity. But the one thing I wanna get in here before we get there is intelligence. 'Cause this is what you've been thinking about and working on, and we've talked about agents and individuals and the origin of them and even purposes. But we throw around words like thinking and planning and processing and intelligence and cognition. And presumably, within this big framework that we've mapped out, those are also things that emerge along the way. How much do we know about that?

1:22:17.7 DK: Yeah. So that's sort of my core research project. And I guess I'm interested in it more the way a physicist is interested in these things. 'Cause I'm interested in it as a universal phenomenon, as I am with life. It's not, I'm not studying intelligence as a, like a psychologist would, but as an inevitable feature of the universe perhaps. There's so much to say about this, I wouldn't know where to begin. But nowadays of course just, there's an elephant in the room and that's large language models and AI. And actually Melanie and I just wrote a big paper on large language models, and our conclusion without going through all of that was that we need a much more pluralistic attitude to what we think of as intelligence and all its kind of constellation of related concepts. Like meaning, like understanding, like consciousness. I think my effort has been on the one hand to demonstrate or to articulate what is the general character. And the general character to me is basically problem solving algorithms. And that can be conscious or unconscious. And the history of culture is the amplification of that capability through time. We are more intelligent by virtue of the culture we live in.

1:23:57.1 DK: But we've reached this very interesting moment where tools have gone rogue. In going from the slide rule and the abacus to the HP-65 — which Feigenbaum worked out the bifurcation value with — to large language models, something has happened in the world of cognitive artifacts. And my theory of this — and again, this is perhaps too much for now — is what I call a transition from complementary cognitive artifacts, like slide rules and the abacus, which genuinely amplify your capability with or without them actually, to a world of competitive cognitive artifacts that exclude us from deliberation. And so intelligence has now become, as it was, I guess, at the beginning of the 20th century with Binet and Jensen and Pearson and others working on IQ tests and all that nonsense — we've returned to the ethics of intelligence in the last, I don't know, several months. And I think there are general theories for this, in terms of the opacity of the mechanism of action. And so LLMs are like the GPS and not like a map, right?

1:25:32.3 DK: So a map is an augmenter because I can give you a map, and you can look at it, Sean, and say, I can take it away from you and you still preserve elements of that map in your imagination. Whereas with the GPS, clearly that's not true. And I think, so there is in this current moment, a way to express what an LLM is in terms of very general principles of cognitive opacity. And that's been something I've been thinking about.

1:26:02.5 SC: Wait a minute. I think I understood everything you said except the relevance of the phrase cognitive opacity, what...

1:26:07.7 DK: Oh. Which is the following. So we now know that multi-generational, collectively constructed representations like number systems can be internalized by single individuals. Right? So no one person invented the Hindu-Arabic number system, but you have it in your head.

1:26:33.0 SC: Yeah.

1:26:34.6 DK: Okay. So there's this very interesting dynamic that goes on. And part of why that's possible is because you can reason through numbers. You know what their compositional rules are, you know what place-value numbers are, and so on. And the same is true of the abacus and so on. With a large language model, that's not true. There is... You could use it forever and never internalize it.

[laughter]

1:27:08.3 DK: Why is that? And it's not just a dimension. That's why mechanism is so important in scientific theory. The reason you were able to learn classical mechanics, general relativity, is 'cause you deliberated through the mechanics. That's how you came to encode it internally. You didn't just copy and paste it from Jim Hartle's textbook. [laughter] That's not how learning works. Whereas in this case, we can't. And so it occupies a category of... well, that's why I say cognitive opacity.

1:27:47.2 SC: Yeah. Now I get it.

1:27:47.9 DK: Other artifacts that aren't as powerful, like a GPS, also occupy that category, incidentally. And theorizing about the properties of artifacts that can be decomposed and replicated in the mind's eye, versus those that cannot, is a theoretical challenge.

1:28:08.9 SC: I like it and it's a great example of how this highfalutin theorizing matters to issues of the real world that we're gonna be confronting like it or not, very soon.

1:28:19.6 DK: It is. And interestingly in 1950, I don't know if you know this philosopher, I love him, Henry Margenau.

1:28:26.7 SC: I know his name.

1:28:27.3 DK: He wrote a book called The Nature of Physical Reality. He tried to elaborate on the concept of the schema, and he called it the construct. I'm very fond of this idea. And the basic idea is that a good theory should have certain properties, and he enumerates them. And I [1:28:48.5] ____ didn't go through it, but things like: they should be causal. They should be minimal. They should be extensible — so you can apply the concept of entropy to a bridge as well as to a body, right? They should be fertile; by that he meant composable. Anyway. And it turns out that all good theories have construct properties, and it helps us learn them. LLMs aren't... They might contain constructs. But we don't know. And so yes — if you are interested in human flourishing, then I think the kinds of things that we do matter that way.

1:29:34.1 SC: I realized that in fact, I was reading Henry Margenau's name earlier today in the context of him complaining about what a terrible theory the Copenhagen interpretation of quantum mechanics is. [laughter]

1:29:44.7 DK: Great. That's so good. Yeah. Murray would've loved him [laughter]

1:29:49.4 SC: Yeah, exactly. Okay, well, I'm sorry we didn't get a chance to go on about intelligence for hours, because I'm sure we could, but we'll point to your webpage and people can find the work. But, okay, so let's... We've covered a lot of ground. I guess I wanna wrap up with the big thoughts on the future of the field. I have the impression that 40 years ago, at the founding of SFI, there was this idea that we would sit down and define complexity and then find the rules that it obeyed. And whether or not that has actually happened, a lot of the success of SFI or other institutes around the world has really been in a more piecemeal approach to understanding various obviously complex phenomena and thinking about them in a more general way. Do you foresee an ultimate coming together, into a coherent theory of complexity on a T-shirt? Or will the theory of complexity itself become ever more complex?

1:30:56.1 DK: I don't know. [laughter] But I'm not dodging it. I think it's a very good question. It's just that I do think, if I look at the work of Ulam, von Neumann, Conway on automata, I think, God, that's rich. And now more recently Stephen Wolfram. That's fascinating. You look at the work on hydrodynamics and chaos, and you look at the work now, increasingly, on non-equilibrium statistical mechanics and advances in evolutionary dynamics — it does feel as if there's an awful lot of synthesis to be done. And we have meetings, as you know — you come to many of them — where there are tantalizing clues that establish bridges between these fields. So I think there will be a lot of unification. I don't think it will be on one T-shirt. And I think that's because we're not in the Warren Weaver world.

1:32:00.7 DK: We're not in the world of Maxwell's equations; we're in that world that Kolmogorov described as having large description length, and it might be in your wardrobe — might be spread over several different items of clothing, including your socks — but I don't think it will be on a single T-shirt. And hence, I think, our interest in algorithms, in code, and in alternative formal languages. That's one side of it. So I'm very optimistic about synthesis, but I don't think it will be in that sort of super-Occam form that we've got used to. But just as important — and I know this again is an issue of yours — is a deeper, more principled understanding of society, of political institutions, of economic structures. It's quite clear that we got here by accident. We stumbled into the modern world, and we're stumbling all over it. And I do believe that new ideas are required, actually, and I'm not saying they'll just come from science — I'm sure they'll come from philosophy, and politics, and literature — but that kind of consilience I'm also optimistic about. As long as we're generous with other fields, and not excessively epistemologically greedy, which I think we can be, actually.

1:33:27.3 SC: Well, that does lead me to the last question that I have, since as well as being an accomplished scientist, you're also the president of the Santa Fe Institute, which is... I'm sure almost all my listeners know about it, since I'm fractal faculty there, but it is a little utopia in the mountains where people come together to think about large interdisciplinary questions and to me, the success of this approach is just obvious, and compelling, and intoxicating and yet it has not taken over the world. There's other places that are still resistant to thinking about complexity and complex systems for its own sake. Do you perceive reasons why that's the case? Is this just the stodginess of academia in general, or is it something deeper than that?

1:34:14.6 DK: I think there are many factors, many, some negative and some positive. The negative ones are obvious, right: territoriality, resistance to change, fear of not being the master, that kind of thing — which is a professional deformation of experts. On the other side, SFI is a kind of weird laboratory that generates variants but doesn't breed them. We don't scale. If we scaled, you'd hate it — it wouldn't have that delightful property that you like, that we all know each other, we all bump into each other. And I think that what happens at SFI very naturally is that successful projects go elsewhere to scale. And the problem with that institutionally — it's not a problem, I don't think, for SFI, but to answer your question of why it is rare — is that money moves towards larger projects closer to execution. And so we're a small, scrappy little outfit in the mountains, a bunch of weird monks and nuns running around on a dime.

1:35:36.8 DK: And I think that's important in a way. If a project were to become very successful, you'd have to... We don't allow groups, for example. Groups are prohibited. There can't be the Sean Carroll group or something. There's you and there's me and there's a bunch of our friends and we argue with each other. If you want to go and scale your research project, which you might need to, incidentally if you're working in neuroscience or God knows what, but then you go somewhere else and I think that the culture of parsimony and early phase venture is not what funding particularly likes or what scientists particularly like. And that's okay because I think it's dispositional and you're a good example. Look, Sean, you have the best of both worlds, right?

1:36:35.0 DK: You can move through... I think I have some congenital dislike of large institutions, so I don't go there, and that's probably true for some of my colleagues here who are here the whole time. But so I think that combination — the way that science is supported, the way it's rewarded, largely through scaling, and then the disposition to enjoy the uncertainty of a startup, which is what we are — you multiply those together with some other terms and then you explain why we're rare.

1:37:11.4 SC: I like it. I like it very much. I do have the best of both worlds, so I am fortunate and I recognize that I'm privileged. But here's to the scrappy band of misfits, of the monks and nuns running amok. I think that great things are gonna be coming in the future. And David Krakauer, thanks so much for being on the Mindscape Podcast.

1:37:24.3 DK: Wonderful. Thank you, Sean.

[music]

9 thoughts on “242 | David Krakauer on Complexity, Agency, and Information”

  1. Re: Maximum feature emergence between low entropy and low Kolmogorov complexity

    Sean,

    I noticed from this podcast with David Krakauer and from a Gifford (?) lecture you gave some time ago that the question of how to analyse the arc of the complexity spectrum is something that interests/bugs you. I have written a paper which does that.

    In a nutshell, if a model or classification is defined as a partition of the Cartesian space of the underlying variables, then the optimal model is where the sample data points are concentrated in small components and rare in large components, in a way that can be defined explicitly in an entropy expression similar to relative entropy. Essentially the expression is optimised where the derived variable entropy is low (between components) and the underlying variable entropy is high (within each component). The paper shows that artificial neural networks, for example, maximise this expression and that is why they work. I have implemented a model search algorithm based on a statistic similar to mutual entropy which also maximises the entropy expression.

    The paper and the implementations are at my (non-commercial) website greenlake.co.uk.

    Cliff

  2. Pingback: Sean Carroll's Mindscape Podcast: David Krakauer on Complexity, Agency, and Information - 3 Quarks Daily

  3. I greatly enjoy the Mindscape podcast in general, and I think Sean did his best to make this conversation fruitful, but unfortunately, Krakauer is a terrible interviewee. Yes, there is a difference between the study of things that do not encode information about the world and the study of things that do, but beyond that, what are the similarities across complex systems such that there could be anything called “complexity theory”?

    Sean asked several good questions — about emergence, downward causation, and so on — and Krakauer’s responses were little more than dodges. Consider Krakauer’s response to Sean’s question about strong emergence and new laws of physics at the macro level that influence what is happening at the micro level:
    “I’m against it and I’ll explain why I’m against it …. it’s greedy. Those new laws of physics are called English literature, [laughter] or musical composition or metaphysics or carpentry. And it’s not a new law of physics, it’s a new theory. And it might have laws in it, I’m not sure, may have rules in it, might be a little bit more modest….”

    Just no. You can say we don’t really know the answer to the question, that is, we don’t know how macro states such as the mental state a poem puts me in might lead to physical changes in my brain and body, but to say, “well, it’s greedy to try to answer the question, we have English literature and what not!” is empty rhetoric, and rhetoric that won’t work on anyone paying attention.

    Another frustrating tendency Krakauer has is to hide behind names of other authors. When you are asked a question on a podcast, you have to answer it, not bring up 17 people who may or may not have had anything useful to say, without ever explaining what it is that they actually said. The name-dropping really came off as smoke and mirrors.

  4. Ms Levy — Your comment about name-dropping reminds me of a remark by Mark Solms (cofounder of neuropsychoanalysis) in a podcast – something along the lines of
    “If I were Robert Sapolsky, I’d not only give you some great answers to your question, I’d also refer to a dozen experts and tell you what THEY’d say.
    “I don’t have enough time for that, so I’ll just tell you what I think…”

  5. Thanks for reminding us of your Gifford Lectures! Diving into them now.
    One of my favorite (and most re-read) metaphysical inspirations, is Alfred North Whitehead’s GLs from a century ago, the series painfully re-edited into the 1979 corrected edition of “Process and Reality.”

    I must re-work my ideas of complexity, in light of Prof. Krakauer’s dialog with Sean! Whatever is most paradoxical, most surprising, and most broad in scope is (to my taste) exactly the right place to start in ‘scoping out’ any field. And that’s just what I see in Krakauer’s analysis, branching out specifically from his insistent and colorful rejection of “strong emergentism.”

    GREEDY is exactly the right word for the aggression of intellectual imperialists! – Perhaps to be supplemented by “arrogant,” “bigoted,” “infantile,” “pathetically condescending…”
    Rafael Nunez splendidly points to the intellectual crimes of Kronecker against Cantor, in “Where Mathematics Comes From.” We could easily add the crimes of Heisenberg/Bohr against Schroedinger and Einstein (as illuminated by Mara Beller’s merciless dissection of Bohr’s public bullying, in “Quantum Dialog”).

  6. (A little tongue in cheek here, but I think it should be mentioned.)

    Some galaxies think; therefore, they are!

    Complexity Theory
    Should (also) in some way bring greater clarity to the apparent ontological and epistemological disconnects:

    Das ding an sich, quantum uncertainty, incompleteness, More is Different ….

    The ineffable aspects of reality, i.e., beyond representational access, foreclosing on complete before the fact prediction, as manifest in some reduced model (digital or otherwise).

    The complexity discussion seems to focus on Biological Complexity, and emergent representational calculators, able to perform predictions better and better.

    Nevertheless, it seems a reasonable speculation, that biological complexity is in service to what appear to be fixed overarching physical principles, and the locally emergent manifestations of those principles.

    That said, thanks much for the discussion, really appreciate the erudition and effort. More is better!

  7. The discussion was very enjoyable and fascinating, even though I am certainly not a mathematician or theoretical physicist! However, as an Earth scientist I am equally allured by complexity and emergent patterns. I am also a fan of the concept of and kind of work done at SFI. As a suggestion for a future guest, Sean might invite Eric Smith, who was there a while ago, and who wrote a tome with Harold Morowitz on the origins of life on Earth. That would entail discussions for more than one podcast, I suppose!

