Our universe started out looking very simple: hot, dense, smooth, rapidly expanding. According to our best current model, it will end up looking simple once again: cold, dark, empty. It's in between -- now, roughly speaking -- that things look complex. I have been working to understand the stages by which complexity comes into existence, thrives, and eventually disappears. Without going into technical details, in this solo episode I give an overview of the general picture and the clues we are looking at to better understand the process of complexogenesis.
Support Mindscape on Patreon.
The Santa Fe Institute has recently published a four-volume set of classic papers on complexity. David Krakauer provided a comprehensive introduction that has been published as a standalone book.
0:00:00.4 Sean Carroll: Hello, everyone, and welcome to the Mindscape podcast. I'm your host, Sean Carroll. Podcasting, like the subject of today's episode, is a complex system. Many things happen. You cannot always know what is going on. Sometimes the schedule kind of gets away from you and you decide that this would be the right time for a solo episode. This is a fancy way of saying that I'm behind on actually recording episodes because of various things that happened. So why not just do it myself? That's always a strategy that is available to us. And I'm recording this from Santa Fe, New Mexico, where I'm doing one of my regular visits to the Santa Fe Institute as part of being a fractal faculty there. So complexity is on my mind. We had a very nice meeting just last week on science and history, both of which involve complexity in different ways, and it was stimulating to hear historians and scientists come together. But what I've been thinking about for a long time is complexity and the universe. And I know that in bits and pieces, I've talked about this in AMAs and other solo episodes, even in books and things I've written. I've even given talks on it. You can find talks online on YouTube that are pretty close, similar at least, to what this solo episode is going to be like. But I thought it would be good to take a step back, not really talk about specific individual research-level ideas. I have some of those, but they're very vague and they're not very far along right now. So rather than that, I'll talk about the big picture of this question of how complexity comes to be in the universe.
0:01:36.9 Sean Carroll: Some of you may know I already wrote a paper on that topic with Scott Aaronson and Lauren Ouellette quite a while ago, and we still haven't published the paper, but we're still working on that. It's like 10 years later. Don't worry, we'll get there. Science doesn't care when you actually publish it. It cares about the truth. But there are many, many places to go beyond what we did in that very simple paper, which I'll describe later in the episode. So, there are a lot of fronts on which one can attack the problem of how complexity comes into existence in the universe over time. And I'm saying this as someone who knows a lot about this subject in some ways, but not nearly everything in other ways. I've not been doing complexity research all my life. I have been doing universe research all of my professional life, so I know more about cosmology than the average complexity person, more about fundamental physics, less about non-equilibrium dynamics and computer science theory and statistical mechanics and complexity theory and all those things. So, we're trying to put them together in a novel way and we'll see what happens. So I thought that would be fun to just sort of lay out the general picture as I see it and also places we're hoping to go, questions that are still open, things we're trying to learn about, calculations maybe we would like to do, ideas to keep in mind as we're doing all of these things. So that's what this episode will be about: complexogenesis, as I sometimes call it. We talk about baryogenesis, the origin of baryons in the universe. Complexogenesis is the origin of complexity in the universe. The very, very early universe was a simple place. The current universe is a very complex place, at least in parts of it. How did that happen? Is there any scientific, quantitative, rigorous understanding we can put to that? I'll give you my take on that.
0:03:27.6 Sean Carroll: Other people will have their own takes. Let me also take this opportunity to quickly say that you could be a Patreon supporter of the Mindscape podcast. Just go to patreon.com/seanmcarroll and join up to support the podcast. You get a lot of benefits, including the Ask Me Anything episodes that we do once a month, and there are also other ways to support Mindscape that are completely free, like leaving reviews at iTunes or Spotify or, I don't know, wherever one leaves reviews of podcasts. Spread the word, let other people know that this podcast is worth listening to. We've been doing it for a long time now, and I enormously appreciate the support that the Mindscape audience has given to the podcast. And so with that idea, let's go.
0:04:30.8 Sean Carroll: Of course, if you want to talk about the origin of complexity, the very first thing you have to do, or one of the first things, is tell me what you mean by complexity. Give me the definitions of complexity. And people argue over that, of course; there are multiple definitions out there. I kind of don't care. One of my goals in this presentation is not to tell you what the right definition of complexity is. It's to take the fact that complexity has all sorts of different aspects to it. Some we know when we see them, others we perceive by thinking about it more carefully, and we include all of them. There's this picture, as I'll say, where the universe starts without any complexity at all, in a very real sense, and all sorts of different kinds of complexity develop over time. And we're going to see how that happens as a set of stages, bit by bit. You get certain kinds of complexity develop, and then maybe other more sophisticated kinds happen later on. Not everyone sees it that way, of course. David Krakauer, who is the president here at the Santa Fe Institute and former Mindscape guest, made the point in our conversation that he considers complexity to be sort of real complexity only when a system can be considered to be teleonomic, that is to say, to have some goals of its own. The picture being that at some point in the history of the universe, physical systems develop the capacity for having information content gathered and thinking about the future and moving towards some goal, and those are all characteristic of complexity.
0:06:06.6 Sean Carroll: And he's worried a little bit that if you just include everything in the definition of complexity, even things like spin glasses that Giorgio Parisi recently won the Nobel Prize for, then it all just becomes a subset of physics and you're missing important things. Well, so my attitude, which I don't think is substantively in disagreement, but we put our emphases in different places, is that everything is a physical system. There are no non-physical systems. There are different ways of talking about physical systems, and some of those ways might be biological or mental or whatever, and there should be a unified picture. I'm very interested in both the fundamental levels of reality and the higher emergent levels, and in particular, understanding how they're compatible with each other, how the higher levels are constrained by the fact that they supervene on the lower levels in some very real way. Furthermore, I think that this idea of teleonomic matter, or advanced sorts of complex systems that can adapt to circumstances and things like that, is great and important. You need to get there, but you're not just going to leap into it right away. You're not just going to have some random collection of molecules simply spontaneously organize into that, or at least that's not the best way to make collections of matter like that. It's going to happen by stages, and so we'd like to understand what those stages are, even if the earliest stages don't look teleonomic or information processing at all.
0:07:36.0 Sean Carroll: Okay. With that little throat clearing out of the way, and again, without even defining quite yet what we mean by complexity, let's think about the evolution of complexity over time. We think that some parts of the universe do grow more complex. The biosphere has grown more complex over the past 4 billion years. The universe has grown more complex over the past 14 billion years. Again, not because there is some goal of doing it, not because the laws of physics direct it to happen, but through some features that are importantly dependent on both the laws of physics and the initial conditions. This is why cosmology is relevant to talking about complexogenesis. There's an initial value problem here. Those of you who have listened to me or to Mindscape many times will know that it's a very parallel discussion to the discussion of entropy and the arrow of time. Entropy is a feature of macroscopic collections of matter. There are many ways of defining entropy, just like there are many ways of talking about complexity. Boltzmann's definition, the definition that is on Ludwig Boltzmann's tombstone, is to say that there are certain things you can observe about a physical system, and there are many, many different configurations of the microscopic constituents of that system that are compatible with those macroscopic observations. And so let's chunk up all the possible microscopic states of the system into macrostates. Macrostates are sets of possible microscopic configurations that look the same to us macroscopically. Then it is a feature of the world, which we can try to explain, but that's a job for a different podcast, that at early times, soon after the Big Bang, by the standard way of thinking about macrostates and microstates in the universe, the universe had a very, very low entropy.
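For concreteness, the formula engraved on Boltzmann's tombstone can be written, in modern notation, as:

```latex
S = k_B \log W
```

Here S is the entropy, k_B is Boltzmann's constant, and W is the number of microscopic configurations (microstates) compatible with the given macrostate: the bigger the macrostate, the higher the entropy.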
0:09:38.0 Sean Carroll: The status of why that's true is certainly very interesting and not something we're going to go into. We're going to just take it as given for this particular discussion. And then simply because there are more ways to be high entropy than to be low entropy, it's the most natural thing in the world if entropy increases over time from that initial low entropy past to the future. We are nowhere near done in that process. Roughly speaking, we don't know about the future of the universe because there are things we don't know about physics and conditions and things like that, but we have a standard picture of what the universe might look like. And roughly speaking, about 10 to the 100 years from now, we will reach maximum entropy, a thermal equilibrium state of the universe. And right now, we're only 10 to the 10 years after the Big Bang, so 10 to the 100 years is very far in the future. But the rate at which interesting things happen slows down. So even though we're only a tiny fraction of the way into the history of the universe, a lot of interesting things have happened in those 14 billion years. To cosmologists, 14 billion and 10 billion are the same number. So 1.4 times 10 to the 10, and 10 to the 10 are not numbers we need to worry about distinguishing, cosmologically speaking. So sometimes I'll just say the universe is 10 to the 10 years old. So the reason why I'm retelling that story is both because it's important to the complexity story, but also it sort of parallels the kind of discussion we have, because we're not introducing new laws of physics. We're asking, what are the features of the laws of physics that give rise to this behavior? And certainly the initial conditions of the universe are playing a very important role here.
0:11:20.7 Sean Carroll: Now, in the biological case, just to prefigure or foreshadow a little bit, again, there's nothing in Darwinian evolution that says you're supposed to go towards higher levels of complexity. We had Michael Wong on the podcast not too long ago, and he and Bob Hazen and their collaborators suggested a sort of law of increasing complexity, but it's only a provisional law that is only supposed to work under certain circumstances. And it's not even proven as a law. It's sort of a conjecture, if you like. And we're not going to get into that because we're thinking even bigger picture than they are right now. But the point is that it's well known that individual biological species sometimes lose complexity. Biological species are trying to adapt themselves to their environment. If a species that was living in the sunlight changes its environment, so now it's living underground and there is no sunlight, it might lose the ability to see. It might lose vision because it's using up resources on maintaining that little bit of complex structure that it doesn't need anymore.
0:12:26.6 Sean Carroll: So there's nothing in biological evolution that says we need more complexity. Rather, in biological evolution, you're exploring a space of states, a space of genomes of biological creatures, where we haven't explored nearly all of it and we never will. It's too big. So there's plenty of room for discovering new biological innovations, and those can happen in part of the biosphere and not other parts. So, just like it's very natural for entropy to increase over time, it's also very natural for biological complexity to increase over time until you reach some sort of saturation point. And just like right now in the history of the universe, we're nowhere near that saturation point, right now in the history of the biosphere, we're nowhere near that saturation point. The difference is there can be events that are maybe not completely improbable that dramatically decrease biological complexity, catastrophes, whether self-imposed or imposed from somewhere else in the universe. It's not a law of nature, it's just a tendency, and sometimes that tendency can be reversed. So let's compare those two sorts of stories, the story of increasing entropy and the story of increasing complexity. And again, I'm very honest about not having defined complexity yet. It's in the eye of the beholder right now; we can get to different definitions later.
0:13:49.0 Sean Carroll: The example I use, and I love using it, I'm gonna keep using it, and many of you already heard me use it, is cream mixing into coffee. So imagine in your head, for the lucky few out there who haven't heard me give this example before, a cup of coffee with coffee at the bottom and cream on the top. That is a low entropy configuration of cream and coffee because it's a very specific kind of arrangement. You can rearrange the cream molecules within each other and you wouldn't notice it macroscopically. You can rearrange the coffee molecules within each other and then you wouldn't notice it macroscopically, but you can't mix cream with coffee without noticing. So that's why it's a low entropy state. You can then mix it, you put a spoon in there or you just let it mix itself over time. Maybe it's in a mixer or something, and it will become all mixed up and now it's in a high entropy state. And the high entropy state is everything's mixed together. Everything looks perfectly uniform. It's very conventionally true of very high entropy states that they are featureless. Because if there were features in the high entropy state, you could sort of mix them together and increase the entropy. So the completely mixed cream and coffee situation is high entropy and voila, we have the second law of thermodynamics that entropy tends to increase in closed systems over time. From completely unmixed to completely mixed is a journey of increasing entropy.
0:15:20.1 Sean Carroll: Whereas if we think about the complexity of this system, and again, without defining it, we're just gonna follow our noses and say, look, when the cream and the coffee are completely separate, that's a pretty simple configuration because intuitively it was easy for me to precisely describe it to you. Namely, all the cream is on the top and all the coffee is in the bottom. Macroscopically, there's nothing more interesting going on. Microscopically, maybe I need to tell you the position and velocity of every molecule in there. So already we've learned something. There's something about complexity that is a coarse-grained macroscopic phenomenon. At the level of the microstates, at the level of the position and velocity of every molecule or atom or elementary particle in that cup of coffee, there's nothing that distinguishes the amount of information you need to convey the state of the system from one moment to the other, whether it's mixed or unmixed. It doesn't matter.
0:16:21.4 Sean Carroll: This is very much like saying that Laplace's demon doesn't know about entropy because entropy is a coarse-grained phenomenon. Entropy is an example of something I can say about a system given wildly incomplete information. And likewise, Laplace's demon has complete information, so it doesn't need to talk that language. Complexity is a similar thing. The reason why the configuration with the cream and the coffee completely separate is simple is that there exists a highly compressed description that tells you everything about the macroscopic configuration. Likewise, when you've mixed everything together, and now the cream and coffee are all mixed and it's a high-entropy configuration, it is still very simple because, again, I've given you the complete macroscopic description. So here's one version of complexity: first, coarse-grain the system. Ignore all the microscopic specificities that you don't really care about from your macroscopic point of view, and then ask, how much information do I need to give you to completely specify the state of the system? That is one version of complexity. And in this particular example, even though it is low-entropy at the beginning and high-entropy at the end, it is simple, that is to say low-complexity, at the beginning and also low-complexity at the end.
0:17:44.9 Sean Carroll: The punchline, of course, is that in the middle, where the coffee and cream have begun to mix into each other, and maybe you see some tendrils of cream and coffee or some swirls, some turbulence in there, something like that, there, to tell you precisely where all of the cream and coffee, the different layers of darkness and brightness and so forth, would appear to you in an image would require a lot more information. That's when it looks complex. And this behavior is, I would say, quasi-robust. By this behavior, I mean the idea that in a closed system, you start with low entropy and entropy simply increases, but you start simple and complexity can grow and then decrease. That is quasi-robust in the sense that it doesn't have to happen, but it's a very natural thing for that to happen. Complexity can happen at medium entropy configurations of stuff, and in this case, it actually does. So that's interesting. The idea, very roughly, is that entropy increases but complexity comes and goes. Now, number one, that's certainly not a very sophisticated version of complexity. There's no teleonomy there. There's no substructure. There are no power laws. There's no hierarchical network or anything. All the various things that conventionally go along with discussions of complexity, none of that is there. What we're talking about is literally an amount of information needed to specify a macroscopic configuration.
0:19:18.8 Sean Carroll: And this is quite literal. You could actually do this experiment with the cream and the coffee: take a picture of it on your iPhone and save the images of the cream and coffee separate, halfway mixed together, and completely mixed together. If you do it right, and you can do this, it doesn't need to be cream and coffee, it can be whatever fluids you like, as long as they're distinguishable, the image that you save on your phone of the medium-entropy configuration where they're half mixed together will have a larger file size than the files of the simple configurations, where the cream and coffee are either all distinct or all mixed together. There is a more efficient compression available when the cream and coffee are completely separate or completely mixed, because there are big parts of the picture that look the same macroscopically, and your compression algorithm, JPEG or GIF or whatever, is taking advantage of exactly that. So this sort of very simple-minded version of complexity is literally tracked by how much we can compress the macroscopic information.
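The file-size experiment can also be sketched in code. The toy model below is loosely in the spirit of the coffee-automaton paper with Aaronson and Ouellette mentioned earlier, though it is not their actual implementation: it mixes a grid of cream and coffee cells by random swaps, coarse-grains into blocks, and uses gzip's compressed size as a stand-in for the photo's file size. All the parameters, the grid size, block size, quantization levels, and step counts, are illustrative choices.

```python
import gzip
import random

# Toy "coffee automaton": cream (1) sits on top of coffee (0), and the two mix
# via random swaps of vertically adjacent cells. At intervals we coarse-grain
# into blocks, quantize each block to coffee/mixed/cream, and gzip the result;
# the compressed size plays the role of the photo's file size.

random.seed(0)
N = 64      # the grid is N x N cells
BLOCK = 8   # coarse-graining block size

grid = [[1 if row < N // 2 else 0 for _ in range(N)] for row in range(N)]

def mix_step(g):
    """Perform N*N random swaps of vertically adjacent cells (toy diffusion)."""
    for _ in range(N * N):
        r, c = random.randrange(N - 1), random.randrange(N)
        g[r][c], g[r + 1][c] = g[r + 1][c], g[r][c]

def apparent_complexity(g):
    """Coarse-grain into BLOCK x BLOCK averages, quantize to three levels,
    and return the size in bytes of the gzipped coarse description."""
    levels = bytearray()
    for br in range(0, N, BLOCK):
        for bc in range(0, N, BLOCK):
            frac = sum(g[r][c] for r in range(br, br + BLOCK)
                               for c in range(bc, bc + BLOCK)) / BLOCK ** 2
            levels.append(0 if frac < 1 / 3 else (2 if frac > 2 / 3 else 1))
    return len(gzip.compress(bytes(levels)))

sizes = [apparent_complexity(grid)]
for step in range(1, 401):
    mix_step(grid)
    if step % 40 == 0:
        sizes.append(apparent_complexity(grid))

# Typically: small at the start, larger mid-mixing, small again once mixed.
print(sizes)
```

In typical runs the compressed size starts small, grows while tendrils and partial mixing are visible at the coarse-grained level, and shrinks again once everything is uniformly mixed, which is exactly the rise and fall of apparent complexity described above.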
0:20:28.5 Sean Carroll: Now, there is a tension between the fact that entropy increases over time and the fact that complexity comes into existence in the biosphere. This is a well-known tension that has been exploited by people who want to teach creationism in schools. There's an argument that biological evolution is incompatible with the second law of thermodynamics. This argument is complete bullshit. It's very, very wrong, but there's still something that remains to be explained, so I'm going to be very, very careful and explicit about this. The fact that complex structures like you and me, like other animals and plants and so forth, come into existence in the biosphere is completely 100% compatible with the second law of thermodynamics. Even though the sort of intuitive everyday language gloss on the second law would say disorderliness increases over time. The tension is, if the whole universe is going through a process by which disorderliness is increasing over time, how can it ever come to be that things like you and me, which are exquisitely organized biological machines, would pop up in the mechanistic, non-teleological, not goal-directed evolution of ordinary physical stuff? It doesn't seem like the origin of life or the later evolution of life from simple single-celled things into complex multicellular things is an example of entropy increasing.
0:22:09.8 Sean Carroll: Now, the answer there is very well known to anyone who knows anything about this, which is that the Earth and the biosphere are not closed systems. I even said it when I quoted the second law. In a closed system, entropy increases over time. The Earth is not a closed system. The Earth gets light from the sun. And it's very, very important that the sun is a hot spot in a cold sky that provides a source of energy, but it's a source of low-entropy energy. The Earth gets light from the sun, it does things with it, and then it gives back the energy to the universe. And it gives back the same amount of energy, roughly speaking. These days, it gives back a little less because of global climate change. We are keeping a little bit more energy than we give back to the universe, but that's a tiny perturbation on the overall flux of energy. The important thing is that we give that energy back to the universe in a much higher entropy form. For every one photon we get from the sun, which is typically a visible light wavelength photon, we give back 20 photons to the universe, 20 infrared wavelength light photons, and that's 20 times the entropy, roughly speaking.
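The one-photon-in, twenty-photons-out accounting can be checked with back-of-the-envelope arithmetic. A thermal photon's typical energy is proportional to the temperature of its source, so at fixed total energy, the number of photons scales as the ratio of the two temperatures. The temperatures below are standard textbook values, not numbers from the episode.

```python
# Back-of-the-envelope check of the "one photon in, twenty photons out" claim.
# Thermal photons carry energy proportional to their source temperature, so
# conserving total energy means the photon count goes up by the temperature
# ratio when sunlight is re-radiated as cooler infrared.

T_SUN = 5800.0    # effective temperature of sunlight, in kelvin
T_EARTH = 290.0   # rough temperature of Earth's re-radiated infrared, kelvin

photons_out_per_photon_in = T_SUN / T_EARTH
print(round(photons_out_per_photon_in))  # → 20
```

Since each thermal photon carries entropy of order the Boltzmann constant, twenty times as many photons means roughly twenty times the entropy, which is how the biosphere can lower its own entropy while the whole Earth-sun-sky system raises the total.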
0:23:25.5 Sean Carroll: So, if you ignored the flux of radiation from the sun and then back out to the universe, the biosphere coming into existence would represent a decrease of entropy. But it's not a net decrease of entropy in any sense whatsoever. It's parasitic upon the fact that the whole picture, including the light we get from the sun, is absolutely increasing entropy over time. It's exactly like saying the second law of thermodynamics does not prevent you from cleaning up your room. Cleaning up your room lowers the entropy of the configuration of stuff in your room, but it doesn't lower the entropy of the universe, because you're doing work. You are sweating and cursing and whatever it takes, and if you were very, very careful about accounting for all the entropy, you would see that it's going up. Okay. So, when I say there's a tension between the existence of complex biological structures and increasing entropy, it's only an apparent tension. If you really understand what's going on with the entropy budget, there's no conflict at all. Nevertheless, if you've gone beyond the sort of culture-war political battles about teaching evolution in schools and are just asking the science question, even though there's no contradiction with the second law in saying that entropy is increasing while biological complexity is also developing here in the biosphere, it's also not obvious why it happens. It's allowed to happen, but that doesn't mean it will happen. It's a little bit trickier than that. The moon gets light from the sun and radiates it back to the universe, but it doesn't develop life in any obvious way. So, this raises the questions of complexogenesis. Where exactly does all that complexity come from? What are the necessary and sufficient conditions for these kinds of complexity to develop?
0:25:18.2 Sean Carroll: Okay. That's all warm up. That's all inspirational pep talk, and now we can start thinking about the universe more specifically, more seriously. 14 billion years ago, there was something called the Big Bang. There's a whole other discussion to be had about what you mean by the Big Bang. We're not going to talk about singularities in the beginning of the universe. We can start talking a few seconds after the actual Big Bang event if you want. We can talk about the part of the universe where we actually know something about it. We do know something about the universe just a few seconds after the Big Bang because of Big Bang nucleosynthesis. The early universe was a nuclear reactor and a fusion reactor that was turning hydrogen and neutrons into helium and other light elements, and you can see the effects of that. You can predict exactly the relative ratios of protons to helium nuclei and so forth, and you can see in the current universe, in parts of the universe which have been relatively undisturbed by the appearance of stars and things like that, that the abundance of helium and other light elements today matches what we predict from general relativity and from our knowledge of the contents of the universe from those nuclear fusion reactions a few seconds or minutes after the Big Bang. So we know that. There might have been, before that, something like inflation, a period at a much shorter time after the Big Bang, 10 to the minus 30 seconds or whatever, when the universe didn't have particles at all. It was dominated by some inflaton field. And that is more or less smooth and featureless, and then it reheats, as we say, and turns into this gas of hot particles. But we don't know that for sure, so we'll think about that. We'll keep that as an option. But I'm just letting you know that that's the part of the history of the universe which we don't have 100% control over.
0:27:07.3 Sean Carroll: If inflation did happen, so let's talk as if it did for a while: you take the universe as it is today, you take sort of the volume that we can see, maybe 20 billion light years in every direction, and then you shrink that down, under our extrapolation of the expansion of the universe given general relativity and its matter content, to what it was in some tiny fraction of a second, and you imagine that inflation happened. You don't need to claim that it did; it's not really going to matter for anything that we're going to say here, except for one thing, which I'll be specific about in a second. Mostly it's just something that we can ask about right now. The thing about inflation is there are not a lot of particles moving around. It's just one big scalar field, and there's essentially no specificity to the configuration of that scalar field. It's very boring. There's not a lot going on. It corresponds to a low entropy configuration. In fact, roughly speaking, all of the entropy comes from gravity, comes from space-time.
0:28:15.2 Sean Carroll: This is one of the reasons why this whole discussion of gravity and cosmology is slightly complicated, because cosmology, where you have the whole universe as your subject of interest, is a case where gravity matters, and gravity matters for entropy in particular. And entropy and gravity are two subjects which we don't have 100% confidence talking about. We have some knowledge, given what Stephen Hawking and Gary Gibbons did in the 1970s, so we're going to wave our hands a little bit. But all of this is to say it is perfectly adequate for our current purposes to say that the entropy of the universe during the inflationary era was maybe something like 10 to the power 10. It could have been 10 to the power 1. It could have been 10 to the power 20. None of this really matters for our current discussion. It's just that it will be a very low number compared to what the entropy is a little bit later on. The reason for the uncertainty is that we don't know much about the specifics of inflation. We have lots of different possibilities for how inflation could have happened and so forth.
0:29:22.7 Sean Carroll: But let's just keep that number, 10 to the 10, as a number out there for the entropy at the very, very early times, 10 to the minus 30 seconds after the Big Bang. Then once you reheat the inflationary energy into ordinary matter and radiation, we can count what's in our observable patch, which I'm calling the universe. I'm going to be a little bit sloppy about this, I can't help it, sorry. When I say the entropy in the universe, I mean what I said before, which is the region of space that corresponds to our currently observable universe, perhaps extrapolated backward or forward in time. Okay? I don't have any idea what the entropy is outside our observable universe, so I'm not talking about that. And even though I don't have complete observational evidence about what our universe was like in the very far past or the very far future, I can use our standard picture of cosmology to talk about what the understanding of entropy would be under that picture. So, if cosmology turns out to be different because of some future discovery, then we can re-have the conversation. But anyway, within what we call the co-moving volume, the volume of space that corresponds to the volume of space we can observe today, there are about 10 to the power 88 particles in the universe today.
0:30:40.9 Sean Carroll: Almost all those particles are either photons or neutrinos. How do we know this? Sometimes it's because of direct observation: with the photons, they're mostly in the cosmic microwave background, and we can actually just detect them and count them. With the neutrinos, it's harder, but we can make a prediction, once again, based on known physics, and we can test that prediction against the data. As for the number of neutrinos, you might say, look, you might have heard that neutrinos come in different forms. There are electron neutrinos and there are muon neutrinos and there are tau neutrinos. How do we know how many of them there are? And how do we know there aren't other kinds of neutrinos that aren't included in our current knowledge? These are excellent questions, but cosmologists are not idiots; they thought of these questions. They have a theory that predicts how many neutrinos there should be if there are only three different kinds, and that theory says, look, at very early times, there were roughly equal numbers of photons and each kind of neutrino, because they're created equal. These are all essentially massless particles. Neutrinos have tiny masses, but compared to the energies in the very, very early universe, the mass of neutrinos is essentially zero. It's negligible.
0:31:53.4 Sean Carroll: What happens is there are various events in the history of the universe, like electrons and positrons coming together and annihilating. We know that they annihilate into photons. So you create more photons in the universe but you don't create more neutrinos. So even though there are three kinds of neutrinos and only one kind of photon, we actually think that there are more photons in the universe than neutrinos. None of this matters. I'm just trying to give you reason to believe that I'm not cheating you, that I'm not lying to you. We thought about all these issues. The point is there are roughly 10 to the 88th particles in the universe, mostly photons and neutrinos. Why? Because they're light and they don't annihilate with each other and go away. So they're easy to make, they're hard to kill. That's why there's mostly photons and neutrinos. Things like electrons and protons and neutrons, well, neutrons are unstable. They just go away. Unless you capture a neutron in a nucleus, it's not going to last very long. So, of the heavy particles, we mostly have protons and electrons, and there are roughly 10 to the 80th of them compared to the 10 to the 88th of the photons and the neutrinos.
0:33:03.9 Sean Carroll: So, 100 million times as many photons as there are protons or electrons. Now, there could be dark matter in the universe. That's absolutely possible. How much of it is there? We know the density of dark matter in terms of grams per cubic centimeter. We don't know the mass of individual dark matter particles. If the mass is larger than that of a proton, which in most dark matter models it is, then the number of dark matter particles is much smaller than the number of either photons or protons. So we don't need to worry about it. If the mass is much lighter, then it's a trickier story, but you might expect that you get approximately the same order of magnitude of light dark matter particles as you do photons or neutrinos. All of which is to say, the entropy of the universe in our co-moving volume, as far as our best cosmological models predict right now, is about 10 to the 88th. And that is true today. The entropy of the photons and neutrinos is about that. It was also true soon after Big Bang nucleosynthesis, when we made all those light nuclei, and we actually have some observational data about what was going on.
0:34:19.7 Sean Carroll: So all that is to say, if we're tracking the entropy of the universe over time, the entropy of our co-moving volume of universe, it starts at maybe 10 to the 10, and eventually, not too long after, it grows to something like 10 to the 88th, because the entropy of a gas of particles is, to within an order of magnitude, the number of particles that are in there, if it's a thermal distribution, which it is in this case. So the entropy goes up. It goes from 10 to the 10 to 10 to the 88th. That's good. That's what the second law of thermodynamics says should happen. And there are only two more events in the history of entropy in the universe that really matter. One is that you make black holes. Stephen Hawking told us that black holes have entropy. There's a simple formula. If you have a million solar mass black hole, its entropy is approximately 10 to the power 90. The entropy goes like the area of the event horizon, which goes like the Schwarzschild radius squared, and the Schwarzschild radius goes like the mass, so the entropy is roughly proportional to the mass squared of the black hole.
0:35:28.6 Sean Carroll: We think that big galaxies like the Milky Way, like other big spiral galaxies in the universe, each one of them has at the center a supermassive black hole. Supermassive means a million solar masses or more. The biggest supermassive black holes have masses of something like a billion times the mass of the sun. So if a single million solar mass black hole has an entropy of 10 to the 90, and the whole universe, back before there were any black holes, had an entropy of 10 to the 88, then entropy has certainly gone up because there's a bunch of black holes in the universe. The little black holes don't matter. They're subdominant as far as entropy is concerned because the entropy goes like the mass squared. And so, we can basically do an inventory of all the big black holes in the universe, again, to within cosmological precision, to within an order of magnitude or two. The total entropy of the universe today, the co-moving volume in which we can observe, is about 10 to the 103. All of which is just to convince you that entropy is still going up. 10 to the 10, to 10 to the 88, to 10 to the 103.
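The Bekenstein-Hawking numbers quoted here are easy to check at the order-of-magnitude level. Here is a minimal sketch using only the textbook formula S/k_B = 4πGM²/ħc, which is just "entropy proportional to mass squared" with the constants filled in; the exact power of ten you get depends on rounding conventions, so treat the last digit of the exponent loosely:

```python
import math

# Physical constants in SI units
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy in units of Boltzmann's constant:
    S/k_B = 4*pi*G*M^2 / (hbar*c), proportional to mass squared."""
    return 4 * math.pi * G * mass_kg**2 / (hbar * c)

S_million = bh_entropy(1e6 * M_sun)   # a million-solar-mass black hole
S_billion = bh_entropy(1e9 * M_sun)   # the biggest supermassive black holes

print(f"10^6 M_sun black hole: S ~ 10^{math.log10(S_million):.0f}")
print(f"10^9 M_sun black hole: S ~ 10^{math.log10(S_billion):.0f}")
```

The million-solar-mass case comes out near 10^89, consistent with the quoted 10^90 to within an order of magnitude, and the mass-squared scaling adds exactly six powers of ten for a thousand times the mass. If you assume something like a hundred million galaxies each hosting a roughly billion-solar-mass black hole (a rough assumption for illustration, not an actual inventory), the total lands around the 10^103 quoted in the episode.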
0:36:36.6 Sean Carroll: Now eventually, if you keep going forward in time, those black holes will evaporate. In fact, all the matter in the universe will fall into black holes and the black holes are going to evaporate. This is what happens 10 to the 100 years from now. The last supermassive black hole, according to our best estimates today, will evaporate and go away. And then you might think the entropy is zero, or maybe you think it's big because of all the particles that were made from the black holes and stuff like that. Here's where we're in uncharted territory. We truly don't know, because quantum gravity matters in these circumstances. The estimate I like to put on the amount of entropy in the observable universe comes from quantum gravity, at least semi-classical quantum gravity. Exactly as Hawking proved that a black hole has entropy proportional to the area of its event horizon, if you live in a cosmological universe with a positive vacuum energy, a positive cosmological constant, like we think we probably do, even though we don't know for sure, then there's a horizon around us. And that volume of universe with a horizon around it has an entropy proportional to the area of that horizon. And that horizon is big. It gives us an entropy of something like 10 to the 122. Something like that.
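The 10 to the 122 figure can be reproduced the same way: the horizon entropy is the horizon area measured in Planck units. A rough sketch, assuming the de Sitter horizon radius is comparable to today's Hubble radius c/H₀ with H₀ ≈ 70 km/s/Mpc (the true asymptotic horizon differs by an order-one factor that doesn't move the exponent):

```python
import math

# Constants in SI units
G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
H0 = 70e3 / 3.086e22          # Hubble constant ~70 km/s/Mpc, converted to 1/s

l_p_sq = hbar * G / c**3      # Planck length squared, m^2
r_horizon = c / H0            # horizon radius ~ Hubble radius, m

# Horizon entropy in units of k_B: area over four Planck areas,
# S = (4*pi*r^2) / (4*l_p^2) = pi * r^2 / l_p^2
S_dS = math.pi * r_horizon**2 / l_p_sq
print(f"de Sitter horizon entropy: about 10^{math.log10(S_dS):.0f}")
```

This lands right around 10^122, the largest entropy our co-moving patch will ever reach in this story.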
0:37:56.3 Sean Carroll: So again, it's higher. The entropy goes up over time from 10 to the 10, to 10 to the 88, to 10 to the 103, to 10 to the 122. So it's a story of increasing entropy. And then past that, you just have an empty universe. Nothing in it. De Sitter space, technically, because you have a positive cosmological constant, and maybe that lasts forever. Maybe there's some future cosmological weirdness. Remember, we had Katie Mack on the show a while back talking about possible future scenarios for the universe. Doesn't matter. We're perfectly content with thinking about only the first 10 to the 100th years of the history of the universe, and we have a pretty good handle on what could happen there. Okay. So that's just a reminder of what we know about cosmology. And as you know from what I just said, entropy goes up over time, and we even have an understanding of why. As I said, Boltzmann's definition of entropy says it's the number of microstates that fit into a macrostate. And I've told you what those numbers are. So entropy increasing in time is something that makes perfect sense to us.
0:38:57.8 Sean Carroll: So now we can start thinking about the complexity. And for complexity, we're using the "I know it when I see it" kind of attitude. And I think that it's pretty clear what actually does happen at those early times. It's exactly like a cup of coffee. I'm trying to figure out the right way to say this. The early universe, as we currently understand it, was essentially featureless. It looked the same everywhere. It would look high entropy if you didn't know about gravity. This causes a lot of people confusion. Even professional cosmologists, they say, "Look, the early universe looks like it's in a thermal equilibrium state. That's a high entropy state. But of course, it evolves into a much higher entropy state. How can that happen?" And they confuse themselves by thinking about the expansion of the universe, giving it more room to grow. But that's all wrong. The thing is that that smoothness of the early universe is actually low entropy because gravity was really strong. And there's much more room to make black holes and inhomogeneities in the configuration of matter when gravity is strong than when gravity is weak.
0:40:05.4 Sean Carroll: The early universe really did have low entropy, but it was also simple, is the point. It's a low entropy and simple configuration. If I say, a second after the Big Bang, the universe is hot, dense, smooth, and rapidly expanding, then as long as I attach numbers to how hot it was, how dense it was, how rapidly it was expanding, you're done. I've completely described the universe to you. If we imagine that we're allowed, for our present purposes, to think of complexity as how much information it takes to completely specify the macroscopic configuration of the system, the answer is a very, very small amount of information for the early universe. Now, skip ahead to the future of the universe. We said what's going to happen is all the matter in the galaxies is going to fall into black holes, the black holes are going to evaporate into a thin gruel of particles, the universe continues to expand, so even that thin gruel of particles sort of dilutes away to nothingness, and we're left with literally nothing but empty space.
0:41:02.8 Sean Carroll: There it is. That's the description of the universe in the far future. It's empty space with a certain vacuum energy, with a certain cosmological constant. It's a very, very simple description. So again, simple at the beginning, simple at the end. Now, right now in the history of the universe is when the universe is complex, because if you wanted to describe what was happening in the universe today, you would have to tell me all sorts of details about galaxies and stars and maybe even individual planets and life on them and internets and books and podcasts and all these things. Those are all part of the macrostate of the universe and it's enormously complex. Even if I cannot give you a number, sorry about that. I would love to be able to attach quantitatively a number to that, but that is beyond my pay grade right now. That's something that is very worth thinking about trying to do. I can do it for the cream and the coffee. I can't really do it for the universe. It's too complicated. But clearly the phenomena are so dramatic that we don't need to be utterly quantitative about them to say that the universe has the same complexity growth curve that the cup of coffee does.
0:42:24.8 Sean Carroll: It starts low, it goes high today, and then it's going to diminish into the future. Everyone always wants to know, when I say this, how close are we to the peak right now? But I can't answer that really precisely, because we don't have this quantitative way of measuring it. It depends on your coarse graining. For the cream and the coffee, you can kind of just pick some scale at which you observe what's going on and then coarse grain within that scale. It's much harder to do with the universe. The dynamic range of interesting things going on in the universe is much larger than that. So, I am not able to tell you how close we are to peak complexity. Also, presumably, because it depends on questions like, how much life is there elsewhere in the universe? How much technological advancement could there be? Like, you and I are in a civilization that is really just at the beginning of technological advancement. So, if technology and interconnectedness of a culture contribute a lot to the quantitative complexity of the universe, then I have no way of measuring that right now.
0:43:35.5 Sean Carroll: Sorry. I can tell you one tidbit for what it's worth. You can forget about life and technology and things like that and just look at stars. The creation of stars is in some sense an example of a structure coming into existence that requires more information to describe: to precisely tell you how many stars there are, where they are, things like that. So it's a version of complexity. And you might think, well, maybe we're just at the beginning of the star-forming era in the history of the universe. And the answer to that one is no, we are not. The peak star formation rate was about four billion years after the Big Bang. Most of the stars that will ever be formed in the history of our universe have already been formed. We are in a slow-down era in terms of the formation of stars in our universe. Stars shine for billions of years; many of the ones that were made almost 10 billion years ago are still shining, especially the low-mass ones. So we're in a star-rich universe, but we're not making a lot more stars now. Most of the star formation has already happened.
0:44:48.3 Sean Carroll: Does that mean that we're past peak complexity? Again, that depends on details I don't know the answer to, but it's something to keep in mind. Okay. So the star formation thing is one factoid to keep in mind. Remember, I'm not presuming to give you the once-and-for-all comprehensive picture of complexogenesis or the evolution of complexity in the universe. We're groping toward that. This is how science gets done. This is science in progress. We're taking some facts, some data, some observations, some factoids, and we're asking, what is the bigger picture into which they all fit? So, peak star formation, okay, that's one factoid. Another is, think a little bit more in detail about this process of structure formation in the universe. The universe starts very smooth, simple. Galaxies and stars and planets form over time. Eventually they will be swept away. In the early days of my era of cosmology, one would do simulations of large-scale structure which only included dark matter. And the nice thing about dark matter is that, number one, it's most of the matter in the universe, but also, number two, it's simple. It's just gravity pushing it around. You don't get dark matter stars, supernovae, interstellar material, magnetic fields, any of those complicated things. These days, a really good cosmological simulation is gonna include more than just dark matter, but back in the day you would just look at dark matter. You start with a box of dark matter in your computer, not real dark matter, simulated dark matter, which is more or less smooth but not perfectly. And what happens is, as the universe expands and grows, gravity pulls together the slightly overdense regions into very overdense regions, and it evacuates the regions which are slightly underdense.
0:46:44.7 Sean Carroll: So, as I like to say, it turns up the contrast knob in the universe. From our perspective of apparent complexity, the complexity that you get just by looking at the system, complexity goes up under that process. But of course, it's a much richer story if you start including all the details, because you don't just increase the contrast, you start making new things that didn't exist before, not just galaxies but also stars, planets, et cetera. And there it's not just gravity, it's a balance of forces. And this is where things get really interesting in terms of complexity coming into existence. Gravity is kind of a dumb force. It's long-range, which makes it important in astrophysics and cosmology. It just accumulates the more matter you get. It doesn't cancel out like electromagnetism does. With electromagnetism you have positive charges and negative charges, so the Earth has no net electric field around it, but has a net gravitational field. And because there's only mass, there's not positive and negative charges, all gravity does is pull things together. If you have something like dark energy, it can push things apart, but in terms of particles or bodies, celestial objects, it just pulls them together. So there's not a lot of room conceptually for gravity all by itself to create truly complex structures. You get a little bit of an increase in apparent complexity, much like in the coffee cup, but you're not going to make a living being just out of the force of gravity. What you have in practice is that gravity does the initial work of pulling things together, but eventually other forces kick in. And this is, I mean forces in a broad sense.
0:48:31.3 Sean Carroll: I know that sometimes in particle physics we talk about the four forces of nature. We talk about gravity, electromagnetism, the weak nuclear force, the strong nuclear force. That's always been a fake. That's always been a sort of shorthand for saying that there are four different kinds of gauge bosons in the universe. But you also have the Higgs boson. Does that count as a force? Yeah. Kind of it does, kind of it doesn't. What about the Pauli exclusion principle that says that two electrons or two fermions more generally, cannot be in precisely the same quantum state. That leads to a force, it leads to the Pauli force, the electron degeneracy force, if you want to call it that, a Fermi pressure in some sense. That's, in some sense, the most important force in our everyday lives. That's what keeps solid matter solid, the fact that electrons cannot be in the same quantum state. Otherwise, atoms could just be exactly on top of each other. Atoms take up space because of the Pauli exclusion principle. That's a really important force in the scheme of things, even though it doesn't count as one of the four elementary particle physics forces.
0:49:42.7 Sean Carroll: The reason for that is that the word force is not actually fundamental in modern quantum field theory. There are quantum fields and they obey the equations that they obey. And we human beings later on in our macroscopic lives find it convenient to refer to certain things as forces and certain things as not forces, but who cares? That's not fundamental, that's not deep. In the broader conception of the word force, the fact that matter takes up space is crucially important. Planets congeal, coalesce out of the, I don't know, intergalactic, the primordial soup in some sense, and they only stop coalescing because they have pressure inside because they're solid. Stars coalesce, they stop coalescing for a different reason, not because they're solid, but because nuclear reactions start going on in the center of the stars. Those nuclear reactions give rise to heat, which gives rise to pressure, and you solve some equations. If you were an astronomy graduate student like I was, you would solve equations for hydrostatic equilibrium to understand stellar structure when you were in graduate school in a simple model. But in some sense, planets and stars look similar, but in some sense they're very different. They're supported by very different kinds of things, thermal forces versus simply material solid forces.
0:51:11.9 Sean Carroll: But I'm pointing this out just to emphasize that there's a transition of some sort, because we're taking clues from what we see in the universe, from simply gravity pulling things together to an interplay between a purely attractive force like gravity and a repulsive force like the pressure that you get inside a planet or a star. And it's this interplay, this competition between two forces, that allows complexity to really become interesting. If all you had in the world was gravity, you wouldn't make very, very complex, interesting structures, but we have a richer world than that. It's easy to say that, easy to point to that feature of the world; what we want to do is understand in more detail what precisely are the features of these competing forces that allow complexity to come into existence. I don't know the answer to that, so I'm not going to give you the answer, but that's the kind of thing we're thinking about. Okay, so that was fact number two.
0:52:07.7 Sean Carroll: Fact number three. The first fact was star formation slowing down. The second fact was competition of forces. The third fact is that the way we're talking about complexogenesis is a particular way, certainly my favorite way but not the only way. What do I mean by that? The way that I'm talking about it is to imagine that in the early universe, and we can debate exactly what the word early means there, there was a configuration that was pretty darn smooth, but not exactly smooth. It was not exactly smooth, and therefore, over cosmological time, regions that were just a little bit more dense than average could coalesce under the force of gravity and become denser, whereas other regions emptied out, and that leads to an increase in the apparent complexity of the configuration. But the later complexity is sort of inherent in the earlier configuration. In the approximation where all of the relevant physics is classical, which is a pretty good approximation on astrophysical scales, though there will always be exceptions, you could be Laplace's demon. You can imagine that the underlying physics is deterministic, and that's how simulations work. When you simulate large-scale structure, it's a classical simulation. You're not really doing quantum mechanics there. You have a bunch of point particles and they have gravity, and maybe you can somehow simulate stars forming and supernovae exploding and things like that, but it's all mostly classical. And so, whatever complexity you get at late times was kind of inherent there at early times, because the laws of physics are deterministic.
0:53:58.6 Sean Carroll: So what we're saying is that the initial conditions have all of the capacity for the complexity to eventually come into existence, and all that's going on is that the ordinary laws of physics are bringing to life that sort of potential complexity. This is a very different picture than someone like Stephen Wolfram would advocate. Now, Wolfram, another Mindscape guest, we didn't talk too much about this aspect of his work, but one of his famous claims is that complexity in the universe can be thought of as arising in a way analogous to cellular automata. It's a little vague, but that's okay, you can be vague in the early days of constructing a bold new physical model. You know the famous pictures that Wolfram always has in his books and in his talks of these two-dimensional grids, where you start at the top with some initial condition and then you evolve downward in time, because he's a computer scientist, and you get black and white pixels lighting up according to some rule. And different cellular automata will have different rules. And what Wolfram was able to show is that you can start in a cellular automaton from extremely simple initial conditions: if you have a row of squares and the squares are either white or black, maybe all of them are white except for one that is black. That's very, very simple initial conditions. And then you apply the update rule of the cellular automaton to that, and you get completely bizarre-looking, chaotic, complex behavior later on. So, that is super interesting in its own right. It's showing how complexity can arise out of simplicity. But there, all of the work is being done by the dynamical law, by the rule that takes you from one configuration at one moment in time to the next configuration at the next moment in time. There the complexity was not inherent in the initial condition. It's inherent in the rules.
It's absolutely allowed to contemplate that the complexity in the real world comes from something like that. It's just completely incompatible with everything that we know about the fundamental laws of physics today.
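The cellular-automaton picture is easy to play with directly. Here is a minimal sketch of Wolfram's rule 30, one of his standard examples: a single row of black and white cells, the simplest possible initial condition (one black cell), and an update rule applied over and over. The triangle of seemingly chaotic structure that grows out of it is exactly the "complexity from the rules, not the initial condition" phenomenon described above:

```python
def rule30_step(row):
    """Apply Wolfram's rule 30 to one row of cells (a list of 0s and 1s),
    treating cells beyond the edges as white (0).
    Rule 30: new cell = left XOR (center OR right)."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

width, steps = 31, 15
row = [0] * width
row[width // 2] = 1    # simplest initial condition: one black cell

# Print each generation as a line; time runs downward, as in Wolfram's pictures
for _ in range(steps):
    print("".join("#" if cell else "." for cell in row))
    row = rule30_step(row)
```

Swapping in a different update rule (rule 90, rule 110, and so on) gives qualitatively different pictures from the same trivial initial condition, which is the point: here the richness lives in the dynamics.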
0:56:25.7 Sean Carroll: The laws of physics as we know them are deterministic except for wave function collapse. Of course, that's a whole nother thing. But they're mostly deterministic. And so cosmologically speaking, if you evolve the universe from a few years after the Big Bang to a few billion years after the Big Bang, it's very ordinary deterministic laws of physics that are giving rise to the increased complexity because it's all there in the initial condition. So maybe a Wolfram-esque attitude is going to eventually be the right one. But I'm always in this mode of being, what John Wheeler called, radically conservative. You start conservative in the sense that you stick with everything you know about the laws of physics. But then you're radical in that you push them as far as you can. So, we're sticking with known laws of physics mostly. And in that picture, complexity does not arise from the laws. It arises from the initial conditions. Now, I keep hesitating and stumbling because the world is fundamentally quantum mechanical. The things I just said are only true classically. And quantum mechanically, things are a little bit different. Wave functions do collapse. Or if, like me, you're an Everettian, decoherence happens and the wave function of the universe branches. And it turns out that this branching of the wave function is not just something that we can say, yeah, it happens occasionally but it's not important.
0:57:51.5 Sean Carroll: In very interesting cosmological models, branching of the wave function is 100% necessary for the story of complexity that we're telling right now. In fact, this is kind of a cool thing that I don't know if anyone has ever really emphasized when talking about cosmology. If you believe in inflation, if you think that the universe underwent this period at very early times when it was dominated by some almost constant dark energy, some super high energy dark energy, whether it's an inflaton or something else, then we have a story about how that initial configuration evolves into the present universe. And you may have heard this claim that galaxy formation ultimately comes from quantum fluctuations in the early universe. What does that mean? That means that the quantum state during inflation is incredibly simple. And like we already said, it's incredibly low entropy. It's not just approximately smooth, it's exactly smooth. That's the difference. The quantum state of the universe during inflation is basically the vacuum state. It's basically as simple as it could possibly be.
0:59:00.8 Sean Carroll: So, where does all this initial condition data from which the later galaxies and planets arise, where does it come from? The answer is, branching and decoherence. The universe essentially observes itself. If you want to split the universe into sort of an environment part and a system part, maybe the system part is the large-scale fluctuations in density, the environment part is the small-scale configuration of individual photons and things like that. The initial wave function of the universe during inflation is kind of like a simple harmonic oscillator. It's a vacuum state. It's perfectly featureless. It's simple and so forth. And then after reheating, when you turn all that inflation energy into hot, dense matter, and part of that matter acts like an environment and part acts like the system that has density fluctuations and things like that, you branch the wave function of the universe. And what you're doing is you're branching an initially simple overall wave function of the universe into a combination of branches. And in each branch, things look complex and specific.
1:00:13.8 Sean Carroll: So, this sounds a little bit kind of wild and new agey, but if you hang around cosmologists and they ever show you the picture of the cosmic microwave background. If you look at the image that you get from the Planck satellite or the WMAP satellite of the cosmic microwave background, density fluctuations in the early universe, this is data. This is really what the universe looked like a few hundred thousand years after the Big Bang. Very tiny fluctuations in temperature, one part in 10 to the five from point to point. And it looks kind of random. It looks like some blotches of little hot spot, cold spot, et cetera. And it is random statistically. We can be very, very specific about the power spectrum, the probability distribution of fluctuations. So, that map that you're seeing of density fluctuations or temperature fluctuations in the cosmic microwave background, we think is one particular realization of a random process. And in this language of wave functions and branching, what that means is it's one particular observation measurement of the initially featureless wave function of the universe. So, the early universe was truly simple in this picture, and we are only living on one branch of it that looks potentially complex.
1:01:39.3 Sean Carroll: So quantum mechanics and the splitting of the universe into branches by decoherence plays a crucially important role in this story of complexogenesis. We're still thinking about complexity so far in this very lowbrow sense, this very simplistic way: just, how complex does a system look? We're not worried about functions or interdependent motions of subsystems, much less goals or adaptations or anything like that. But we're not done thinking about this sort of dumb version of complexity yet. So we said that it seems sensible and true that a closed system will start in low entropy and the entropy will go up until it hits thermal equilibrium. It starts simple, then complexity can develop, and then the complexity will eventually go away. You will look simple again at the end of the day. This is true whether you are a cup of coffee with cream or whether you are the whole universe. So there's some robustness. But it's not inevitable. So think about the cup of coffee again. Think about not stirring the cream into the coffee. Think about just letting it sit there. You're not at zero temperature. Your cup of coffee is not frozen.
1:02:52.8 Sean Carroll: So individual coffee molecules and cream molecules will move around. They'll have their random thermal motions. If you waited long enough, and it might be a long time to wait, but we have time, the cream and the coffee will mix into each other. They will gradually diffuse into each other. Unlike when you stir it with a spoon, there would be a sort of very gradual, simple transition from all the cream on top and all the coffee on the bottom to everything being mixed together. If you then plotted on a graph some quantification of the complexity of that configuration, which you can do in that case, this is what Scott Aaronson, Lauren Ouellette, and I did in our paper. We defined a quantity called the apparent complexity of an image, which just says: coarse grain the image into chunks, and then compress it. Ask, what is its algorithmic compressibility? There's this famous notion from Kolmogorov and Chaitin called algorithmic compressibility. It says, given some string, and an image is really a string. You can take an image and just list what the value of the image is in every pixel.
1:04:08.9 Sean Carroll: Given a string, what is the shortest possible computer program that would output that string? So, if you had a simulation, this is what we actually did in our paper. You have a simulation, what we call the coffee automaton. You have a grid, n by n, and you have some white pixels and some black pixels, and they're going to mix together. Just like Laplace's demon, if you kept track of every white pixel and every black pixel, there'd be equal numbers of them, and you just have to tell me where every pixel was. The total amount of information you would give me to specify the microstate of the system never changes over time. You just have to tell me, for every pixel, what is its value. But what we did is look at coarse grained versions of that, where we said, okay, we have an n by n grid, but we're going to take some chunk of it, a 10 by 10 little part of it, and chunk the big n by n grid into 10 by 10 subgrids, and then average what's going on. This is exactly what your eyeballs do when you look at the cream in the coffee.
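That procedure, coarse grain and then compress, can be sketched in a few lines. This is not the actual code from the coffee-automaton paper, just an illustration of the idea, using gzip-style compression (via zlib) as a computable stand-in for the uncomputable Kolmogorov complexity; the grid size and block size here are arbitrary choices:

```python
import random
import zlib

random.seed(0)
N, B = 60, 10   # grid size, coarse-graining block size

def compressed_size(grid):
    """Compressed byte length of a grid: a computable proxy
    for Kolmogorov complexity."""
    data = bytes(cell for row in grid for cell in row)
    return len(zlib.compress(data, 9))

def coarse_grain(grid, b):
    """Average the grid over b-by-b blocks, as the eye does: keep the
    rounded mean shade (0-9) of each block instead of every 'molecule'."""
    n = len(grid)
    return [[round(sum(grid[i + di][j + dj]
                       for di in range(b) for dj in range(b)) / (b * b) * 9)
             for j in range(0, n, b)]
            for i in range(0, n, b)]

# "Separated" state: cream (1) on top, coffee (0) on the bottom
separated = [[1] * N if i < N // 2 else [0] * N for i in range(N)]
# "Fully mixed" state: every cell independently cream or coffee
mixed = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

print("fine-grained:   separated", compressed_size(separated),
      " mixed", compressed_size(mixed))
print("coarse-grained: separated", compressed_size(coarse_grain(separated, B)),
      " mixed", compressed_size(coarse_grain(mixed, B)))
```

Fine-grained, the random mixed state looks maximally complex to the compressor; coarse-grained, both the separated and the fully mixed states compress down to almost nothing. Apparent complexity is high only in between, when the half-mixed tendrils of cream have structure at the coarse-grained scale.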
1:05:13.9 Sean Carroll: You don't see every atom or every molecule. You see a coarse grained version of, "Oh, that's pretty dark, that's pretty light, there must be a certain amount of coffee, a certain amount of cream," et cetera. So then, what this captures is the difference between a random number and an ordered number. Let's pause to think about this. Pause to think about the concept of apparent complexity, which is what Scott and Lauren and I defined. If you have a number, let's say you have a billion digit number, and the number is a billion zeros all in a row. Is that simple or complex? Well, that's pretty simple. I just gave you the whole number. I just told you zero, zero, zero, zero, zero. By ordinary standards, by the standards of Kolmogorov complexity, I could output that number by writing a little computer program that said print quote "zero" and then do that a billion times, or a quadrillion times, whatever. However long the number is, the computer program to print it out is pretty short.
1:06:15.9 Sean Carroll: Whereas, what if you had just a random number, a random billion-digit number, a billion digits of the decimal representation of the number? So the number is 3,580,199, et cetera, for a billion different digits. The shortest program that outputs that looks like print, quote, and then the number. You can't get shorter than that. And so it's at least a billion characters long. And if it's a quadrillion-digit number, then it's a quadrillion characters long, et cetera. So that's why the algorithmic complexity of Kolmogorov and Chaitin is interesting, because it's asking the question, is there sort of a computer science-y way of compressing your description of how to output that number? If it's just a random number, then no, there's no way to compress it. You have to just tell me the whole number. But a random number doesn't have any structure to it. It doesn't feel to us like complexity. So, that's why we're defining apparent complexity. So in the case of the digits of the number, let's say you have a billion-digit number, but you don't tell me every digit. Rather, you chunk it into sub-numbers, let's say 100 digits long. And then you take the average of what's going on in those 100 digits. If it's truly a random number, then when you take the average of those 100 digits, it's going to be five or four and a half or whatever, maybe four or five, fluctuating up and down.
1:07:51.0 Sean Carroll: But there's much less variation in the average from place to place of a random number. Whereas if there's sort of a structured number that has zeros like 10 times in a row, but then one 10 times in a row, and two 10 times in a row, where there's a little bit of compressibility, then even after coarse-graining, there's structure in the output. Sorry, I should have been more clear about this. I didn't complete the thought with the averaging. If you average over 100 digits and you just get the same average again and again, then the Kolmogorov complexity of that coarse-grained number is small, even though the Kolmogorov complexity of the original billion-digit number is big. You have to tell me what every one of the digits is. But if the average of every 100-digit substring is the same, then it's easy to output that. You can just output the same number again and again. Just like if the whole number is just zero again and again and again. So the apparent complexity of a random number is low. The apparent complexity of a string of a billion zeros is low. The apparent complexity of an intermediate number that has some structure but is not completely random or completely ordered will be high.
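To make the digit example above concrete, here is a small sketch in Python. This is an illustration, not the paper's actual code: zlib's compressed size stands in for algorithmic complexity, coarse-graining replaces each 100-digit chunk by its rounded average, and the specific test strings (a run of zeros, uniform random digits, and a made-up "structured" number built from long runs) are my own choices.

```python
# Hedged sketch of apparent complexity for digit strings: compressed size
# stands in for algorithmic complexity, and coarse-graining replaces each
# 100-digit chunk by its rounded average digit.
import random
import zlib

def compressed_size(s):
    """Length of the zlib-compressed string: a proxy for its complexity."""
    return len(zlib.compress(s.encode()))

def coarse_grain(s, chunk=100):
    """Replace each chunk of digits by the (rounded) average of its digits."""
    return "".join(
        str(round(sum(int(c) for c in s[i:i + chunk]) / chunk))
        for i in range(0, len(s), chunk)
    )

random.seed(0)
ordered = "0" * 100_000                         # a hundred thousand zeros
noise = "".join(random.choice("0123456789") for _ in range(100_000))
# "structured": long runs of repeated digits, so local averages vary in space
structured = "".join(str(d) * 500 for d in random.choices(range(10), k=200))

for name, s in [("ordered", ordered), ("random", noise), ("structured", structured)]:
    print(name, "raw:", compressed_size(s), "coarse:", compressed_size(coarse_grain(s)))
```

The raw compressed sizes come out ordered < structured < random, while the coarse-grained random string compresses down to almost nothing, since its chunk averages are all about four and a half: its apparent complexity is low, just as described.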
1:09:10.4 Sean Carroll: So it's just like the cup of coffee. It's capturing what we want to do. Anyway, this was a slightly technical digression. That's not quite the word I'm looking for. Detour, I guess, from what I'm trying to talk about here. The point is, we do have, in the case of the coffee automaton, a very quantitative way of telling you what the value of the complexity is. There's one little footnote there. And this is what we can do in a solo podcast that I wouldn't have time to do in a seminar or something like that or a public lecture, I guess. In a seminar, I better do this. There is a footnote to Kolmogorov complexity, which is that it is uncomputable, which is a problem. Kolmogorov complexity is uncomputable for cool computer science reasons that go back to the halting problem of Alan Turing and his friends. You might think that for some string of numbers, you want to calculate its Kolmogorov complexity. So you say, what is the shortest program that will output it? Well, I don't know. But if I have a certain well-defined programming language, I could simply cycle through all the computer programs.
1:10:24.6 Sean Carroll: I could start with the shortest program, then the next shortest program, and try all of them until I get to one that outputs this number. That would be my algorithm for finding the shortest program to output this number. The problem with that algorithm is it is doomed to fail because of what is called the halting problem. In computer science, famously, there is no general purpose way of looking at a computer program and telling me whether it will ever halt or not. So, if I try to just cycle through all of the computer programs, starting with the shortest one and letting them get longer, I might hit ones that seem to go for a long time and they're not stopping. And I might want to say, uh-oh, is this just an endless loop that I'm stuck in in this computer program? In which case, I could stop it, abort the process, and start a new computer program. But in general, I never know. Maybe it's just taking me a really long time. One of the features of Kolmogorov complexity is it depends on the length of the computer program, but not on the amount of time that it runs.
1:11:24.1 Sean Carroll: So, from a formal, strictly mathematical perspective, this makes the Kolmogorov complexity uncomputable. It can, however, be estimated in a large fraction of interesting cases. That's why, in practice, you can have efficient, useful compression algorithms. And so rather than actually calculating the Kolmogorov complexity of a coarse-grained image to define apparent complexity, we just compressed it. We used gzip, and then we tested that the specific compression algorithm that we used didn't really matter. You get the same answer in all the different cases. Okay. Good. All of which is to say, in the simple-minded case of just the grid of squares being zero or one, white or black, given a coarse graining and given a compression algorithm, we can calculate the apparent complexity. And given a dynamical scheme for letting that system evolve over time and having the coffee molecules and the cream molecules mix in with each other, we could plot the growth and then eventual decay of complexity. And the prediction is, the complexity starts low, it goes up, and then it goes back down again. But like I said, you can imagine dynamics for the cream in the coffee where it doesn't go up.
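The robustness check described above, that the specific choice of compressor doesn't really matter, is easy to reproduce in miniature. A hedged sketch using three compressors from Python's standard library (gzip itself is built on zlib's DEFLATE); the data and sizes here are my own toy choices:

```python
# Compare three stdlib compressors on obviously ordered vs obviously random
# data; the exact byte counts differ, but the qualitative verdict agrees.
import bz2
import lzma
import random
import zlib

random.seed(0)
ordered = bytes(10_000)                                      # ten thousand zero bytes
noise = bytes(random.randrange(256) for _ in range(10_000))  # incompressible junk

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    print(name, "ordered:", len(compress(ordered)), "random:", len(compress(noise)))
```

Every compressor shrinks the ordered bytes to a tiny fraction of their original size and fails to shrink the random bytes, so any of them works as a stand-in when estimating apparent complexity.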
1:12:44.3 Sean Carroll: If you just have cream and coffee molecules mixing in with each other, like diffusing independently, with no spoon stirring it or anything like that, then maybe it doesn't ever look very complex. And this is the reason why we have to revise the paper that we initially submitted: there was a bug in our way of measuring the complexity of these images. And we fixed it, but we didn't revise the paper yet. So that's upcoming, I promise. One of the reasons why I keep talking about it in public is to guilt myself into actually doing the work of revising the paper and putting the revised version back online. Here's the trick. In the coffee automaton, it's just a bunch of white squares and black squares and some update rule that says how the white and black squares move throughout the grid. You can come up with different rules. One rule might be that for every two nearest neighbors, there's a percentage chance per time step that they will interchange with each other. If it's two black squares, the interchange doesn't make any difference. But if they're white and black and they interchange, then it does make a difference to the overall image.
1:13:53.9 Sean Carroll: And you can just run that algorithm. And guess what? You never get any complexity in that particular algorithm. That is kind of like the particles in the real cream and coffee diffusing into each other. It's just smooth and featureless and it never looks complex. Eventually, and this will appear in the revised version, we came up with a different algorithm, which we call the tectonic model. It says the following: rather than looking at two nearest neighbors and asking whether there's a percentage chance that they will interchange with each other, you look at a finite-sized block of the grid, and you randomly choose both the size of the block and the orientation. Is it horizontal or vertical? And then you randomly move it to the left or to the right. We were careful to set up rules so that you keep the total number of white and black squares constant. Basically, you put it on a torus. And trust me, we did all those things correctly. We finally got it right. And the difference there, between the sort of individual nearest neighbor interactions and the tectonic model, where you have large-scale coherent interaction, turns out to be crucially important.
1:15:07.8 Sean Carroll: The upshot is that the nearest neighbor interaction model doesn't ever become complex. The complexity always stays low as entropy goes from zero to maximum. The tectonic model, where there are these coherent motions, large-scale agreement between different pixels about what they're doing, that's when you get complexity. That is much more analogous to like putting the spoon in there and stirring it. Even though there's no external spoon coming in, it's still internal dynamics to the coffee cup, but it's sort of coherent dynamics. There's large-scale effects. To us, that's extremely provocative, that result. This is saying that some kinds of dynamics are going to make complexity come into existence, some kinds are not. The existence of sort of long-range forces or long-range coherence seems to be playing a role. Again, we don't have the once-and-for-all list of here are the qualities you need in your system, the properties in order to get complexity to develop, but this is a clue. This is a hint. This is pushing us in a certain direction. I suspect that there's some mathematical result that says that complexity must be low at low entropy because there's just not a lot of different things the system can do, and complexity must be low at high entropy because everything is in equilibrium and pretty smooth, and complexity is allowed to be high at medium entropy, but whether or not it actually achieves that large complexity along the way from low entropy to high entropy depends on the details, and that's what we would like to better understand.
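For concreteness, here is a toy reconstruction of the two kinds of update rule being contrasted, written from the verbal description above rather than from the actual paper code, so the details (one-step shifts, equal treatment of rows and columns) are my own simplifications. Both rules conserve the number of cream and coffee pixels and live on a periodic grid, the torus mentioned above.

```python
# Toy coffee automaton: 1 = cream pixel, 0 = coffee pixel, on an n x n torus.
import random

def nearest_neighbor_step(grid):
    """Diffusive rule: pick a random cell, interchange it with a neighbor."""
    n = len(grid)
    i, j = random.randrange(n), random.randrange(n)
    di, dj = random.choice([(0, 1), (1, 0)])
    i2, j2 = (i + di) % n, (j + dj) % n
    grid[i][j], grid[i2][j2] = grid[i2][j2], grid[i][j]

def tectonic_step(grid):
    """Coherent rule: shift a randomly sized block of rows (or columns) by
    one step, wrapping around the torus, so many pixels move together."""
    n = len(grid)
    start, size = random.randrange(n), random.randrange(1, n + 1)
    shift = random.choice([-1, 1])
    if random.random() < 0.5:                   # horizontal shift of a row block
        for r in range(start, start + size):
            row = grid[r % n]
            grid[r % n] = row[-shift:] + row[:-shift]
    else:                                       # vertical shift of a column block
        for c in range(start, start + size):
            cc = c % n
            col = [grid[r][cc] for r in range(n)]
            col = col[-shift:] + col[:-shift]
            for r in range(n):
                grid[r][cc] = col[r]

# Initial condition: all the cream on top, all the coffee on the bottom.
n = 10
grid = [[1] * n for _ in range(n // 2)] + [[0] * n for _ in range(n // 2)]
cream_before = sum(map(sum, grid))
for _ in range(500):
    tectonic_step(grid)
print(sum(map(sum, grid)) == cream_before)  # True: pixel counts are conserved
```

To reproduce the comparison in the episode, you would run each rule from the same initial condition and plot the compressed size of the coarse-grained grid over time; the claim is that only the coherent, tectonic-style dynamics produces a hump of apparent complexity along the way.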
1:16:43.4 Sean Carroll: Okay. So clearly, the universe does this. The universe has large-scale coherent motions. That's what gravity does. Gravity is large-scale and coherent, like the tectonic model in the coffee cup. So, we're seeing like a little bit of a hint of maybe some answers to the question, what are the properties you need in the laws of physics in order for complexity to arise? Even in this, again, the very most simple system and the very most simple definition of complexity that you may have. So that's the result that we have in hand. What I've been trying to do for a while and I'm still working on is pushing beyond that result to be more realistic. To think really about statistical mechanics and different forces and entropy and quantifiable formulas, and we have a bunch of ideas and none of them is quite ready for prime time yet, so I'm not going to be laying any of those ideas on you, but I'm going to sort of give you some hints in the remaining time about how I see all of this big picture fitting together.
1:17:50.0 Sean Carroll: So basically, here's what I see. I think it's a story of information. I haven't really used the word information that much yet. I've used it in the sense of saying how much information you need to give me to specify some configuration or something like that in the definition of complexity. But information is another one of these words, like entropy or complexity, that has different definitions and appears in different avatars or different guises. And we have to be very, very clear on what we mean. In particular, let me explain something that sadly never, not never, but rarely gets explained and is very confusing. What is the relationship between entropy and information? And the reason why I say it as explicitly as this is because a computer scientist and a physicist will give you opposite answers to this question. They both mean true things, but slightly different things.
1:18:48.2 Sean Carroll: So computer scientists or even, I should say, maybe engineers, communications people, they hearken back to Claude Shannon. Claude Shannon is the founder of information theory, and he had a very specific question in mind. He was working at Bell Labs. He was interested in knowing what is the best way to send signals across the transatlantic cable in a way that would convey information while being robust against noise. If you try to convey a signal across large distances, you might get noise in there. It might degrade because of random fluctuations that you don't have any control over. And Shannon invented formulas for the information content of a message. And he realized, and there are stories about this involving conversations with John von Neumann, et cetera, that the formulas look mathematically just like the formulas for entropy from statistical mechanics, in the following sense and for the following reason. What Shannon realized is if I'm sending a signal with a certain number of bits, and I want to maximize the information content in that signal, what do I have to do? Well, let's say the signal is just zeros and ones. What I'm interested in is learning something I didn't know before. That's what information really means.
1:20:05.2 Sean Carroll: If I already knew it, if you tell me a fact that I already knew, I don't really gain a lot of information. So, the example I like to use: if someone says the sun rose in the east this morning, and they are very reliable, you believe what they say. Okay. So they said a sentence to you, but you didn't really learn a lot of information; you already knew that the sun rises in the east basically every day. That was already your expectation. But if that same reliable person tells you the sun rose in the west this morning, and you think they are reliable, and they're not joking with you, and they didn't make a mistake, suddenly you've learned a lot. You've learned a lot because even though the message was just as long, the word east and the word west are exactly as long as each other, it conveys much more information because it is so surprising. And so Shannon worked out that if you want to convey the most information in a message, what you want to do is make every bit or every word or every part of the message as surprising as it can be.
1:21:07.8 Sean Carroll: Now, they can't be perfectly surprising. You know you're going to get something. So what he said is, imagine you have a frequency of getting different signals, or if you want to put it this way, a probability distribution. So if you're getting zeros and ones, what's the probability the next digit is going to be a zero or the next digit is going to be a one? And what he realized is that to maximize the information content, you want that probability to be uniform. You want it to be maximally spread out. Like in the English language, if you're just getting letters one by one in a telegram message, when you get the letter Q, usually the letter U is going to follow it. When you get a word beginning with TH, maybe it'll be "the." It need not be "the," there are plenty of words beginning with TH, but that's the most common word, and therefore you're learning less from getting that signal than you would if every little word or every little symbol in your message was equally probable. And he quantified this, and he quantified it in a way that led him to a formula, which was exactly the formula for entropy that Boltzmann and Gibbs and their friends came up with.
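The formula in question is Shannon's entropy, H = -sum of p log2 p over the symbols. A quick sketch, with made-up distributions, showing that the uniform distribution maximizes it:

```python
# Shannon entropy in bits per symbol: H = -sum(p * log2(p)).
from math import log2

def shannon_entropy(probs):
    """Entropy of a probability distribution, skipping zero-probability terms."""
    return -sum(p * log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # every symbol equally likely: maximal surprise
peaked = [0.97, 0.01, 0.01, 0.01]    # one symbol dominates: little surprise

print(shannon_entropy(uniform))      # 2.0 bits per symbol
print(shannon_entropy(peaked))       # about 0.24 bits per symbol
```

Two bits per symbol is the most you can do with a four-symbol alphabet, and you only get it from the uniform distribution, which is Shannon's version of making every part of the message as surprising as possible.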
1:22:16.8 Sean Carroll: And the way it works is, high entropy is high information. High entropy in the statistical mechanical sense is everything spread out, everything's equally likely, a uniform probability distribution. If you're low entropy, then your probability distribution is localized and peaked on some certain set of configurations. And if you take your messages from a highly localized, highly peaked probability distribution, none of them are surprising. You're not learning a lot. So in information theory, as Claude Shannon thinks about it, high entropy means high information, because high entropy means a uniform probability distribution. That means every little bit of your message is meaningful. It contains some non-trivial information. It's not just redundant or predictable from the start. Okay. Communication theory, high entropy equals high information. Physics has a very different point of view. In fact, the version of entropy that we've been talking about, the one that Boltzmann had, remember, there are different versions of entropy, just like different versions of information and complexity. So, Boltzmann's definition of entropy is the logarithm of how many microstates are in your macrostate, where your macrostate is the set of microstates that macroscopically look the same to you. High entropy means there are many, many microstates in your macrostate. Low entropy means that there are very few microstates in your macrostate.
1:23:44.9 Sean Carroll: So let's ask the question, If I tell you what macrostate I'm in, how much did I just tell you about the microstate? How much information is there about the specific microstate in the macrostate information? Well, the answer is, if you're in a high entropy macrostate, and I tell you you're in a high entropy macrostate, you know very little about the microstate, because by definition of entropy, there are many, many, many microstates that could have been in that high entropy macrostate. Whereas, if I tell you that you're in some specific low entropy macrostate, there aren't that many microstates in that low entropy macrostate, so you have learned a lot. You have gained a lot of information by being told that your physical system is in a low entropy macrostate. So to physicists, or at least physicists in this sort of Boltzmannian mode, low entropy means high information, and high entropy means low information. So the communication theorists or information theorists think that information and entropy are in the same direction. Physicists, statistical mechanics physicists anyway, thinking in Boltzmann's way, think that information and entropy are opposite to each other.
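The Boltzmann side of this can be made concrete with a toy system of 10 coins, where the macrostate is the total number of heads and W counts the microstates compatible with it. The example and numbers are mine, not from the episode:

```python
# Boltzmann entropy S = log W for a 10-coin system, where the macrostate is
# "how many heads" and W is the number of microstates in that macrostate.
from math import comb, log

n = 10
for heads in (0, 1, 5):
    w = comb(n, heads)   # number of ways to arrange that many heads
    print(f"{heads} heads: W = {w}, S = log W = {log(w):.2f}")
```

Being told the system is in the 5-heads macrostate (W = 252, the high-entropy case) leaves 252 possible microstates open, while being told it is in the 0-heads macrostate (W = 1) pins the microstate down completely: low entropy, high information, exactly the physicist's usage above.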
1:25:09.1 Sean Carroll: And I'm giving you that because why not? I might as well help clarify something that you might get confused about. But we're being physicists now. We're being Boltzmannian statistical mechanics now. So we're using a language where a low entropy state contains a lot of information. To be slightly more specific about that, to tell you that the system, whether it's the universe or the coffee cup or a box of gas or whatever, is in some specific low entropy state is to convey to you a great amount of information about its possible microstates. It's very, very constraining, very, very specific. You've learned a lot. And the reason why we want to talk that way is because we know that the early universe was in a very, very specific low entropy state. So that fact, telling you the state of the early universe, conveys an enormous amount of information. And we know that as the universe evolves and time goes on, entropy will go up. So we go to a state where just knowing the macrostate of the universe has almost no information in it. In other words, the available information, let's define the available information as the difference between the maximum entropy the state could be in, in this Boltzmannian sense, and the actual entropy that it has right now.
1:26:36.9 Sean Carroll: So if it's in maximum entropy, the available information is zero. If it's in a low entropy state, very, very low compared to maximum, the available information is basically equal to the maximum entropy. And in that conception, what is happening in the evolution of the universe is that there is a resource that we have to use, that we have in the sense that we're able to use, a resource of available information. I'm saying it this way because usually in physics we would talk about free energy. Free energy is sort of the ordered amount of energy. This is what Schrodinger would have talked about when he wrote "What is Life?" He talked about negentropy. It's kind of a silly word that I don't like to use. The entropy is never negative, he just meant the difference between the maximum entropy and the actual entropy, the amount downward you go from the maximum to where you are, which I'm calling the available information. He was not thinking about information theory at the time. But if you multiply that entropy by the temperature, you get an energy. And basically the entropy times the temperature is the useless energy. And the difference between T times S and the total energy is the useful energy, the free energy, the energy that we can use to do work. And what Schrodinger's point was when he wrote "What is Life?" is that a living creature uses free energy from the sun or from wherever to metabolize and to self-repair and to learn and to do things. And that is the characteristic of what life is.
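The bookkeeping just described can be written as two one-line formulas: available information is the gap S_max - S, and free energy is F = E - T*S, so T*S is the useless energy and F is the part still able to do work. The numbers below are made up purely for illustration:

```python
# Available information and free energy, per the definitions above.
def available_information(s_max, s):
    return s_max - s                  # the resource that gets used up over time

def free_energy(e_total, temperature, s):
    return e_total - temperature * s  # E - T*S: energy still able to do work

s_max, s = 100.0, 30.0                # hypothetical entropies, far from equilibrium
print(available_information(s_max, s))                    # 70.0
print(free_energy(e_total=500.0, temperature=2.0, s=s))   # 500 - 2*30 = 440.0
print(available_information(s_max, s_max))                # 0.0: equilibrium, nothing left
```

As the transcript notes, the entropy-gap version is the more general bookkeeping; multiplying by a temperature to get free energy only makes sense when there is a heat bath to define that temperature.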
1:28:22.4 Sean Carroll: Life is not the only system that does that. A fire does that. You have some wood and you light it on fire. It is converting free energy into useless high entropy dissipated energy. So it's easy to do. But a living creature will do that kind of thing in an orderly, constructive way. This is all very vague because the definition of life is all very vague, but you get the point. As you burn something or metabolize your food or mix cream into coffee, you're losing, you're using up a resource. That resource is the available information provided by the difference between the thermal equilibrium entropy maximum, maximum entropy, and the actual entropy that your system is in. The more you burn and dissipate, the more you increase the entropy, the less available information you have. If you're in a thermal system with a temperature, you can convert that to a free energy way of thinking. But I think the information theory way of thinking is actually more general, more robust. Sometimes there's no heat bath that you're in. Sometimes you want to think more generally than that. So, forget about temperature. Just think about entropy and information and define the available information to be that difference. Maximum entropy versus actual entropy, we use that up over time. Why am I dwelling on this? Because I think that as we go past the simple-minded apparent complexity of the coffee cup or the large-scale structure of the universe, to think about more sophisticated versions of complexity, what's really going on is subsystems of the universe coming up with better ways to take advantage of that resource, of the available information resource. We are using it up. We are chewing our cud and we are sweating and we're doing global warming and we're basically increasing the entropy of the universe, mixing cream into our coffee. Sometimes we're using that power for good.
1:30:27.1 Sean Carroll: We're using that resource to do something interesting and complex. So my hypothesis is that the story of complexogenesis is going to be a story of stages. We do things in more and more sophisticated ways that we recognize as more and more complex. So let me put a little bit more meat on those bones, but honestly not that much meat. We're going to be a little vague here because I think that the understanding actually is vague. The story we told about the coffee and the cream, the apparent complexity, is a very lowbrow version of complexity. There is information, you are using it up as you are increasing the entropy of your coffee and cream system, but you're using it up in kind of a dumb, simple-minded way. The next level, where you use that information a little bit more cleverly, is maybe what we could call metastable complexity. And here the typical physicist wouldn't even be thinking about information, but I think that, as part of the bigger picture, information is the right way to think about it. I'm thinking about the difference that we already mentioned between a planet and a star.
1:31:42.1 Sean Carroll: So, planets and stars are both more or less stable configurations of matter over billion year timescales. Eventually stars die, they don't last forever, but over a very long time, stars just sitting there looking more or less the same from moment to moment, and a planet is just sitting there looking more or less the same from moment to moment, but for very different reasons. The planet is not using up any resource. It's just mechanically stable. It's a kind of a simple-minded brute way of being more or less the same from moment to moment. The star is using a resource. It's stable because it has fuel inside in the form of either protons or some other light nuclei that can undergo nuclear fusion into heavier nuclei through various processes that astronomers love to talk about. But there's only a finite amount of fuel. If you turn all your protons into helium, you've used up your protons. Eventually you'll turn them into iron or something like that, but maybe that's much later on. And then you stop being stable, and then the star either collapses or goes supernova or whatever.
1:32:58.7 Sean Carroll: So this is, again, not a very sophisticated use of information but it is an example of maintaining stability by using up some fuel. Stars do that, planets do not. As an aside, galaxies are sort of an interesting special case; they're an intermediate case. Galaxies are not exactly stable. Like a star, galaxies do evolve over time. And if you work through the details, galaxies will not last forever. Galaxies are not maximum entropy configurations, and the reason they're not is because you can always increase the entropy of a galaxy through the various gravitational interactions of the stars. Let's forget about stars evolving and exploding, whatever. Think of stars as point masses that last forever, and just think about a gravitating system with a bunch of point masses in it. You might think that you find some stable configuration. If you only had two stars and you were completely Newtonian in your gravity, then you could be completely stable. They would just go in ellipses around each other. There are even certain very special cases of three-body systems in Newtonian gravity that can last forever. But in the generic case, when you have many objects in Newtonian gravity that are just point masses, what you can do is have interactions between the different masses such that one of them gets flung out of the system. It escapes. It reaches escape velocity just through gravitationally bumping into other stars, and it flings outward, and the rest of the system contracts a little bit.
1:34:42.1 Sean Carroll: So the overall total energy of the system remains constant but the entropy goes up through the contraction of most of the stars. This is literally what happens when you form a galaxy or whatever. Those galaxies are going to continue to contract over time by spitting out stars. So they're not perfectly stable, and the entropy is increasing. But when you run the numbers, that process is very slow. So galaxies can look stable for a long time without really increasing in entropy, even though it's not really mechanical stability like the Earth's. It's not that the stars are pressing up against each other; it's just that gravity is a weak force and the time it takes for the entropy to increase is very long. This is just a reminder that physics is complicated and the universe is very complicated. Okay. Anyway, the point is that the existence of this resource, this decrement between the maximum entropy we can have and the actual entropy we do have, provides a way for certain systems to be metastable, like stars, namely to use up fuel, and to use that fuel to maintain some steady state of some sort, a non-equilibrium steady state configuration.
1:36:00.5 Sean Carroll: We mentioned this idea when talking to Addy Pross in the context of life. He has kinetic stability, dynamical kinetic stability, which is a slightly different thing that a chemist would care about, but this is like the physics version of that. Okay. Still, stars are not very complex. It's not very exciting. If that's as complex as you could get, it would not be worth writing home about. There's other systems like, think about the atmosphere of Jupiter. I'm not sure if this is the best example, but it's an example. If you take these pictures of Jupiter, it's gorgeous. Like there's all this stuff going on, all these different colors, the great red spot is there, other spots sort of come and go. The great red spot survives for a long time. That seems more complex than a star. It's still not clever. It's still not goal-oriented or adaptive or anything like that, but there's clearly substructure that persists for some reason.
1:37:00.5 Sean Carroll: And I can guarantee you, even though I haven't gone through the calculations, it wouldn't persist if there were not an input of low entropy energy in there. There's some dynamics going on. The atmosphere of Jupiter is not exactly the same from moment to moment. It's not frozen like the topography of the moon or something like that. It's dynamic, but it's clearly also not just maximum entropy all by itself. There are subsystems. There are components that are interacting with each other. We're beginning to see, even though it's, again, nothing like a living organism, we're beginning to see something that we would more viscerally recognize as complexity, this sort of modular kind of thing where things break into subsystems, each of which plays a different role. And again, it's dependent on this resource that we're using up. If the sun disappeared, and if Jupiter's internal heating, it has radioactive elements inside that heat it up, also ceased, if all those low entropy energy production mechanisms shut off, you would get rid of all the structure in the atmosphere of Jupiter. So it's a temporary, somewhat steady state, persistent but not forever kind of configuration that is dependent on this resource that we're using up. So, clearly the big leap is from this sort of structured complexity, the kind of dumb complexity that you would see in the atmosphere of Jupiter, to something more like a living being.
1:38:33.5 Sean Carroll: The origin of life. I'm not here to tell you how life began. That's an open question. And we don't know all the details about that. But clearly it's sort of very intuitive that living beings are more complex than non-living beings. And how can we think about that? Again, I think it is a matter of a living being is able to take advantage of its information-rich environment, its low entropy environment, to persist. Schrodinger's definition of a living organism, somewhat tongue-in-cheek, but he had something going on in his mind, was something that kept on moving long after it should have stopped. What he has in mind is if you put a rock into a bowl of water, the rock would just float to the bottom or fall to the bottom and then just sit there. It won't move. If you put a goldfish in a bowl of water, it will move around for quite a while, as long as it has food, as long as it has that information resource that it can take advantage of, that can maintain its structural integrity and keep moving around. And that's what makes it living. It's using that resource to maintain its stability. If the food supply gets cut off, the goldfish dies and it becomes more like the rock.
1:39:54.0 Sean Carroll: So I don't know... I'm sure there are many, many people out there in the world who just have a much more sophisticated view of this than I do, because I don't know a lot about biology. But if we talk to Chris Kempes or Addy Pross or Michael Wong or any of those people who we recently talked to about this stuff, they would have a picture of how living beings take in information and use it to survive. The one example I know of is kind of amusing, because it's sort of an example of making a prediction, even though it's a prediction only in the sense that I didn't know what the answer was; the rest of the world knew what the answer was. There's this very well-known phenomenon called chemotaxis. If you put a bacterium in a Petri dish with more nutrients on one side than on the other side of the dish, so there's a gradient, lots of nutrients on one side, very few nutrients on the other, the bacteria, even though from moment to moment they look like they're thrashing around, will in general move toward the direction of more nutrients.
1:40:51.0 Sean Carroll: Somehow they're smart enough to know that they will be happier, they will live longer if they're in the more nutrient-rich environment. Now, that by itself doesn't seem that complex. After all, if I put a ball on a hill, the ball will roll down the hill, and that doesn't require a lot of information and knowledge or anything like that. It's just the ball is instantaneously responding to the forces that are being exerted on it. How is that any different from what the bacterium is doing? So I thought about it, and I thought, if this picture of information as a resource is right, and this picture of life as a more sophisticated complex system than a rock is right, then maybe the bacterium is not simply mindlessly responding to the gradient in nutrients and moving in that direction. Maybe there is something interior to the bacterium that is keeping track. This is the next level of complexity, over and above just modularity, but actually literally keeping a record that is informationally rich.
1:41:59.5 Sean Carroll: So, in information theory terms, you would have mutual information between the state of the bacterium's interior and the state of the nutrient gradient in the outside world. It turns out this is right. I thought about this, and I looked it up, and indeed, there are proteins inside the bacteria that basically keep track of the direction in which the nutrients are bigger. So yeah. The bacterium really is directly and manifestly taking advantage of that informational resource that it has to use. If the interior of the bacterium were in thermal equilibrium or maximum entropy, it would not be able to keep track of what the exterior environment is doing. If you remember the podcast interview with Christoph Adami, he made a big deal about information theory and life, and the way that he puts it is that the whole genome of a living being has a very high mutual information with the whole environment it's in, in the sense that the genome of the living being is selected to survive in that environment. So there's a lot of relationship between what the genome is. A genome that makes you able to survive underwater is very different than a genome that makes you able to survive on land.
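To make the mutual-information idea concrete, here is a minimal sketch in Python of a toy chemotaxis setup. The numbers are made up for illustration (a 90% tracking accuracy, binary left/right gradient), not measured biology; the point is just that an interior that tracks the exterior has high mutual information with it, while a thermal-equilibrium interior has essentially none.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in pxy.items():
        joint = count / n
        mi += joint * math.log2(joint / ((px[x] / n) * (py[y] / n)))
    return mi

random.seed(0)

# Toy chemotaxis: the nutrient gradient points left or right, and the
# bacterium's internal protein state tracks it correctly 90% of the time.
flip = {"left": "right", "right": "left"}
samples = []
for _ in range(100_000):
    gradient = random.choice(["left", "right"])
    internal = gradient if random.random() < 0.9 else flip[gradient]
    samples.append((internal, gradient))
mi_tracking = mutual_information(samples)

# An interior at thermal equilibrium tracks nothing about the exterior.
noise = [(random.choice("LR"), random.choice("LR")) for _ in range(100_000)]
mi_noise = mutual_information(noise)

print(mi_tracking)  # close to 1 - H(0.1), about 0.53 bits
print(mi_noise)     # essentially zero
```

The tracking interior carries about half a bit per observation about the outside world; randomize the interior and that drops to nearly nothing, which is the quantitative sense in which an equilibrium interior "can't keep track."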
1:43:24.9 Sean Carroll: So, that can be quantified using information theory, and I think it all fits very nicely into this picture. Of course, beyond just being a living being, there's the next stage that you would like to talk about, which is a thinking living being. And you know, look, you can go on about this in great detail, but now we're into the realm of biology. Not that biology is not interesting, but there's a bunch of people studying it already. The physicists have very little to offer here compared to the real biologists who know what they're doing. There is a famous book that had a lot of impact, and people still talk about it. Impact can be both positive and negative, or at least it can be controversial as well as praiseworthy. But there's a book called "The Major Transitions in Evolution." I have trouble pronouncing this person's name, but it's by Eors Szathmary. Szathmary, S-Z-A-T-H-M-A-R-Y, I presume a Hungarian name, and John Maynard Smith. So these are both working evolutionary biologists. "The Major Transitions in Evolution." And what they tried to do was to pinpoint moments in the history of evolution where you don't just have a new species or whatever, but you have a new kind of mode of living.
1:44:44.1 Sean Carroll: And interestingly, what they ended up concluding is that the common thread in these evolutionary transitions, from prokaryotes to eukaryotes, or from single-celled organisms to multicellular organisms, all the way up to language use and things like that, was the use of information. Really the transmission of information, which is a little bit different, but it's still along the same general lines. I've learned recently, actually, from talking to David Krakauer here at SFI, that there are real biologists who are skeptical of or pooh-pooh the book, because a lot of the pinpointing of major evolutionary transitions by Szathmary and Maynard Smith was based on feelings. It was not quite as rigorous and quantitative as you might like. But okay. To me that's not a criticism. That just means they're making a hypothesis and now we've got to think about it. So, the common idea is that once you get into life, you have an enormous number of things that you can do. The space of possibilities is much bigger, and a lot of that space consists of how we can use the information resource that we have to survive in this difficult world that we're in.
1:46:05.0 Sean Carroll: One such thing, I'm mentioning a lot of former Mindscape guests, but Malcolm MacIver was a Mindscape guest in the early days and he talked about how there was a transition when fish climbed onto land. When you're a fish, when you live under the water, you're swimming around and the attenuation length of light in water is rather short. It's meters, roughly speaking, and you're swimming in meters per second, so you can't see that far in front of you. When you see something, you don't have that much time to react to it. You better either decide right away whether it's friend or foe or food. Whereas when you climb onto land, now you can see essentially forever. You can see things that are far away and therefore a new mode of information use, information processing, opens up to you. When you are a fish, the only evolutionarily useful mode of information processing is you see something and you react to it. When you're on land, you can see something and you can think about it. You can plan. You can imagine different hypothetical scenarios and sit and contemplate which one of those will be the best. Should I run up on that tree? Should I hide behind the rock? Should I attack this thing? Whatever you want to do. That costs resources to think about things. Thinking in the brain is an energetically costly thing, but if it gives you a survival advantage, evolution will eventually find it. That is an even more sophisticated version of using the information resource that we have that gives rise to a level of complexity that the little bacterium can only imagine or could not even imagine because it doesn't have the capacity for imagining.
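The underwater-versus-land contrast can be put in rough numbers. This is a back-of-the-envelope sketch; the specific distances and speeds are illustrative round figures, not MacIver's actual measurements.

```python
# Back-of-the-envelope planning horizons: how far ahead in time can you
# see trouble coming?  All numbers are illustrative round figures.
underwater_sight_m = 2.0     # light attenuates within meters in water
swim_speed_mps = 1.0         # roughly a body length per second

land_sight_m = 1000.0        # on land, sight lines stretch to kilometers
land_speed_mps = 1.0

underwater_horizon_s = underwater_sight_m / swim_speed_mps
land_horizon_s = land_sight_m / land_speed_mps

print(underwater_horizon_s)  # ~2 seconds: react immediately or not at all
print(land_horizon_s)        # ~1000 seconds: time enough to plan
```

A couple of seconds versus many minutes: that orders-of-magnitude gap in the sensory horizon is what opens up room for deliberation rather than pure reaction.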
1:47:51.7 Sean Carroll: So, all of this is to say, I think you can see the vague outline of a picture of stages of increasing complexity in the physical world, characterized by increasingly sophisticated ways of using the information resource that we have around us. What you would like to do is turn that vague picture into a more quantitative one by coming up with what a physicist would call an order parameter. An order parameter is just a number you can compute that sort of characterizes a phase transition. Like for liquid water transforming into steam or into ice, the speed of sound changes. The equation of state changes dramatically at those transitions. So you can have an order parameter that tells you, yes, you have had a phase transition. What we're trying to do, me and my friends, is use information theory to come up with order parameters that characterize these different stages of complexity.
1:48:57.5 Sean Carroll: And there's no guarantee that it's straightforward. Maybe some things happen in some physical systems. Some ways of using information happen earlier. Stage one happens before stage two in one system, and stage two happens before stage one in another system. It could be a mess. That's the whole joy of complexity is that it's not necessarily simple. It could actually be complex. Okay. So I could stop there. That's my picture of the way that we should think about complexogenesis in the universe and its relationship to information as the difference between maximum entropy and actual entropy. But there's just one other kind of thing that I think is really interesting. Back to the more basic question of what are the features of the laws of physics that allow this to happen in the first place? We can kind of sketch out how it does happen. If the laws of physics were different, then would it still be able to happen? Or is there some feature of the known laws of physics that we can pinpoint that you say, oh, even if I didn't know what the universe was like, if I knew that these were the laws of physics, then it would happen. This is the kind of question that only physicists and maybe philosophers would care about. Biologists don't care about this. They think they know what the laws of physics are, so they can just plug them in. But that's okay. We can be physicists. And we can ask, what if the laws of physics had been different? What is important here? I mentioned from our investigation in the coffee cup that it seems as if the existence of long range forces is an important feature. But there's another feature that I'm becoming increasingly convinced is super important, which is the existence of photons or something very, very much like photons. What do I mean by that? Well, think of a box of gas. I love thinking of boxes of gas.
1:50:50.9 Sean Carroll: I think of different molecules. Think about a box of gas that has sort of two different kinds of molecules. We're abstracting away from real physics here to just do thought experiments. Think of red molecules and blue molecules. And they're bouncing around in a box of gas because it's a gas. So they're moving around. The space of all possible configurations for that system is big. There's a lot of different configurations that the molecules can be in, even at a fixed energy or whatever. There's not a lot of interesting, if it's in a high entropy state, not a lot of interesting information usage in that context. You might need a lot of information to tell me where every molecule is, but it'll be a different configuration in a second. There's nothing stable and interesting. There's no information processing in any interesting way. But imagine that you didn't only have these atoms. You also had the ability to have chemistry, these molecules, I guess I was calling them. In other words, maybe a blue atom and a red atom or two blue atoms or two red atoms could bump into each other and stick to make a molecule.
1:52:03.2 Sean Carroll: I should have called them atoms in the first place because I'm going to call them molecules now. If I were a chemist by trade, I would call them monomers for the individual pieces and polymers for when I'm making a big thing. So, I'm imagining that my little monomers, my little individual particles, have the ability to stick together. And if they come in two forms, red and blue, or zero and one, whatever you want to do, then it's interesting what happens, not to be too provocative about it, but there is an analog to digital phase transition. In the phase where all the monomers, the particles are just bouncing around, that is analog. They could be anywhere they want. There's an infinite number of places they could be. Once they stick together, there's a bit of digital information that comes to life, namely the order. Is it red, blue, blue, blue, red, blue? Or is it red, blue, red, red, blue? There's some storage of information. And of course, that ability to store information in a relatively reliable way is crucial to what we imagine we just were babbling about as higher level ways of using information. It doesn't help if the information is just out there, but you can't store it and use it. And this ability to sort of have a digital version of your configuration of particles is crucial to being able to do that.
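The jump from analog to digital can literally be counted. A small sketch, using the two-color toy model from above: once N two-color monomers bind into a chain, the ordering stores exactly N bits.

```python
import math
from itertools import product

# N two-color (red/blue) monomers bound into a chain: the ordering is a
# stable, digital record.  There are 2**N distinguishable sequences,
# i.e. the chain stores N bits.
N = 5
sequences = set(product("RB", repeat=N))
bits_stored = math.log2(len(sequences))

print(len(sequences))  # 32 distinct orderings
print(bits_stored)     # 5.0 bits

# Free monomers bouncing around have a continuum of positions but no
# stable record: no configuration persists long enough to store a bit.
```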
1:53:29.7 Sean Carroll: Now, if you are a physicist by training, and you are thinking about the space of possible laws of physics, as we're doing right now, there's a very crucial thing about that idea that two particles or two atoms or two monomers will stick together. Namely, generically, that does not conserve kinetic energy. That is an inelastic collision: if you think back to your early physics courses, the total momentum of the system will be the same before and after they stick, but the total kinetic energy of the two things will be different. But of course, chemistry happens all the time. In the real world, atoms do stick together. What is going on? What's going on is the state of the atoms when they are stuck together is slightly lower energy than the state where they're freely moving. That energy didn't just disappear, it is transferred into photons. The atoms give off a photon that carries away the amount of energy that is the difference between the amount they had before and the amount they had after. And the thing about photons is they're really flexible. A photon is a massless particle. Einstein told us that E equals mc squared. What he means by that is that a massive particle has a minimum amount of energy that it can have. When the particle is just at rest, it has E equals mc squared energy. If it's moving, it has more energy than that. Kinetic energy. But if it's just sitting there, there's a minimum amount of energy it can have given by mc squared.
1:55:02.3 Sean Carroll: A massless particle, on the other hand, has a minimum energy of zero. Mc squared is just zero. And it can have any higher amount of energy, because it can also have kinetic energy. So a photon, which is a truly massless particle, can have any energy at all as measured in some rest frame. That feature, which we sort of take for granted, is crucially important to allowing chemistry to happen. Because different kinds of atoms with maybe different initial energies will stick together, and they will need to be able to give off arbitrarily different amounts of energy in order to conserve energy and stick together. They have a very definite amount of energy while stuck together, and because they could have any velocities before they stick together, they could have any energy beforehand. So photons, which are particles that can carry away any amount of energy you want, play an absolutely crucial role in this analog to digital transition. They allow for it to happen in a very, very real way. And as for this idea that digitalization of the information matters, we clearly know examples where it is crucially important. Schrodinger again, in his book "What is Life?", famously predicted the existence of something like DNA. His argument was the following. He's a physicist, clearly, so statistical mechanics was one of the things he was very good at. He's thinking about molecules or atoms bumping into each other, and he knows that they can't actually convey a lot of information because they're randomly moving around. If they cooled down and your molecules became a solid, a crystal, let's say, as a specific example of a solid, then you've cooled down but you're still not containing a lot of information.
1:56:57.2 Sean Carroll: Think about a crystal: a crystal of salt or diamond or whatever is just atom after atom after atom. There's no new information contained. If you know that you're in a diamond, you know that if I'm a carbon atom here, next to me there's going to be another carbon atom. No new information has been conveyed. So Schrodinger said, for a living being to have something like genetics, something like the ability to send its genome down to subsequent generations, it must have a configuration of atoms inside that contains information in a relatively stable form. So it can't be a real crystal, because real crystals are just predictable and carry no information, but it also can't be a gas or a fluid. It has to be what Schrodinger called an aperiodic crystal. That is to say, an arrangement of atoms in the form of a molecule where you don't know what the next grouping is just by knowing what your current grouping is, basically an alphabet, or a way of conveying different bits of information at different sites of what he was thinking of as this aperiodic crystal. And of course, now we know it's DNA that does this.
1:58:10.7 Sean Carroll: RNA also does it to some extent, but RNA is less stable. Then there's a whole long story; I'm gonna avoid the temptation to start talking about RNA world and the origin of life and things like that, but it's precisely this analog to digital transition. DNA is very, very much analogous to the sort of red and blue balls sticking together that we were just talking about. You don't know, from knowing the identity of a certain nucleotide, what the next one is gonna be, and therefore, in Claude Shannon's sense, each nucleotide carries information in an interesting way. So, you can kind of see the importance of photons here, and just for fun, I can almost imagine an anthropic principle argument for gauge symmetry. So this is wackier, we're getting late in the podcast, but the existence of a massless particle that interacts non-trivially with ordinary matter should not be taken for granted. So, consider the following argument. Intelligent observers are complex information processing systems. That's what we think we are. And the anthropic principle is supposed to say that the one thing we know about the conditions in our universe, and the laws of physics underlying them, is that they have to be compatible with the existence of intelligent observers; otherwise you wouldn't have any intelligent observers there talking about the laws of physics.
1:59:44.8 Sean Carroll: So, at this level of analysis, what we mean by intelligent observer is some complex information processing system. Now, a complex information processing system is a configuration of matter, which as we just discussed, it won't be created unless you can dissipate extra energy away. If you think about the space of all possible configurations of atoms to make a complex system, at any one energy, if you start with the atoms all moving, it's very, very unlikely that you will find a configuration where the atoms are stuck together with exactly the same energy. That's almost a set of measure zero. It's very, very difficult to find. You can only find these sort of interestingly complex structures by going down in energy, by decreasing the energy of the atoms, by giving away the energy to some other part of the universe. Dissipation, that's exactly what it is. When can dissipation happen? Well, you need low mass particles, massless particles would be the best, because they can have any energy at all and they can carry it away. But neutrinos or something like that are low mass particles.
2:00:57.9 Sean Carroll: They're of no help whatsoever because they don't interact that much. It's very, very hard to get a neutrino to carry away energy. So you want a low mass particle, but one that interacts noticeably. That really has a non-zero chance of carrying away some energy. Now, there's a separate particle physics argument, but basically the only low mass particles that interact noticeably with other particles are gauge bosons, something that we've talked about because volume two of "The Biggest Ideas In The Universe" talks a lot about gauge symmetries and the fact that the existence of a symmetry in the quantum field theory context leads directly to massless particles. Ordinarily, if a particle is massless, it's only because it doesn't interact with anything. Gauge symmetries allow for interactions while keeping particles massless. And therefore, the existence of intelligent observers relies on the existence of gauge symmetries. And so you can make photons. So there you go. I've explained why gauge invariance exists. It's because of the anthropic principle. I wouldn't take this very seriously, but I think it's amusing to think that when we know the laws of physics as well as we do, sometimes it's tempting to take for granted the features that they have.
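The energy bookkeeping behind that sticking argument can be made explicit. A minimal sketch in arbitrary units, with an assumed, made-up binding energy, and with momentum bookkeeping omitted for simplicity: whatever kinetic energy the incoming atoms happen to have, a massless photon can carry away exactly the required surplus.

```python
# Sticking ("inelastic") collision bookkeeping, in arbitrary units.
# The bound pair sits BINDING below the free state; a photon carries
# away the surplus.  BINDING is an assumed illustrative value, and
# momentum bookkeeping is omitted for simplicity.
BINDING = 4.5

def required_photon_energy(ke1, ke2, ke_pair):
    """Photon energy needed so total energy is conserved in the sticking."""
    return (ke1 + ke2 + BINDING) - ke_pair

# Different incoming kinetic energies demand different photon energies,
# and a massless photon can carry *any* positive energy, so the pair
# can always stick.  A massive emitted particle could never carry less
# than its rest energy m*c**2, so many such collisions would be blocked.
for ke1, ke2 in [(0.3, 0.2), (1.0, 2.0), (7.7, 0.1)]:
    print(required_photon_energy(ke1, ke2, ke_pair=0.4))
```

Every row of the loop needs a different photon energy; a carrier particle restricted to a minimum energy could not balance the books in all of these cases, which is the sense in which masslessness matters.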
2:02:14.3 Sean Carroll: But if you imagine different laws of physics, things might've been very different up to and including life might've been impossible. Okay. Let me just wind up with yet another amusing observation. Like we don't have the full picture here. I think that's pretty obvious. We're groping toward this idea of complexity increasing because we have this information resource and therefore there's a lot of different ways that we can put it to use. And the issue is, the homework assignment is, quantify all the ways that physical configuration of matter can use that information resource to survive and argue that it's gonna be more and more complex ways of doing that. Okay. So like I said, I could stop there, but the final observation is the following. Once you do this analog to digital transition, once you have dissipation so that you can take your configuration of matter, it can cool down into a complex information rich configuration of stuff, the space of possibilities that are relevantly different from each other becomes enormously big. This is the crucial thing that I'm still trying to wrap my head around to sort of figure out how to quantify it and say it exactly.
2:03:32.9 Sean Carroll: Even though in the box of gas there's a huge number of different places and velocities the molecules can have, they're all kind of the same, in some intuitive sense that you would like to make quantitative. Once the molecules start sticking together, then they're not the same anymore. Then the different orderings of the monomers into the polymer matter. So, how big is the space of possibilities? When you say, okay, there are many different ways to put those atoms together to make a DNA or whatever, how efficient could we be? Suppose you think that a DNA molecule carries the information that turns into the blueprint for a big macroscopic organism, and you think that natural selection can be thought of as sort of searching through the space of genomes for things that climb up to peaks in a fitness landscape and can survive in a harsh environment. How good a job can we do in exploring that space of possibilities? Let's make it explicitly the space of genomes. So let's imagine we have nucleotides and we're putting nucleotides into DNA, and we have an alphabet which has four letters in it, G, C, T, A, those are the nucleotides that go into making DNA, and we're gonna put them in different orders. And the human genome has approximately three billion base pairs in it, so three billion nucleotides in some specific kind of order. Is that because the universe has searched through all possible DNA strands up to three billion in length and found the perfect one? No.
2:05:16.3 Sean Carroll: So, just to quantify that, how big the space of possibilities are, imagine that we took all of the protons and electrons in the universe, in the observable universe, 10 to the 88th of them, as you now know, imagine we got, let's say, because we don't have the capability to do this, imagine that we put them all into the form of DNA. Indeed, into the form of base pairs for DNA, and then we're gonna assemble them into strands with N base pairs in each strand. And imagine that we do this one billion strands per second. So, we're gonna keep taking all the matter in the universe and we're gonna put it into different combinations of G, C, T, A, and we're gonna keep shuffling through all those combinations a billion times a second.
2:06:07.4 Sean Carroll: And we're gonna somehow magically perceive which of those make good living organisms and which ones would fail. And how much time do we have within the age of the universe, 10 billion years? How long a strand of DNA could we search through comprehensively, so we really checked every single possibility? The answer is about 180 base pairs. And that's assuming we had the entire universe devoted to this program of putting all the different base pairs in different orders and asking whether they were good or bad, and that we could do that a billion times a second, all of which is entirely unrealistic, of course, just so you know. We're just trying to make a point here. The point is that there's not enough matter or time in the universe to search through all the possible configurations of nucleotides in DNA. Even if we did it in this completely unrealistic way, we wouldn't get even to length 200 of our hypothetical genomes here. The human genome is three billion base pairs long. So, we search through the space of possibilities in an enormously inefficient way.
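That figure of roughly 180 base pairs can be checked with a few lines of arithmetic, under the episode's deliberately over-generous assumptions: every proton and electron serves as a base pair, strands are reshuffled a billion times a second, and the search runs for ten billion years.

```python
# How long a genome could the universe search exhaustively?  Deliberately
# over-generous assumptions from the episode: every particle serves as a
# base pair, a billion fresh strands per second, ten billion years.
PARTICLES = 10**88              # protons + electrons in the observable universe
SHUFFLES_PER_SEC = 10**9        # a billion new strands per second
AGE_SEC = 10**10 * 3.15e7       # ten billion years, in seconds

def exhaustible_length():
    """Longest strand length n for which all 4**n sequences could be tried."""
    n = 1
    while True:
        strands_at_once = PARTICLES // n          # one particle per base pair
        total_trials = strands_at_once * SHUFFLES_PER_SEC * AGE_SEC
        if 4**n > total_trials:
            return n - 1
        n += 1

n_max = exhaustible_length()
print(n_max)  # roughly 186: "about 180", and nowhere near 3 billion
```

The exact answer shifts by a few base pairs if you tweak the assumptions, but it never gets anywhere near three billion, since the number of sequences grows as 4 to the power of the length while the available trials grow not at all.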
2:07:23.1 Sean Carroll: We can't possibly be comprehensive about it. That's why natural selection uses randomness and culling. You randomly mutate your DNA, and then if you're unsuccessful, you die out. And if you're successful, you reproduce. That's how natural selection works. And you're not going to do anything like a careful examination of all the possible things you can do in a DNA strand that is three billion base pairs long. You're going to explore an enormously tiny fraction of that available landscape of possibilities. So, what that implies to me is that in this process of increasing complexity, of increasingly sophisticated ways to use the information resources that we have in the universe, we're nowhere near done. We could easily imagine configurations of matter that are, in some hopefully definable sense, way better at surviving than we are. Because who knows what you could do by organizing these DNA base pairs in better, more efficient strands to make better organisms? Not that I'm advocating doing this. That's the thing about natural selection: it's mindless, it's not teleological. But I guess this is a good place to end.
2:08:50.8 Sean Carroll: One of the things that happens in this progress, progress is, of course, the wrong word here, in this progression, this evolution from simplicity to complexity, as these subsystems of the universe become increasingly sophisticated at using the informational resource around them, is that we can do something called imagining the future, as we mentioned with Malcolm MacIver's episode, et cetera. The reason why that's interesting is because physics-wise, the universe is governed by two things. Number one, the laws of physics, and number two, a past boundary condition: the low entropy of the universe in the past, the past hypothesis, as David Albert has dubbed it, the fact that the early universe starts with very low entropy. There is no future hypothesis. There's no future boundary condition. For whatever cosmological reason, our universe has a boundary condition at one end of time but not the other. And so all of the coarse-grained questions about the evolution of the universe can be addressed by the underlying laws of physics plus an initial condition, not a final condition. There's nothing you need to know about the future in order to predict how systems will evolve from the past to the future. There is something you need to know about the past, the low entropy of it.
2:10:18.6 Sean Carroll: There's nothing you need to know about the future. The only condition is in the past. But there are also systems in the universe that have goals. A goal is a future state that you would like to reach. These systems are, of course, living systems. Living systems have goals. As far as we know, non-living systems don't have goals. They might have places they go to, but they're not trying to go to there because they have a purpose in the same way that living systems do. Living systems can envision where they want to be and work toward getting there. Like if you drop a ball and it falls toward the floor, if you were Aristotle, you might casually say the ball wants to be at the floor. It has a nature to be down there. But I could just catch the ball. I could just put my hand there and stop the ball from reaching the floor. If there is a cat that wants to get a mouse and okay, so the cat moves towards the mouse, I could try to step in the way of the cat, and the cat would move around me.
2:11:26.0 Sean Carroll: The cat would change its behavior in order to keep pursuing its goal in a way that the falling ball doesn't because the cat is a living organism that can imagine things in the future. So somehow, somewhere along the progression from very, very simple systems that are just increasing their entropy and evolving along with the universe to more complex information-utilizing systems, the boundary condition that is initially only in the past, these individual subsystems of the universe invent a future boundary condition for themselves, some state in the future that they would like to reach. That is nowhere to be found in the microscopic laws of physics or anything like that. Laplace's demon just works moment by moment. There's no future boundary conditions. But this appearance of complexity, this complexogenesis, this increasingly sophisticated use of information, allows us to have future goals. And I think that's really interesting. I would like to know, I would like to be able to sort of quantify what is the moment at which that happens. I don't think that the bacterium doing chemotaxis really has a future goal. I think it's just responding to the moment in some way.
2:12:46.5 Sean Carroll: But I have future goals. I want to finish this podcast. I want to publish it, et cetera, in very down-to-earth ways; no one argues with that. So somewhere along the evolutionary tree between me and the bacterium, the idea of a goal appears. That has to be one of the stages in complexogenesis: the ability to formulate goals, future configurations that you will try to get to, even if I don't know exactly how I'm going to get there. I don't know what time of day I'm going to write the show notes for the episode or whatever. But I do know that on Monday morning I'm going to publish the episode. That's a fascinating thing that can happen in this information-utilizing capacity that we develop as complex creatures in the aftermath of the Big Bang. Of course, eventually we'll all go away. It doesn't last forever. The cream and the coffee do mix together. The complexity will eventually disappear. But even though the stars are mostly already formed, a human lifespan is of order 100 years, and 100 years is nothing compared to the timescales on which entropy is increasing in the universe and complexity is developing in the universe.
2:13:58.9 Sean Carroll: So, I don't know about the universe as a whole, but I think that unless we do something dumb and kill ourselves, there's a lot of room for increased complexity and increased sophistication in our use of information here on Earth. If that's something that we value, then that's maybe a goal that we can have to keep that going, to keep that surviving and not do dumb things and destroy ourselves here on Earth. That's a good place to stop. So, thanks for listening to Mindscape. I will talk to you next time.
[music]
It has been 5 years since you talked with Scott Aaronson on your podcast. Do it again, please! Thank you, Sean.
Whatever started the universe, I’m positive that God did it.👍
Insightful and clear discussion as usual, Sean.
However, and I don't know if there is a real divergence here (it might simply be the way you chose to phrase the closing example), I think we perhaps should not be so surprised that the cat will manage to dodge us and strive to pursue its goal of catching the mouse. In another podcast from a while ago, you gave interesting examples about the language we can use and the depth we can reach by asking further questions. That was the "kid's inquiry" or something like that. We could ask why the cat would have such a goal, and answer that it is because it is hungry. Then ask why it is hungry, and answer that it hasn't had a meal. Then ask why it needs a meal, and answer that it needs it to keep its cellular metabolism working. And keep going, you know.
Thinking as a physicist, which I think is the correct way of thinking about these things, in the sense that we don't need to invoke metaphysical explanations for emergent behaviour, purpose, and goals, the goal of the cat derives from the broader evolutionary context and its entire line of descent, and from the examples you yourself neatly explained regarding matter interacting with itself (the box of blue and red atoms, etc.). Because of gravity and quantum mechanics, which you know and understand far better than I do, I see that there is already some information in the initial conditions defining the interactions among those particles. Speaking perhaps in lay and informal terms, that is the basis of the "sensors" that animate matter ("life") acquires, inherits, and develops into sophisticated mechanisms over evolutionary time frames to interact with a progressively more complex ecosystem. Sensory perceptions are physical in nature, and biochemistry is, among other things (e.g., maintaining cell metabolism), working within the constraints of physics to generate outputs and reactions to given inputs. Goals like those the cat has evolved to possess have merely been fine-tuned as a result of the already complex and constantly changing energy landscape (or ecosystem; I tend to think they are fundamentally related). This makes our purposes and goals also entirely emergent properties of the past systems that we inherited, and of the current one in which we happen to be existing for the moment. That is, simply put, our goals are contingent on our past histories, genetically and culturally (culture also preserves an abstract structure of information resulting from the prolonged existence of societies). I don't know if I have managed to clarify my perspective in words, but I would be happy to hear and/or discuss more about this.
Something to think about that may bother some people: does this make our aspirations and projections for the future really "free", since they may be predetermined by the conditions of the previous states building up in our cells and brains?