168 | Anil Seth on Emergence, Information, and Consciousness

Those of us who think that the laws of physics underlying everyday life are completely known tend to also think that consciousness is an emergent phenomenon that must be compatible with those laws. To hold such a position in a principled way, it’s important to have a clear understanding of “emergence” and when it happens. Anil Seth is a leading researcher in the neuroscience of consciousness, who has also done foundational work (often in collaboration with Lionel Barnett) on what emergence means. We talk about information theory, entropy, and what they have to do with how things emerge.

Support Mindscape on Patreon.

Anil Seth received his D.Phil in Computer Science and Artificial Intelligence from the University of Sussex. He is currently a professor of cognitive and computational neuroscience at Sussex, as well as co-director of the Sackler Centre for Consciousness Science. He has served as the president of the Psychology Section of the British Science Association, and is Editor-in-Chief of the journal Neuroscience of Consciousness. His new book is Being You: A New Science of Consciousness.


0:00:00.0 Sean Carroll: Hello, everyone. Welcome to The Mindscape Podcast. I’m your host, Sean Carroll. Long time listeners, readers, etcetera, will know that I’m not someone who thinks that consciousness is a separate ontological category out there in the world. Now, we’ve talked about consciousness on the podcast a number of times; David Chalmers, Philip Goff and other people. And a lot of people, including those two, Chalmers and Goff, think that we can’t just explain consciousness as the motion of material stuff in the universe, right? Pure physicalism. We need to have separate categories for mental actions and properties, and so forth. It’s a little bit vague [chuckle] in my mind, what other people want. I don’t want that, okay? So, people like me go around saying, consciousness is emergent from an underlying purely physical structure, and we can go into what that means. It’s not that we know how it emerges. I’m not claiming that, but we know enough about the underlying behavior of the physical stuff that it’s very, very difficult to imagine adding in new stuff that would somehow be responsible for consciousness.

0:01:11.9 SC: And so, the word emergent in that set of claims plays an important role and the people who are skeptical of people like me will often say like, what do you mean by emergence? Like, you’re just… That’s just as magical and wish-hoping as our idea that there’s a separate ontological category and that’s completely fair, right? I mean, we do understand a lot about the underlying stuff, the electrons and protons and neutrons, and the different forces that push them around, that stuff we understand very, very well. To say that at some higher level of description, complicated things turn into consciousness without adding any new ingredients, is a big leap and we would like to understand that better. So, forget about consciousness, it’s really important to understand what you mean by emergence, what is it, when does it happen, under what physical circumstances does a complicated system exhibit emergent behavior?

0:02:11.7 SC: So, today’s guest is Anil Seth who is a leading researcher on consciousness, and in fact, Anil has a new book coming out that I can recommend to you called Being You: A New Science of Consciousness, where he pushes this line, that consciousness is an emergent phenomenon out of the physical stuff. It’s always good when people who you wanna have on the podcast have a new book coming out, then they are very much more likely to say yes when you invite them on the podcast. But even better than that, Anil is someone who thinks very carefully about this idea of emergence. He’s not just saying, hey, yeah. Don’t worry, it’ll emerge. He’s thinking both about consciousness and about emergence for its own sake. And in fact, coincidentally, he and his collaborator Lionel Barnett just came out with a paper called Dynamical Independence: Discovering Emergent Macroscopic Processes in Complex Dynamical Systems.

0:03:01.6 SC: So again, forget about consciousness for the moment, just think about complex systems, and ask yourself under what circumstances do you get emergent behavior? Somehow… We know a lot about what we want to have happening when emergence happens: we want to be able to describe systems pretty well on the basis of very, very, very incomplete information. We don’t know all the positions and velocities of all the different atoms that make up you or me, or a cup of coffee or anything like that, and the descriptions that we get of that behavior at the macroscopic level, the emergent descriptions, can look and feel very, very different than the underlying descriptions. I personally think this is one of the biggest barriers to people getting what it means to say you have an emergent description, because we tend to think that we are Laplace’s Demon [chuckle], that, sure, I don’t know where all the atoms are but I know where a lot of them are, and that’s almost as good, right?

0:03:57.4 SC: But we have nowhere close to the information you would need to be Laplace’s Demon, and so what we need to do is understand the relationship between the underlying theory and the emergent theories, and Anil and Lionel Barnett just wrote a paper about exactly that. So, because he wants to promote his new book on consciousness, and I wanted to talk about emergence, I invited him on the podcast and mostly I’ll admit, we talk about emergence, ’cause that’s [chuckle] what I wanted to talk about. So, this is really the first full podcast episode mostly devoted to the topic of emergence and what it is, but we do get into consciousness, because there are similarities between the general theory of emergence and the general theory of consciousness for its own sake. You know, the conscious brain looks at the world and gets a very tiny slice of the pie, right? It doesn’t see all of what’s going on in the outside world, but nevertheless, it constructs a story about it. That’s an amazing thing, maybe there’s some relationship there between what the brain is doing and how we talk about emergence more generally.

0:05:00.1 SC: I don’t know, I’m crossing my fingers, maybe it’s true, this is all very cutting edge stuff. Again, plenty of work here to be done for future generations of smart young people growing up to be scientists and philosophers and thinking hard about this. So, let’s go.

[music]

0:05:31.2 SC: Anil Seth, welcome to The Mindscape Podcast.

0:05:32.9 Anil Seth: Thanks for having me Sean.

0:05:34.9 SC: So, you’re one of the neuroscientists out there who is willing to talk about consciousness, I mean, there’s many neuroscientists who talk about consciousness, but you’re even willing to talk about the more philosophical side of things. And we know… I’ve had people like David Chalmers and Philip Goff on the podcast who think that there is this hard problem, right? This impossibility of explaining the first person perspective of conscious experiences, if we’re just physicalists, materialists, who think that it’s just a collective behavior of atoms in the brain. So, tell us just to set the stage where you come down on these kinds of questions, how do you think about consciousness vis-a-vis the physical stuff of which we are made?

0:06:17.7 AS: I suppose I’m a pretty standard physicalist or a materialist, that I think my starting position anyway, is that consciousness is part of the natural order of things. It’s part of the natural world. Everything that we know closely ties it to the brain, at least in some way, at least the kind of consciousness that I’m interested in explaining, which is the consciousness that we are familiar with in our everyday life. The difference between being awake and aware and having experiences of drinking coffee or watching TV, and falling into a dreamless sleep or going under general anesthesia. These are differences in consciousness that apply to human beings and probably to many other living organisms as well. And they do seem to be closely coupled with something about the brain, the question is what?

0:07:09.4 AS: So now that’s a fairly typical, empirical, physicalist standpoint. I am, in the end, a little agnostic about how consciousness will turn out to be part of our overall story of the universe. The idea of the hard problem, which you mentioned, comes from David Chalmers, who has been extremely influential and very articulate in putting this apparent mystery: the idea that we could explain everything about how the brain works in physical terms, how neurons interact with each other, how they explain all the capabilities and functions of things that brains do. And these functions can be things in the vicinity of consciousness, how perception works, how we pay attention, but to Chalmers, there’s always going to be something left over. Why should any of this physical processing be associated with or identical to the redness of red or the sharpness of a toothache?

0:08:10.0 AS: Why is there anything going on for the system in terms of subjective experience? That’s the hard problem, how does consciousness fit into our physical picture of the universe as a whole? And that’s where you get these kind of menu of metaphysical options.

0:08:26.0 SC: Yeah.

0:08:26.5 AS: You have dualism, that there are two completely separate modes of existence. Then there’s the awkward problem of how they interact. You have panpsychism, which I think is an easy get-out from the whole mystery. [chuckle] Just says, well, if you can’t figure it out then we’ll just say… You can just build it in from the ground up and say it’s here, there, and to some extent everywhere. Or just as bad in my view, idealism, which says, well, consciousness is kind of all there is and the problem is not how you get mind from matter, but how you get matter from mind. So, I’m actually… I don’t know the ultimate resolution of that. I also think that conscious experiences exist. There’s another camp, which is the sort of strong illusionist camp, which says something like, we’re mistaken about there being a mystery at all. When we think conscious experiences are something special that are hard to fit into the picture of the universe, well, that’s just ’cause we’re misunderstanding in some crucial way what the explanatory target is, but I just prefer to start, almost as a practical matter, from the position that conscious experiences exist.

0:09:32.6 AS: In fact, I think that’s probably the only thing that I’m really sure of is that I am having conscious experiences. I’m also pretty sure that there is an objective physical reality out there…

0:09:45.4 SC: Good. Come to the right podcast.

0:09:48.2 AS: Consisting of something, and you physicists will know much more about [chuckle] what that is. The problem is, how do we relate the two? And in trying to relate the two, maybe this apparent mystery of the hard problem will evaporate, will dissolve in a similar though not identical way to how the apparent mystery of life eventually evaporated, when people got on with the job of explaining how living systems work. So I call it, a bit tongue in cheek, the real problem of consciousness: to explain why conscious experiences are the way they are in terms of things happening in brains and bodies. And by pursuing that agenda hopefully, though that’s not guaranteed, but hopefully, the big metaphysical hows and whys will become less mysterious.

0:10:33.2 SC: So, I’m… I don’t know anything about consciousness at a detailed level myself other than being an avid user of it, but I do know something about physics and the physical world. So, I have gone on record and even written a paper trying to explain how, whatever consciousness is, whatever is gonna be the ultimate explanation for it, don’t make your first move to change the laws of physics to account for it. Which, fine, I mean, that’s a whole school of thought, but then, so what does account for it? And the word I like to use is emergence, right? How there’s different levels of description and there’s a higher level where we talk about people and consciousness and so forth. So, I had David Chalmers on the podcast, and I used that word, emergence, ’cause I do, and I actually brought it up here because I wanna quote David exactly, I don’t wanna misrepresent him. He said, “Yeah, my view is that emergence is sometimes used as a kind of a magic word to make us feel good about things we don’t understand.” [chuckle] “How do you get this from this? Oh, it’s emergent. But what do you really mean?”

0:11:36.8 SC: So, now to be fair to David, he’s thought a lot about emergence and written about it, but clearly he’s a little bit skeptical that it’s gonna do enough of the work. Are you in the camp that says that we should be able to ultimately some day think about consciousness as an emergent phenomenon?

0:11:52.8 AS: I think emergence used properly and carefully… So I’m with David on this, that it’s not to be used as some sort of elixir or magic sauce, special sauce that just re-labels the mystery. You don’t just solve consciousness by replacing it with another mystery, but there is something intuitive about many systems, complex systems, that admit of multiple levels of description. And the brain is a highly complex system, as we know it, composed of 86 billion neurons and a thousand times more connections between them. Something like that, very, very complicated. Yet it gives rise to relatively easily characterisable macroscopic properties, large scale properties, whether that’s the behavior of a whole organism or a mental state or a single perception, like having a unified perceptual experience of what’s going on around me right now. There are things that apply to the collective rather than the individual.

0:12:57.9 SC: Yeah.

0:12:58.4 AS: So how do we characterize that relationship? I think, it’s almost trivially true to say that consciousness emerges from neural activity. It’s… The devil is in the detail, what do we mean by that? How does that actually help shed explanatory light on the relationship between the level of description at some lower level, whether it’s neurons or some other level, and the level of description of what’s going on for me as a conscious subject?

0:13:28.3 SC: Is it worth trying to go into the difference between weak and strong emergence? Is that a difference that you care about?

0:13:35.7 AS: Yeah, I think definitely, and I think from what I’ve read of what you’ve written about emergence, you care about it as well.

0:13:41.7 SC: I do.

0:13:43.3 AS: Which is good, I think we all should, because it’s in these distinctions that emergence transitions from being just another mystery to, I think, something we can get both a theoretical and quantitative grasp on. So this idea, at least as I understand it, and I wonder if you understand it the same way, that strong emergence is the more mysterious idea of emergence, where you might have some macroscopic property that is, in principle, not explicable by or reducible to the microscopic components that make it up. And furthermore, that it may exert some sort of downward causal power on these micro-level constituents, affect them in some way.

0:14:26.1 SC: Yeah.

0:14:27.8 AS: That goes beyond the causal interactions unfolding among the micro-level components themselves. This is weird. You know, this is a kind of…

[chuckle]

0:14:38.0 AS: It’s uncomfortably close to magic to talk about emergence this way. It’s unclear how it fits into a physicalist picture of the universe. There are some philosophers who will claim that it can, that there’s no real problem with bringing new things in at a higher level like this. But for me, it’s a little bit of a dramatic move. I don’t quite know what to make of it, how it would actually work. And I think most tellingly, there aren’t very many good examples where you would be tempted to think that this is happening and very revealingly, one of the only examples that reliably comes up is consciousness.

0:15:15.2 SC: Yes, that’s right.

0:15:15.6 AS: So it’s just… This is this whole [chuckle] reciprocal mystery thing. Now weak emergence is very different. It preserves the intuition that the whole is more than the sum of the parts in some sort of interesting way. And the good thing is that there are many examples. There are things like gliders in John Conway’s Game of Life. The example I like to use is flocking birds. There are really nice computer simulations of birds flocking, but I see them most evenings here in Brighton over the ruins of one of our old piers. You have these flocks of starlings that… Murmurations of starlings, I don’t know what they’re properly called.

0:15:52.2 SC: Right.

0:15:52.6 AS: That flock together before roosting for the evening, and the flock really does seem to have a life of its own, and it seems very appealing that the behaviour of individual birds within the flock is somehow guided by the flock as an entity, that they’re sort of flying around, remaining part of the flock in some way. But there’s nothing mysterious going on here. They are just birds following local rules for how they fly together, as far as we know. So, you can simulate things purely locally. The birds are obeying local rules, and if you set it up the right way, you get what looks to an external observer like an emergent property, something that’s more than the sum of its parts. And so, the question is, how do you operationalise that? How do you become a bit more specific about what systems display weak emergence and what don’t?
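[Editor’s note: to see how little machinery the local rules need, here is a minimal sketch of a boids-style flock in the spirit of Craig Reynolds’ classic model. The rule weights and numbers are illustrative assumptions, not Seth’s simulation; the point is that each bird consults only its neighbours, never any global “flock” variable, yet a flock appears.]

```python
import numpy as np

def boids_step(pos, vel, r=1.0, w_align=0.05, w_cohere=0.005, w_sep=0.05, dt=1.0):
    """One update of a toy flock: each bird reacts only to neighbours
    within radius r. Purely local rules; no global 'flock' variable."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < r) & (d > 0)
        if nbrs.any():
            # alignment: steer toward the neighbours' average velocity
            new_vel[i] += w_align * (vel[nbrs].mean(axis=0) - vel[i])
            # cohesion: steer toward the neighbours' centre of mass
            new_vel[i] += w_cohere * (pos[nbrs].mean(axis=0) - pos[i])
            # separation: steer away from neighbours that are too close
            close = nbrs & (d < 0.3 * r)
            if close.any():
                new_vel[i] -= w_sep * (pos[close].mean(axis=0) - pos[i])
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, size=(50, 2))   # 50 birds in a 2-D world
vel = rng.normal(0, 0.1, size=(50, 2))
for _ in range(500):
    pos, vel = boids_step(pos, vel)     # a coherent flock emerges
```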

0:16:50.3 SC: Right.

0:16:51.3 AS: And here, I’ve been most influenced by the philosopher Mark Bedau, who describes weak emergence as something for which there is an explanation, a macroscopic property for which there is an explanation in terms of microscopic components, but it’s what he calls incompressible. You can only figure out what the global property is by simulating exhaustively the microscopic interactions. And that’s, I think, quite a nice starting point, but it’s a kind of all or none starting point. So I think that nowadays, and this is something I’ve been interested in for, well, more than a decade now, is, how do we get a little bit more empirical, quantitative, graded about these things? Given a system, can we measure the extent to which a macroscopic property, like a flock or some other property, maybe it’s some global activity pattern in neurons, can we measure the extent to which that is weakly emergent from its constituent parts?
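[Editor’s note: Bedau’s “incompressibility” is easiest to feel with the Game of Life gliders Seth mentioned above. A minimal sketch follows; the grid size and glider pattern are just illustrative choices. The point is that there is no shortcut formula for the glider’s trajectory: you learn it by running every local cell update.]

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    # count each cell's eight neighbours by summing shifted copies of the grid
    nbrs = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # a cell is alive next step iff it has 3 neighbours, or is alive with 2
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

grid = np.zeros((20, 20), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a glider
    grid[y, x] = 1

# The glider, a macroscopic travelling pattern, is only discoverable by
# simulating the microscopic rules exhaustively: weak emergence in
# Bedau's sense.
for t in range(40):
    grid = life_step(grid)
```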

0:18:02.1 SC: So yeah, you’ve sparked a lot of ideas in my brain. I know that you’re the guest on the podcast, but let me just say a couple of things that come to mind when you say that, and you can choose to respond to them or not. First, I think that it’s a terrible choice of vocabulary that we’re stuck with to talk about weak and strong emergence, ’cause they’re almost opposites of each other, right? [chuckle] They’re not two different versions of the same thing. The whole idea of weak emergence is that everything that’s there is in the microscopic components ultimately, and emerge, in that sense, means you look at the collective behaviour of it. Whereas strong emergence means that when you have this collection, something new appears and the emergence has a totally different kind of meaning. It emerges out of something that is not just the microscopic dynamics by itself. So, be that as it may, maybe that is what is contributing a little bit to the confusion.

0:18:53.0 SC: Having said that, I have thought about it hard, and I do think that it is not insensible to imagine something like strong emergence. The example I would give is an atom or an electron, an elementary particle obeys the laws of physics, and those laws of physics are really, really local, right? They say the electron cares about what is going on in other quantum fields at the point where the electron is nowhere else. But what if the real laws of physics say that that’s pretty good when you have two or three or 10 electrons, but when you have 10 to the 23, it’s not good anymore. There are literally new laws that come in and there’s some feature of the organisation that the electron is stuck in that needs to be taken into consideration. I think that would count as strong emergence and it would also be completely incompatible with everything we know about physics. [chuckle] So, you’re welcome to think about it, but it is something very different.

0:19:49.0 SC: Whereas, just to finish up, maybe the idea of strong emergence does make sense when both your sort of finely grained theory and your coarsely grained macroscopic theory are themselves theories of complex structures. So like with the starlings or the birds flocking, a bird is not an electron, [chuckle] right? So a bird has its own internal structure and its memory and things like that, and so, maybe when you relate those two levels to each other, there is some sense in which strong emergence is a useful concept to lean on, but when you’re relating the brain to the atoms of which it’s made, I don’t see how it can personally make sense.

0:20:33.4 AS: Let me respond to both of those, I think there’s… I kind of agree mostly, although I quite like the weak, strong terminology, because for me, it echoes other domains in which that terminology has been used. And it’s often the case that people are initially attracted to the strong version of whatever phenomenon it is, whether it’s something like strong artificial intelligence, which is supposed to connote genuine intelligence rather than the simulation of it or strong artificial life, similar idea. There’s something about the strong X in which the X possesses some quiddity, some essence…

0:21:12.8 SC: Yep.

0:21:13.6 AS: Of the phenomenon that you’re talking about. But it almost always turns out that in fact, you make more progress by taking [chuckle] a weak stance and thinking, “Okay, how do we simulate, how we understand the mechanisms that exhibit some of the properties that we associate with this phenomenon, but without trying to sort of build it in as a fundamental essence?” There’s an old paper I was very influenced by and I think it came out of… It was one of the original papers in network theory, back from Mark Granovetter called The Strength of Weak Ties. Again, just having this idea that weak things, weak interactions, weakly coupled systems can give you really powerful effects, and for some reason, I quite like that way of thinking, that not trying to do so much can actually lead to making more progress. We see the same thing in consciousness too, actually.

0:22:03.2 AS: This gets back to just where we started, that if you try to solve the hard problem head on, explaining why consciousness is part of the universe, maybe you want to build a system that is artificially conscious, going for strong artificial consciousness. It’s unclear you’re gonna make much progress because we just don’t know how consciousness fits into our understanding of the universe in general.

0:22:28.4 SC: Yeah.

0:22:29.0 AS: But taking a weak approach and just saying, “Okay, look, consciousness has these properties and let’s try to understand them individually one by one,” you get somewhere, so I do quite like that. As to the other point about whether there are legitimate situations in which to invoke something like strong emergence, I think I don’t… To be honest, I don’t know enough about the relevant domains of physics to know whether that is justifiable, ’cause there’s also another form of emergence which often gets overlooked, which is nominal emergence, which is…

0:23:04.0 SC: I don’t think I even know that one.

0:23:04.3 AS: A form of emergence where you just have a property that can apply to a whole, that just by definition cannot apply to the parts. So the example, I think, that Mark Bedau uses is, a circle is nominally emergent from the set of points that make it up. There’s nothing mysterious going on here, it’s just that a circle is not the kind of property that can ever be attributed to a point, a single point.

0:23:31.3 SC: Yeah.

0:23:31.7 AS: It’s only something that a collection of things can have. So my intuition is that if you combine that with a sufficiently rich version of weak emergence, then you get everything you need. And the key thing for me about this weak emergence picture is the causal closure of the physical world, that you want things to run through all the way down. Of course, there are concepts that we will use to describe things at higher levels of description, ontologies that appear at more abstract levels of organisation, which can be absolutely essential for our understanding of a system, and they are real too. Daniel Dennett talks about real patterns. The fact that something is described at a higher level doesn’t mean that it doesn’t exist. It just means that a higher level of description can be very, very useful for our… Kind of essential for our understanding of how a system works. But it doesn’t mean there’s some disruption to this sort of picture of physical causality that ultimately runs right down to whatever reality really is, which again, is in your wheelhouse, not mine, fortunately.

0:24:45.6 SC: But we did… I’ll also plug the appearance of Dan Dennett on the podcast where we center the whole conversation on this idea of real patterns and how large-scale things can have an identity of their own, even if they’re just dependent on the small-scale things. And as you imply, there are those who take the opposite tack, right? That you need to sort of add more ontological categories at each level, and consciousness is gonna be something that only exists at this higher level. But the challenge that those people would give to you and me is, again, de-magic-ify this word that you’re using, emergence. And so, if you think that consciousness or experience or whatever is not a separate category, if it just comes out of the motion of atoms and neurons, etcetera, at some level, how exactly does that happen? So I was thrilled to see that you’ve actually written a paper about at least beginning, maybe we can say, to understand how exactly that happens, when you can talk about a complex system with many moving parts in terms of a higher level emergent description. So why don’t you tell us the punchline to that paper?

0:25:57.3 AS: I’ll be happy to. It is, as you say, it’s very much a starting point, and there’s actually a few different approaches now, and I think this is for me a promising sign, because I don’t know which approach is going to be right, and having a diversity of different ideas out there is a healthy situation to be in. And the paper I think you’re referring to is a very recent one with my colleague, Lionel Barnett…

0:26:22.3 SC: Yes.

0:26:23.0 AS: Who’s a proper mathematician in our collaboration, and he sort of takes vague ideas and makes beautiful, beautiful concepts from them. But it actually began for me about 10 years ago, it’s… The first way I thought about how to operationalise this idea of emergence was really taking Mark Bedau’s idea about weak emergence and thinking, how can we build some simple measures that make that work in practice? And so, you unpack it one stage further. His initial proposition was that for a weakly emergent property… You have to run the microscopic level exhaustively to extract the macroscopic property. You have to simulate it entirely, there’s no shortcut. Conceptually, he described a weakly emergent property. Let’s think about the flock of birds again, just to give it some… Just to guide our intuitions. We have a flock of birds wheeling around the pier here; to call it weakly emergent is to say that the flock is simultaneously both dependent on the birds that make it up, and it’s not that you have a flock floating somewhere apart from the birds.

[chuckle]

0:27:40.5 AS: The flock is made of the birds.

0:27:41.5 SC: Yeah.

0:27:42.8 AS: Philosophically we would call it supervenient on the birds. But the flock seems to have an autonomy. It has a “this is a life of its own” thing. The behavior of the flock seems to be more than the sum of the behavior of the individual birds, in some interesting way that leads us to say, “Oh, it’s a flock and it’s not just birds flying randomly all over the place, or birds flying in some sort of super fighter jet formation where they’re very rigid and there’s no interesting dynamics going on.” So my challenge then was, how do we measure that? Let’s say we have a simulated bird flock. What’s a way of applying a measure so that we get a high number when it looks like a flock and a low number when it looks like the birds are just randomly doing their thing, or flying in a rigid formation? And the approach then that I took was to use a method that I’ve been using in neuroscience for a bit, called Granger Causality. And this is… Speaking of terrible names, this is another terrible [laughter] name ’cause Granger Causality has nothing to do with causality, it’s to do with prediction.

0:28:49.7 AS: And to unpack it very simply, it basically provides a way of measuring information flow between two variables. Let’s say you have two variables that change over time. We’re used to thinking about whether they are correlated or not, do they share information? And correlation is a bi-directional notion. If A is correlated with B then B is correlated with A to the same extent. It’s symmetric. In information theory, as you know, mutual information would be the equivalent. They share information, but imagine if you could put an arrow on it and say that, “A is conveying information to B, but B is not conveying information to A, or is conveying less information.” And there are ways to measure that, statistically. And what Clive Granger, who developed this concept of Granger Causality did, was basically say, “You can say that A Granger causes B,” and I have to get this right, ’cause it always messes me up when I’m trying to explain.

[chuckle]

0:29:58.6 AS: You say that, “A Granger causes B, if A contains information that helps you predict the future of B that’s not already in the past of B.” You have a time asymmetry going on here now, ’cause causality is often about time. It’s just intrinsically caught up with our notions of causality. So basically, A is giving you information that helps you predict how B unfolds, that’s not already in B. And now you can see that this is not necessarily symmetric.
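[Editor’s note: the definition is compact enough to compute. Here is a minimal sketch of Granger causality in its standard autoregressive form; the lag order, toy data, and function name are illustrative assumptions, not code from Seth’s group. The idea is simply to compare how well B’s future is predicted with and without A’s past.]

```python
import numpy as np

def granger_causality(a, b, p=5):
    """Granger causality from a to b: compare predicting b's future from
    b's own past alone vs. b's past plus a's past (order-p linear models)."""
    n = len(b)
    target = b[p:]
    # lagged design matrices: b's past only, then b's past plus a's past
    past_b = np.column_stack([b[p - k:n - k] for k in range(1, p + 1)])
    past_ab = np.column_stack([past_b] +
                              [a[p - k:n - k] for k in range(1, p + 1)])
    res_r = target - past_b @ np.linalg.lstsq(past_b, target, rcond=None)[0]
    res_f = target - past_ab @ np.linalg.lstsq(past_ab, target, rcond=None)[0]
    # GC = log ratio of residual variances: how much a's past helps
    return np.log(res_r.var() / res_f.var())

# toy system in which a drives b with a one-step lag
rng = np.random.default_rng(1)
a = rng.normal(size=2000)
b = np.zeros(2000)
for t in range(1, 2000):
    b[t] = 0.8 * a[t - 1] + 0.3 * b[t - 1] + 0.5 * rng.normal()

print(granger_causality(a, b))  # large: a Granger-causes b
print(granger_causality(b, a))  # near zero: b does not Granger-cause a
```

Run on the toy system, the a-to-b direction comes out large and the b-to-a direction near zero, which is exactly the asymmetry being described.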

0:30:31.0 SC: So information is flowing from A to B in that sense?

0:30:35.2 AS: Exactly. So it’s a way of actually measuring, given two time series that fluctuate over time, this could be trajectories of birds in a flock, or the trajectories of prices in the stock market, or the electrical voltages of neurons in the brain. It could be anything described in the form of variables that change over time, time series. You can ask the question, does one Granger cause the other? Which is equivalent to asking, does one transfer information to the other? The equivalent of Granger Causality in information theory is called Transfer Entropy. It’s the idea that it’s now not shared information, but A is giving information to B ’cause it’s helping predict its future. So that…

0:31:20.8 SC: When you say the equivalent, are they mathematically the same or are these two labels that mean slightly different things in different contexts, Transfer Entropy and Granger Causality?

0:31:31.8 AS: I’m very glad you asked that question, ’cause it’s one of those beautiful examples where conceptually they’re very, very similar, but they came out of different mathematical contexts. So Granger Causality came out of the statistical framework of autoregressive modelling, which is just a… It’s a way of saying you model variables based on weighted sums of their past. It’s just one particular statistical framework. Transfer Entropy, same concept, but the mathematical infrastructure for it is information theory. And I always thought that they were very closely related and that somebody would have shown that they were identical under certain conditions, but my colleagues at Sussex… This was now 11 years, 12 years ago. So Lionel Barnett and my other colleague, Adam Barrett, who was a postdoc at the time, basically realized that nobody had shown that, and showed it. And it was one of those great very quick papers that we did.

0:32:33.3 SC: Yeah.

0:32:33.9 AS: They did it, really.

[chuckle]

0:32:34.3 AS: And we showed that if variables are Gaussian, which is to say if they’re described by normal distributions, bell curve distributions, an assumption you might often make, then in fact, Granger Causality and Transfer Entropy are exactly equivalent, where one is one half the other. Which is really nice, because it actually…

0:32:54.4 SC: Yeah.

0:32:54.8 AS: It’s not a trivial thing because you connect now two different domains of mathematics in a way, you connect this whole framework of autoregressive modeling, which is very convenient to work with. It’s very easy to build models of data that way. But you now can translate it directly into information theory and talk about bits per second of information flow and have a measure of information flow in terms of bits that you don’t get the other way. So, there is a very deep relationship between the two concepts.
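[Editor’s note: for readers who want the precise statement of the result mentioned here (Barnett, Barrett and Seth, 2009), it can be written compactly. The notation below is ours, chosen to match the discussion, not the paper’s:]

```latex
% Granger causality as a log-ratio of prediction-error variances
\mathcal{F}_{A \to B}
  \;=\; \ln \frac{\operatorname{var}\!\left(B_t \mid B_{<t}\right)}
               {\operatorname{var}\!\left(B_t \mid B_{<t},\, A_{<t}\right)}
  \;=\; 2\,\mathcal{T}_{A \to B}
  \qquad \text{for jointly Gaussian } A, B,
```

where \(\mathcal{T}_{A \to B}\) is the transfer entropy, defined just below. The factor of 2 is the “one is one half the other” mentioned above, and it is what lets a fitted autoregressive model be read off in information-theoretic units.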

0:33:29.4 SC: So, let’s just give ourselves an independent definition of Transfer Entropy. Is it something like, how many bits of information are flowing from one series of events to this other series?

0:33:43.5 AS: Sort of, I think that… Yes. I mean, that would be the way of describing, of interpreting, what the transfer entropy metric means. To say what transfer entropy is, in information theory terms, if I can get this right, it’s, again, you’ve got your two variables, and it’s the degree to which the future of, let’s say, B, is conditionally dependent on the past of another variable, A, conditioned on its own past. So, it’s always this thing about…

0:34:17.7 SC: So, let me…

0:34:18.0 AS: What is another variable bringing to the table, in terms of predictive ability or additional information?
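[Editor’s note: in symbols, the definition being given here is a conditional mutual information (notation ours):]

```latex
\mathcal{T}_{A \to B}
  \;=\; I\!\left(B_t \,;\, A_{<t} \,\middle|\, B_{<t}\right)
  \;=\; H\!\left(B_t \mid B_{<t}\right) \;-\; H\!\left(B_t \mid B_{<t},\, A_{<t}\right),
```

read as: the number of extra bits (with base-2 logarithms) about B’s next value that A’s past supplies, over and above what B’s own past already gives.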

0:34:29.0 SC: So, you have B all by itself and you can say, well, from what B is doing, like if B is a football flying through the air, obeying Newton’s Laws, you know? And you can predict what it’s going to do next. But then, there’s some other variable, that maybe if you knew that, also, would or would not teach you even more than from what you knew about the past of the football.

0:34:51.2 AS: Yes. Although, you just raised one of the important constraints in practice, where these things make sense, which is that they only really can be used in stochastic systems. There has to be some…

0:35:00.2 SC: Oh, okay.

0:35:00.3 AS: At least apparent randomness to what’s going on. If it’s deterministic, you don’t get any… You already know what’s gonna happen, so there’s no way to compare how much more you know by introducing another variable. So, these things…

0:35:15.0 SC: Well…

0:35:15.6 AS: Have certain domains of application, at least in the way we would use them: they have to be applied to stochastic systems that are stationary, and so on, and so on.

0:35:24.0 SC: I mean, would it count to have apparent stochasticity, because the fundamental laws are perfectly deterministic, but we don’t know exactly the initial conditions, like we have in statistical mechanics?

0:35:34.5 AS: Yes, yes.

0:35:34.6 SC: Okay.

0:35:35.2 AS: So long as it’s stochastic, with respect to the tools you’re using to model the system, then it’s okay.

0:35:42.5 SC: Okay. But the…

0:35:44.9 AS: Then these things work fine, but into…

0:35:47.9 SC: So, with that…

0:35:48.0 AS: Your example is… Yeah, it makes sense. You can just watch… Basically, you watch one thing going on in the world, you can imagine… Let’s go back to a neuron. A neuron is firing, and you can try to figure out what will… You could try to predict on the basis of the past of that neuron firing, what its future firing is going to be like. And then, you can just ask the question, okay, I can do a… Maybe I’m 70% good at predicting the future firing of this neuron, now I look at another neuron, can I do better by bringing in knowledge from this… What this other neuron is doing? And if I can, in some… In this statistical way, then yes, there’s information flow between the two. There’s Granger Causality between the two. But we’ve gone quite far from emergence here.

[chuckle]

0:36:32.9 SC: Let’s bring it back.

[chuckle]

0:36:34.5 AS: But the next step is actually pretty simple, which is instead of thinking of two neurons, or two birds flying around, or two stock prices, we think of two levels of description. So, you’ve got your macroscopic level of description, and your microscopic level of description, you’ve got your flock of birds, and you’ve got your individual birds that make it up. And now, you can apply some of these same concepts to characterizing the relation between the flock and the birds, between the macroscopic and the microscopic. And now, there are many options for how you might use these concepts to come up with a range of different measures of emergence-like things.

0:37:19.1 AS: So, for instance, you could say, and this was my original approach 10 years ago, I could say, okay, does the flock, as a whole, predict its own future behavior better than I can do from just the birds alone? Is there some self-causality, self-information for the flock, conditioned on the parts that make it up? And if so, then I could say, well, that’s a way of operationalizing this idea that the flock has a life of its own. It’s driving its own behavior in a way that goes beyond what I can say by looking at the parts. Now, this isn’t to say there’s something spooky going on, because to make that claim I have to have imperfect knowledge of the system. It’s only just a way of saying, given imperfect knowledge of the system, some things will look like they’re flocking, other things won’t…

0:38:15.7 SC: Yeah.

0:38:16.1 AS: And can I distinguish these cases? And it turns out, yes, I can, by using this method. So, that’s one approach. In another approach, and this is what, with Lionel Barnett, we were working on recently, it’s a slightly different thing. Imagine that you don’t know that there’s a flock. This is another question that comes up in emergence. We often ground it with these discussions of what’s intuitive; a bunch of birds that flock is intuitive, we know. We can see there’s something going on there that’s interesting. Gliders in the Game of Life. They leap out at you, which is why they’re interesting. But maybe emergent properties don’t always leap out to us as observers of them. And if I look at a whole bunch of neurons flickering under some calcium imaging thing, maybe they’re all synchronizing together. That’s pretty obvious.

0:39:04.1 AS: But if they’re not, if they’re just flashing on and off, it’s very hard for me, as an external observer, to know whether there’s anything interestingly weakly emergent in their global patterns. And that’s the problem of identification of an emergent property. So, what, with Lionel, we were interested in, was can we develop methods that allow us to, in a data-driven way, identify candidate weakly emergent macroscopic properties? In other words, these would be coarse-grainings, higher-level abstractions of the system that have this kind of property. And we did it in a slightly different way. So, for Lionel, the key idea was that a candidate weakly emergent variable must be what we call dynamically independent from its microscopic underpinnings, which, again, just means that knowing what’s going on at the microscopic level does not help you predict what’s going on at the macroscopic level.

0:40:08.4 AS: So that’s why we use that word, “Dynamical Independence.” Doesn’t necessarily mean in this case that the macroscopic level has to predict itself in any interesting way, it just… However much you can do that, it just has to be independent of what’s going on in the [0:40:25.6] ____. Which is why I say there’s this whole variety of different options now, how to think about what an emergent property might be. And what we’re doing in my group at the moment is trying to flesh out many of these different directions and figure out how they relate. There’s not gonna be one single answer.
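[Editor’s note: here is a minimal sketch of the quantity at stake, reusing the same linear, Granger-style machinery as the earlier sketch. To be clear about assumptions: this illustrates the idea of dynamical independence, not the estimator in the paper by Barnett and Seth, which works in a more careful state-space, likelihood-based setting. A coarse-graining counts as dynamically independent when the micro past adds nothing to the prediction of the macro future beyond the macro’s own past.]

```python
import numpy as np

def prediction_gain(Y, X, p=5):
    """How much does the micro past X help predict the macro future Y,
    beyond Y's own past? Near-zero gain = dynamical independence
    (in this crude linear, fixed-lag-order sense)."""
    n = len(Y)
    target = Y[p:]
    own = np.column_stack([Y[p - k:n - k] for k in range(1, p + 1)])
    both = np.column_stack([own] + [X[p - k:n - k] for k in range(1, p + 1)])
    res_own = target - own @ np.linalg.lstsq(own, target, rcond=None)[0]
    res_both = target - both @ np.linalg.lstsq(both, target, rcond=None)[0]
    return np.log(res_own.var() / res_both.var())

# toy micro system: 10 noisy variables, each driven by the shared mean
rng = np.random.default_rng(2)
X = np.zeros((5000, 10))
for t in range(1, 5000):
    X[t] = 0.9 * X[t - 1].mean() + 0.3 * rng.normal(size=10)

Y = X.mean(axis=1)                  # candidate coarse-graining: the average
print(prediction_gain(Y, X))        # near zero: dynamically independent
print(prediction_gain(X[:, 0], X))  # a single micro variable: micro past helps
```

In this toy system the micro dynamics are driven only by their own mean, so the average is a dynamically independent coarse-graining, while a single micro variable is not.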

0:40:43.7 SC: Well, I think… I don’t wanna let this go by too quickly, because what you just said is not only very beautiful, but philosophically really, really important, I think. When I have these arguments with people who would like to let a richer ontology into their universe, they wanna, like we said before, have new fundamental concepts at every level. And in practice, if I say, it’s not exactly applicable to your definition, but if I say a chair or a table is emerging from a bunch of atoms, well, I’m helping myself to the fact that I see tables and chairs and I know what they are already, long before I ever knew what atoms were. And I think that a lot of the people who want these richer ontologies are saying, “There’s no way you’ll ever find tables and chairs if you just start with atoms.” And so your response, to gussy it up a little bit, is, “Yes, I can, and here’s how to do it. Here are the equations that say under what conditions we find these emergent structures.”

0:41:46.1 AS: Yeah, that’s right. That’s the intuition, that’s the motivation anyway, that’s… I don’t know how far you get that way, but I think… I always wanna push back against the temptation, as you say, to just bring in new things because it seems like you can’t get there without doing that. This is the whole… It’s the same sort of intuition that I think drives the “hard problem,” this idea that you’ll never get to consciousness just by thinking about what neurons do. But there’s a lot of things neurons can do that we’ve not yet learned to think about, and we just need to exhaust the possibilities of thinking about what very complicated systems of billions of neurons can, in fact, do before reaching the conclusion that consciousness is not among them. Now this might seem philosophically naive because you might be able to say, “Look, however complicated it is, it still will not get there.”

0:42:39.9 AS: But I’m just inclined to bracket that and say I might be being philosophically naive here, I still think we’ve not exhausted the possibilities of the kinds of things that physical systems can do, given sufficiently rich interpretation of them and let’s just see how far we get. And be guided by the ultimate target, whether it’s consciousness or emergence, or… Here’s the thing, I see emergence in this sense as a way to enrich our descriptions of physical systems that might have relevance to consciousness. I’m not saying that we will demonstrate that consciousness is an emergent property or come up with some equivalence, but it allows us to characterise the behaviour of complex systems in ways that might help us get closer to the explanatory target of consciousness, like there is a sense in which conscious experiences are unified and global and seem to be more than the sum of the things that make them up.

0:43:40.8 AS: So it might be very useful to have something in our tool box that allows us to assess these claims in general, and then see how they stand up when we apply them to, let’s say, the brain dynamics when people lose consciousness, are under anesthesia or fall asleep or other such states. Do weakly emergent properties dissolve in those cases or not? It’s an empirical question.

0:44:03.2 SC: And if you think that you can answer this question using equations… That’s always the best part, there are equations here, this is not just some words we’re throwing around. So you can say, I have a complex system made of many little pieces, here are the chunks I need to divide it up into to get emergent behaviour. The next hard question, hard in the old-fashioned sense of hard, not Chalmers’ sense, is, is this generic, this kind of behaviour? Is this robust? Like when I have a whole bunch of little things, will it inevitably be the case that I can chunk it up into some emergent big things? Or are there multiple different incompatible ways of chunking it up into big things? Or is the generic situation that there’s no way at all, that emergence is a special, delicate flower of some sort?

0:44:52.4 AS: Yeah, these are all great questions, and I think you probably have better answers than I do. I don’t know. I think that it’s appealing to me… When we think about this approach of discovery of candidate weakly emergent properties, one other appealing thing is that we don’t have to make assumptions, or too many assumptions, about that. We don’t have to assume there’s a single level at which emergence plays out. You can, in fact, look for emergent properties at multiple different levels of abstraction, what we would call multiple different coarse grainings, and in that sense, figure out an emergence portrait for a system.

0:45:35.1 AS: Do all systems have emergence portraits? Well, yes, but some might be trivial, some might be just like, well, there’s really nothing interesting happening at any given scale. And I could construct, just for instance, a system of totally random particles just moving around, not interacting with each other at all. For me, I would be happy, or I’d be reassured, for any candidate measure of emergence to come out basically flat, however you looked at that system. Because that’s not a system where I want to see, expect to see, emergence. I would then have to struggle with what I mean by emergence if it can happen in a system where nothing is interacting with anything.

0:46:21.6 SC: And I think… I’ll make a guess, I don’t think I know the answer to the question that I posed myself, but my guess is, personally, that emergence is a rare kind of thing. In this space of all systems we can imagine, the existence of these higher level descriptions that are as good at predicting what will happen next as you can be without extra microscopic information, is probably very unlikely if you just picked randomly how to coarse grain in some sense. And in fact, I think that it…

0:46:52.9 AS: Yes, I think… Sorry.

0:46:54.3 SC: It opens up maybe even more things to explore, like the nestedness of these descriptions, right? So not only do we imagine that atoms emerge into a higher level description in terms of cells in biological organisms, and cells emerge into a higher level of organisms, and organisms into societies or whatever, but probably there is no universe in which something like atoms emerge into something like cells and something like organisms without it being nested, right? Without the organisms themselves emerging from the cells in some way. And these are all speculations, conjectures. Let’s call it a conjecture, that sounds more impressive, but that’s the kind of question we can now start investigating.

0:47:41.7 AS: Yeah, that sounds appropriate. I do think that the first thing you said, though, is probably… It’s still probably a conjecture, but I think it’s quite easy, at least in some systems, to check and validate. So it’s certainly the case that for many sorts of systems you might write down, arbitrary coarse grainings will not have this property of emergence, will not have this property of dynamical independence. So it will be rare for many example classes of system, which suggests that it’s rare in general, but then the real world is not…

[chuckle]

0:48:19.2 AS: The real world is complicated. So quite how rare these things are in the world as it is, is much harder to make a strong statement about. And the nestedness question is very interesting as well, very hard to get a quantitative grasp on that. You have to do something that gets a little bit recursive and that gets complicated.

0:48:37.0 SC: That’s why we have graduate students, right? That the young people are energetic enough to address these questions, but okay. So now we have some framework on the ground, we’ll link to the paper if people wanna look it up there. I’ll warn you ahead of time listeners, there are a lot of equations in the paper, but that’s good. It’s healthy for you. I was surprised to learn there’s a whole book that Lionel wrote about Transfer Entropy that people can try to learn the basics about, but let’s go back then to our initial motivation for this, which was consciousness. So have we learned anything from this investigation about the claim that consciousness is a kind of emergent phenomenon?

0:49:16.8 AS: I would say not yet, besides just the conceptual clarifications, besides deflating a little bit this association that people intuitively make between consciousness and strong emergence. Just by showing that there are other ways to think about emergence, I think, is a contribution. Another way that contribution plays out is that you can also think about downward causality or top-down causality in this framework in a metaphysically innocent way. And you don’t have to think about competing causes, where you have actual top-down causes that compete with causes at the micro-level, and then you have all these problems of which cause dominates, and so on. No, I can simply say from the perspective of an observer, are there occasions where the macroscopic variable, whatever it is, helps me predict the evolution of the microscopic components better than knowing what the microscopic components are doing?

0:50:23.7 AS: And again, this is not introducing anything that challenges a physicalist picture where causes just run all the way down, but there might be systems where that’s the case, and there might be systems where that’s not the case. Back to the original bird flocking thing: it turns out that, certainly for the measure I was using 10 years ago, when you have a bird flock you do observe information flow from the flock to the individual birds in a way that you don’t when they’re all flying randomly around. So just having these things in your toolkit helps us resist some of the otherwise unfortunate tendencies to think of consciousness as necessarily something magic.
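[Editor’s note: tying the earlier sketches together, one can run a toy version of the test described here, asking whether a macro variable, say the flock’s mean velocity, Granger-causes an individual bird’s motion. This reuses the hypothetical boids_step and granger_causality functions from the sketches above and is only an illustration; the published flocking analysis is more carefully constructed. Note the injected noise and damping: as discussed earlier, these measures need the system to look stochastic and stationary.]

```python
# assumes boids_step() and granger_causality() from the sketches above
import numpy as np

rng = np.random.default_rng(4)
pos = rng.uniform(0, 5, size=(50, 2))
vel = rng.normal(0, 0.1, size=(50, 2))

bird_vx, flock_vx = [], []
for _ in range(2000):
    pos, vel = boids_step(pos, vel)
    # damping plus jitter keeps the velocities stochastic and stationary
    vel = 0.99 * vel + rng.normal(0, 0.01, size=vel.shape)
    bird_vx.append(vel[0, 0])          # micro: one bird's x-velocity
    flock_vx.append(vel[:, 0].mean())  # macro: the flock's mean x-velocity

# 'downward' information flow, macro to micro, in the metaphysically
# innocent, purely statistical sense described above
print(granger_causality(np.array(flock_vx), np.array(bird_vx)))
```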

0:51:06.2 SC: Yeah.

0:51:07.3 AS: The work to be done is how much purchase empirically and how much explanatory insight these concepts offer in practice when we flesh them out. And that’s something that is a story yet to be told. There are a few groups doing this; people at the University of Wisconsin in Madison, in Giulio Tononi’s group, have other sorts of measures of emergence. But there’s a lot of tricks in this, in how you actually apply these in practice and what assumptions you have to make and all the usual stuff, which doesn’t make it easy. But my hope would be that if we could, as a first step, show that weakly emergent variables can be identified in conscious states that are not there in unconscious states, they can maybe be used to predict levels of consciousness in people, and maybe with better accuracy and fidelity than other measures of global brain dynamics. I think that would be a start. I certainly don’t think it’s suddenly going to be the solution to all our questions about consciousness, not at all. It’s just another way of building explanatory bridges that might carry some of the weight of this apparent mystery.

0:52:29.7 SC: But one of the interesting things about your proposal, based on the Transfer Entropy to define dynamical independence and therefore emergence, is that it talks specifically about the internal, the self-dynamics of the system. From what the system has done, what will it do next? You could imagine a different approach based on the fact that one of the features of the flock of starlings is that I see it as a flock, right? One could imagine basing a theory of emergence, or a theory of coarse-grainings and macroscopic states, on the fact that I only have observational access to certain features, right? When I see the cream and the coffee mixing together, I see the gross features of where the cream is and where the coffee is, not the individual atoms, and therefore, I talk about cream and coffee as higher-level emergent phenomena in some sense. So is there… I don’t even know what my question here is. Is that way of thinking about emergence in terms of observational capabilities or access the same as, related to, or independent of your internal dynamics way of thinking about it?

0:53:45.3 AS: I think it’s related, although… I think we’re both speculating about this now. I think it might be related in the sense that if we have a data-driven means of identifying emergent properties, they stand as hypotheses for the sorts of things that might observationally stand out to us. But maybe they won’t. And part of the reason I’m interested in this is that I don’t wanna make that assumption… And I want to be open to the possibility that there will be weakly emergent things that do not leap out to us. There may also be the converse, there may be things that leap out to us that are not in any interesting sense emergent. They may be what we called before nominally emergent. They’re just properties that inhere in a whole that cannot inhere in the parts, but not in any particularly interesting sense.

0:54:40.8 SC: Well, one of the reasons why I asked is because, number one, I had been wondering about that question independently, but number two, when it comes to consciousness, one of the facts, features, I should say, of consciousness that you yourself have emphasized, is how the brain constructs a picture of the world based on highly limited data, right? How we don’t just look at the world in terms of pixels and then build something up in a systematic way; we come with kind of templates of some sort. Maybe I should let you say this in your own words, ’cause you know what you’re talking about. But explain a little bit about how that works?

0:55:17.4 AS: That’s indeed… That’s actually the line of work that I’ve been mainly following for the last few years as well, and to some extent it’s gone along relatively independently of our work on emergence. And so one of the interesting prospects is how these things will interact, just as you’re raising with your question. And I don’t know yet is the answer to that question. But they may. The idea of how the brain forms its perceptions based on sparse sensory data… For me, that’s grounded in a different way of thinking about what brains do, which is in terms of brains being prediction machines of one sort or another. This is again an extremely old idea that goes back in philosophy. You can trace it back to Plato, to Kant, to wherever you want to stop on the way: that we don’t perceive reality directly. We don’t have direct access to reality as it is. Everything we see is some sort of interpretation of something that is ultimately unknowable.

0:56:22.4 SC: Got it. Yeah.

0:56:23.0 AS: And in Psychology, there’s this tradition, going back to people like the German polymath Hermann von Helmholtz, of thinking about the brain as an inference engine, and perception as the result of a process of unconscious inference. And the idea here is really quite straightforward: the sensory signals that bombard our sensory surfaces, the light waves that hit our retinas, the pressure waves that hit the hair cells in our ears, they don’t come with labels on, saying what they’re from. They don’t come with labels saying which part of the body they’re hitting. They just trigger electrical signals which flow into the brain. And in the brain, it’s dark, it’s quiet, there’s no sound, there’s no light. The brain has to make sense of these noisy and ambiguous sensory signals. And the idea about how it does this is that it’s doing some kind of Bayesian inference on the causes of these sensory signals.

0:57:20.2 AS: The brain is always trying to figure out what are the most likely causes of the continual barrage of sensory signals that it swims in. And the content of our perceptual experience at any one time is, the brain’s best guess. It’s the result of this process of inference. It’s the posterior. It’s combining sensory data with some prior expectation or belief about the way the world is. And these prior beliefs can come from evolution, from development, or from your experience a few minutes ago. All of these prior expectations provide context for interpreting ambiguous sensory signals. And it’s the interpretation that… That is what we perceive. I think that’s the stronger claim.
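[Editor’s note: the “best guess” here is just the posterior in Bayes’ theorem, which is worth writing out since the whole picture hangs on it:]

```latex
\underbrace{P(\text{causes} \mid \text{sensory signals})}_{\text{the percept: the brain's best guess}}
\;\propto\;
\underbrace{P(\text{sensory signals} \mid \text{causes})}_{\text{likelihood}}
\;\times\;
\underbrace{P(\text{causes})}_{\text{prior expectation}}
```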

0:58:06.2 AS: That what we perceive is not some readout of sensory signals, that we just extract features of increasing complexity as these sensory signals stream into the brain, but the sensory signals are really there just to update and calibrate our top-down perceptual predictions. And it’s the collective content of these top-down perceptual predictions that is what we perceive. There’s just another slight extension to this, which I think I ought to say for the whole thing to make sense, which is: it’s one thing to say that the brain is doing some Bayesian inference on the causes of sensory signals, that it’s somehow doing this inference. How is it doing it? Again, there could be many ways in which brains could accomplish something like this. One of the most popular proposals is that it’s engaged in predictive processing, or what is sometimes called prediction error minimization. And this is the idea that the brain always has some kind of best guess about the causes of its sensorium, and that it’s continually updating that by using sensory signals as prediction errors. So the stuff that’s flowing into the brain from the outside world is really just the error, the difference between what the brain expects and what it gets at every level of processing within the brain. This is kind of counter-intuitive.

0:59:30.1 AS: We’re used to thinking in terms of perception as reading out the sensory signals. But I’ve come to think of it now as, “No, the sensory signals just calibrate, and what we actually perceive is the stuff going in the other direction, the top-down predictions that are being reined in by the sensory prediction errors from the world.” And that process approximates Bayesian inference. If you have a system that’s implementing this prediction error minimization, then with some other assumptions, you’ll find that it does actually approximate Bayesian inference. So, this is the way, or this is at least one proposal about how the so-called Bayesian brain works.
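
[And here is the prediction-error-minimization reading of the same computation, again as an invented toy under Gaussian assumptions rather than a model from the episode: the running estimate is nudged by precision-weighted prediction errors until it settles on approximately the posterior computed in the sketch above.]

```python
mu = 5.0                           # current best guess, initialized at the prior mean
prior_mean, prior_var = 5.0, 1.0   # top-down expectation
sense, sense_var = 8.0, 4.0        # bottom-up sensory sample
lr = 0.1                           # step size for each update

for _ in range(200):
    err_sense = (sense - mu) / sense_var       # precision-weighted sensory prediction error
    err_prior = (prior_mean - mu) / prior_var  # error against the prior expectation
    mu += lr * (err_sense + err_prior)         # gradient ascent on the log posterior

print(round(mu, 3))  # ~5.6, matching the exact Gaussian posterior mean above
```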

1:00:15.1 SC: So, let me dig into that a little bit, because on the one hand, I love it, but on the other hand, I don’t really understand it. So, the idea that we have this… The brain makes a prediction for what it’s going to see, and I can see that in an MPEG file, right? An encoded video file on the internet; they saved a lot of storage capacity by figuring out that all you have to encode is how the image changes, not what the image is at every moment, right? So, the brain is doing something like that. But clearly, there are moments when I look at something completely new, when a movie starts, and I see something, and there has to be that first flash of recognition. Does the brain shuffle through a bunch of possibilities, or do we even know what’s happening in those moments?
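
[The delta-coding idea Sean gestures at, in miniature: store one full frame plus frame-to-frame differences, and reconstruct by replaying the differences on top of a running prediction. A toy invented for illustration, not actual MPEG internals.]

```python
frames = [[10, 10, 10], [10, 11, 10], [10, 12, 10]]  # a tiny three-pixel "video"

# Encode: the first frame in full, then only the changes.
deltas = [frames[0]] + [
    [cur - prev for cur, prev in zip(frames[i], frames[i - 1])]
    for i in range(1, len(frames))
]
# deltas == [[10, 10, 10], [0, 1, 0], [0, 1, 0]] -- mostly zeros, which compress well.

# Decode: add each difference onto the running best guess,
# much as prediction errors update the brain's current guess.
decoded = [deltas[0]]
for d in deltas[1:]:
    decoded.append([p + e for p, e in zip(decoded[-1], d)])
assert decoded == frames
```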

1:01:03.6 AS: Maybe, it’s… Yes. There are some interesting challenges. So, there’s a challenge about how you can see something new for the first time if you live in a world of the already expected.

1:01:13.1 SC: Yeah.

1:01:15.1 AS: But I think there are ways to address these challenges, and the first way is that perception in this view is something that’s very deeply hierarchical: there are high-level perceptions about what’s going on. I see a movie star, or I see, whatever, a ship out in the sea, from the beach here. Those high-level perceptual contents are built up out of much lower-level things. And classic vision science tells us that. Parts of our visual system deal with detecting variations in brightness, and then a bit deeper in, you get lines, and then line segments, and shapes, and all the way up to faces, and people, and objects, and places. And so, even if you see something that you haven’t seen before, like maybe a movie star that you weren’t expecting to see, it’s still going to share a lot of the same lower-level features with other things that your perceptual system is very used to making best guesses about. And so you will… So you still do live in a world of the…

1:02:17.7 SC: Right.

1:02:19.2 AS: Mostly already expected, and it’s only at the last bit that you have to make a little leap, and see something new. And sometimes that might even be accompanied by this psychological recognition that I’m seeing something new, some sort of surprise signal, too. So, I think it does work, and the brain also learns… So, one of the other components of this way of thinking is that the brain encodes something that we’d want to call a generative model. So, it encodes a model of the causes of sensory signals. This is how… This is what supplies the predictions that then get compared against the incoming sensory signals to yield prediction errors.

1:03:02.1 AS: So, in a sense, everything that we perceive is constrained, or everything that we can perceive is constrained, by the generative models that are encoded in our brains. But these generative models can change and develop over time, and we can therefore learn to perceive new things through experience. And I think we’re all familiar with this in some ways: when you start drinking red wine, they all taste the same, but then after a while, you learn to make discriminations, and you have perceptually different experiences. Your generative model has developed to be able to make distinct predictions for distinct kinds of sensory signals, whereas previously it couldn’t.

1:03:46.1 SC: It reminds me of stuff I read a while ago, when I was writing my first trade book about the arrow of time, about how memory works in the brain versus how imagination and prediction work in the brain. And fMRI studies saying that they use very, very similar parts of the brain, maybe the same parts of the brain, in some sense, which led to a hypothesis… Which, I’m not sure if it’s continued to be popular or not, that what we store in our memories is not a videotape of sets of images that we saw, but more like a screenplay. And like, there’s a little puppet theater in the brain that we could feed in the script, and it would put on a show every time we wanted to remember something. So, we had some shapes, some sounds, some pre-existing concepts we could put into play, and then the data we needed to bring those to life was much more compressed than if we literally just had a whole bunch of images.

1:04:46.8 AS: Yeah. I think there’s something right about that. There’s certainly something very wrong about the idea of memory being a videotape.

1:04:53.1 SC: Right.

1:04:55.9 AS: Or, in general, being some sort of neurally implemented file storage system. That, I think, is an example of taking the computer metaphor of the brain too far. Computers are useful metaphors, up to a point, but, I think, over-extended, they can be radically misleading. Memory definitely doesn’t work like that in the brain, and there are so many empirical examples of that, not least that we tend to have pretty bad memories.

1:05:20.9 SC: Right.

1:05:21.6 AS: And the more often you remember something, the less accurate that memory becomes. Every act of remembering is an active regeneration. As you put it, it’s people in the screenplay re-enacting the scene, or something like that. So that every time you do it…

1:05:38.1 SC: And the…

1:05:38.2 AS: You change it a bit. This has been a notorious problem in things like eyewitness testimony.

1:05:42.9 SC: Yeah.

1:05:43.9 AS: That people’s memories become progressively less reliable, but often they develop the conviction that their memory is becoming more reliable, when, in fact, the opposite is going on. But I think there’s a lot of overlap between these ideas: perception, imagination, memory, dreaming even. All these categories that might seem to be separate leverage, utilize, and refine a highly overlapping set of underlying mechanisms. So, there’s one idea that I really love in this area; it’s been around for a while, but it was very beautifully articulated recently by Erik Hoel, which is this idea of dreams as refining the generative models in the brain.

1:06:30.9 AS: So if you can imagine… Walking around during your everyday life, you’re perceiving lots of things. Your brain is trying to fit all this sensory data that’s coming in, but as with any statistical model, you can over-fit. If you try to fit too many data points, you won’t be able to generalize very well to new things. This is just very basic stuff in statistics, right? That…

1:07:00.4 SC: Yeah.

1:07:01.0 AS: If you just fit all the data points, then you have a new situation, and you find out you’ve not captured the invariances that really matter. And so, you want to guard against over-fitting. And so, one idea that Erik talks about is that dreaming is a way of the brain pushing back against this daily over-fitting during perception. It’s freewheeling its generative model, clearing out the unnecessary connections, getting back down to the basics, so that you can see better the next day.
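
[The overfitting point in standard statistical form, as a toy (not Hoel’s actual model): a maximally flexible fit chases the noise in the training data and tends to generalize worse than a lower-capacity fit that captures the underlying regularity.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy daytime "observations"

overfit = np.polyfit(x, y, 9)   # degree 9: interpolates every noisy point
sensible = np.polyfit(x, y, 3)  # degree 3: captures the underlying trend

x_new = np.linspace(0.05, 0.95, 100)   # "the next day": inputs not seen in training
true = np.sin(2 * np.pi * x_new)
for deg, coeffs in [(9, overfit), (3, sensible)]:
    err = np.mean((np.polyval(coeffs, x_new) - true) ** 2)
    print(f"degree {deg}: mean squared generalization error {err:.3f}")
# The degree-9 fit typically shows the larger error away from the training points.
```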

1:07:35.1 SC: Okay.

1:07:35.7 AS: It’s still an idea. There’s very little evidence for it, but to me, it’s a lovely way of thinking about what dreams are. They’re not just replays of what happened in the day, but it’s also not true that they’re fundamentally meaningless. They may play an interesting semi-computational role in tuning our perceptual systems.

1:07:56.3 SC: Well, there’s at least a very cheap and obvious connection between this discussion and the emergence discussion, based simply on the fact that coarse-graining is really, really important, right? Data compressibility is really, really important. And I think that, from a physicist’s point of view… Normally I like to play the role of the physicist adding insight here, but I think that physicists are caught a little bit in this dream of being Laplace’s demon, right? Like, if we had perfect information, what would we be able to predict about the future, et cetera? Whereas almost all of our experience and understanding of reality comes on the basis of very, very tiny amounts of data compared to the whole thing that is out there. And in both this idea that the brain is an inference engine, with predictive processing and so forth, and the idea of emergence and higher-level descriptions, we’re thinking of, or discovering, ways to say sensible, useful things about the world by saying a very, very tiny fraction of everything there is to be said.
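
[A throwaway illustration of that point: a coarse-grained summary can keep a vanishing fraction of the numbers while still tracking the structure that matters. Invented for illustration; this is not the dynamical-independence measure from the Seth–Barnett paper.]

```python
import numpy as np

rng = np.random.default_rng(1)
slow = np.repeat(rng.normal(size=100), 100)    # slow macro-scale signal, 10,000 samples
micro = slow + rng.normal(0, 1.0, slow.size)   # plus fast micro-scale noise

macro = micro.reshape(100, 100).mean(axis=1)   # coarse-grain: 10,000 numbers -> 100
r = np.corrcoef(macro, slow[::100])[0, 1]
print(f"100x fewer numbers, macro still tracks the slow signal: r = {r:.2f}")
```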

1:09:00.6 AS: Yeah. I think that’s right. I think there is a connection there, quite how much use you can make of that connection is something I don’t have a good intuition about. But, certainly, this idea of coarse-graining runs both through predictive processing, where, indeed, you extract relatively abstract, high-level models of the causes of sensory data, which allow you to generalize… And the general case of weak emergence that we were talking about… That’s true. What we do with it, I don’t know.

1:09:30.2 SC: What we do with it. Yeah. Well, there’s… We need to leave open problems for the listeners to solve. This is part of our job here on the podcast, but… Okay. Winding up, I do want to just let you… I’m not even sure if I have a specific question here, but there’s another really big idea that you emphasize in the new book that you have out, and elsewhere, that is very relevant to consciousness, which is the role of the body, as well as the brain, in this whole thing. And it’s something I’ve alluded to on other episodes of the podcast, but I’d like to hear your take on it. The idea is, again, if physicists sometimes fall into this dream of being Laplace’s demon, then other people who are more computer-y in orientation fall into the idea of the brain as an information-processing machine. And it could be on a computer, on a hard drive, just as well as it could be in the human brain. But there is something, de facto, about the fact that our brains are embedded in bodies, and we keep getting this input, both internally and externally, that really plays a role in what we call consciousness. Yeah?

1:10:35.2 AS: Yes. The question is, what role, and how fundamental is it? And there are a lot of things to talk about here, and possibly the most important one, well, certainly the one that comes up very frequently, is this idea of substrate independence. So, there’s a very common assumption, or position, in thinking about consciousness, that it doesn’t matter that the brain happens to be made out of neurons that happen to be made out of carbon-based stuff, and so on. That if you wired a computer up in the right way, programmed it in the right way, it would be conscious, too. This argument, I find myself just very agnostic about. I just don’t think there are good knock-down reasons to believe either that consciousness is substrate independent or that it isn’t. And if you take one position, and say that consciousness is a thing that only particular kinds of substrates, physical systems, can have, things made out of neurons, let’s say, or carbon, then, of course, you’ve got to give an explanation of why that is. And I don’t have an explanation, or a good explanation, of why that must be. There are some intuitions why I think it’s not a silly idea, but there’s certainly no knock-down argument. But the same applies the other way around, too.

1:12:07.4 SC: Yeah.

1:12:07.8 AS: If it is substrate independent, then I want to know what’s a good positive reason for believing that, because not everything is substrate independent. The usual example is, if I simulate a weather system on a computer, it’s a simulation. It doesn’t get wet and windy inside the computer. Rain is not substrate independent. And so, what’s consciousness like? Is it more like something like playing Go? Which is substrate independent. I can get a computer to do that, to actually play Go, as we’ve seen recently with DeepMind. Or, is it something more like the weather? Which is not.

1:12:45.3 AS: All conscious systems we know of so far are housed in brains made of neurons that are embedded within bodies that are embedded in environments, and so on. So it’s a good default starting point to at least wonder whether consciousness is something that requires a biological system, or, to put it more weakly, whether in order to understand consciousness we have to understand its substrate a bit more deeply. And I think this is useful, because doing so pushes back against another unfortunate tendency of taking the computer metaphor a bit too far, which is this sharp distinction between hardware and software, that the software is the mind and the hardware is the brain.

1:13:38.0 AS: There’s no such sharp distinction in real biological systems. Yes, there are activity patterns, and yes, the neurons are wired up in particular ways, but there are chemicals washing about; every time neurons fire, the structure changes a bit as well. And then how far down do you go? Even single neurons have very, very complicated activity patterns relating their inputs to their outputs. So there’s no clean separation of hardware from software, or wetware from mindware. And if there’s no clean separation, then at what point do you even make the claim that something is substrate independent? Where does the substrate start? So that’s one reason I feel uneasy with this idea of consciousness being substrate independent. And that brings us to the question: what else does thinking about the biological instantiation of consciousness bring to the table? And I actually think it brings an awful lot. We talked about this idea of the brain being a prediction machine, inferring the causes of sensory signals. We tend to think of brains as in the business of perceiving the outside world and acting on the outside world. And the body, at most, is maybe something that enables this, and takes the brain from meeting to meeting, but is otherwise unimportant; it just has to be kept going. But bodies are fundamental…

1:15:08.0 AS: The purpose of having a brain in the first place is to keep the body alive. That’s the fundamental evolutionary duty of a brain: to keep the body alive. And the brain is in the business of sensing and perceiving its internal state as well. And from the perspective of the brain, the internal state of the body is also inaccessible and remote, and has to be inferred. It gets sensory data about, let’s say, the heart rate and blood pressure levels and all this stuff, but these are still just electrical signals; it has to make inferences about the state of the body. But the inferences in this case are much more geared towards controlling the system, rather than figuring out what things are or where they are. My brain doesn’t care where in the body my liver is, but it does care that it is doing the job that it should do. So when I perceive the internal state of my body, I don’t perceive my internal organs as having shapes or colours or locations, but I certainly do perceive how well my body is doing at staying alive, whether I’m hungry or thirsty or in pain or suffering or…

1:16:18.0 AS: The character of the perceptual experience is really determined by the role the predictions are playing, but they’re still predictions. So I think we can understand a great deal about the nature and content of our conscious experiences of the self, these emotions and moods and the simple experience of just being a living organism, that I think grounds all of our experiences. Everything… The larger claim would be that everything that we experience, even our experiences of the outside world, is ultimately grounded in the predictive mechanisms that evolved and developed and operate from moment to moment in service of regulating our bodily physiology. That’s a very deep connection between consciousness and life, which is not the same as saying that you have to be alive to be conscious, or that everything that is alive is conscious, but it’s saying that that’s the way to understand how our conscious experiences are formed and shaped.

1:17:17.1 SC: So since we’re past the hour mark on the podcast, we can be a little bit more speculative and not as beholden to rigor as we were in the beginning parts, so let me just… You said, I think, two things that I want to sign on to at different levels. The part about how, in fact, our consciousness is enormously influenced by the fact that we live in a body, and the body lives in a world, and we’re getting inputs from inside and outside, I’m 100% on board with. And I think that… In fact, I once proposed this as a solution to the Fermi paradox, “Why aren’t there any aliens?” The idea would be that if you get sufficiently technologically advanced, everyone uploads their brains into the computer, and then, when they are removed from the demands of living in an environment, of eating and sleeping and all of those things, we decide that there’s just no point in doing anything anymore, and we don’t ever leave the planet. It becomes sort of a meaningless nirvana, and we don’t explore the galaxy. But this begs the question of whether or not uploading is a thing that could happen, and you’ve raised this other issue of substrate independence, which I’m a little less on board with; there are people…

1:18:39.7 SC: We had Nick Bostrom on the podcast who thinks we could be in a simulation. Maybe rain is substrate independent, if you simulate rain accurately enough, it’s just as good. David Chalmers of all people, [chuckle] makes the argument that things that happen in a simulation are just as real as things that happened in the real world. So do you see a distinction between those two parts of the argument, or do they sort of group together in your mind?

1:19:06.2 AS: I’m a bit suspicious of the simulation argument of Nick Bostrom. So for me, the logic runs a bit the other way around: the possibility of us living within a simulation requires substrate independence to be true. And that’s one of the assumptions that, in Nick’s presentations of the argument, he does sort of skate over a bit and say, “Well, this is a relatively common assumption,” and it’s fine.

1:19:34.1 SC: Clearly. [chuckle]

1:19:35.6 AS: And we just have to worry about the other things, about the likelihood of civilisation getting to the stage where we have all these descendants who are, for some reason, interested in building ancestor simulations, and so on. But before even getting there, I just don’t think it’s a safe assumption that substrate independence is true. If it were to be true, then indeed it might be harder to really know whether we are in some sort of base reality or in some simulation. But I think, in terms of assigning prior credences to these sorts of things… Actually, this could be construed as a good argument for why consciousness must be substrate dependent. Because if consciousness is substrate independent, then maybe the simulation argument holds up, and we’re living in a simulation, and I don’t want to reach that conclusion; therefore consciousness must be substrate dependent. Not a very good argument, but I’ll put it out there. Maybe some people will like it.

1:20:37.0 SC: That’s okay. [chuckle] Write it up. Yeah, well, you know, I always encourage the listeners to be good Bayesians one way or the other, and I think you’ve lived up to that goal, just bringing up our prior credences right there. So, setting a good example for everyone out there thinking about emergence and consciousness: Anil Seth, thanks so much for being on the Mindscape podcast.

1:21:02.1 AS: Thank you, Sean. It was a real pleasure and a privilege. Thank you.

[music][/accordion-item][/accordion]

23 thoughts on “168 | Anil Seth on Emergence, Information, and Consciousness”

  1. Just last week I was thinking that Anil Seth would make a great Mindscape guest! Very interesting discussion, thank you.

  2. Loved this one! Anil was excellent at explaining his thoughts in a clear manner. Really easy to follow.

  3. Thanks for this discussion about the nature of consciousness. I have never understood why there is such resistance to the idea that there is more to consciousness than the physical elements of the universe. I understand that the scientific method centers around the testing of hypotheses generated by conscious processes, but those very processes may not be subject to that method. It may be similar to trying to lift yourself with your own bootstraps.
    Those who anticipate the emergence of consciousness from advances in computer technology may be in for a very long wait. There are truths that cannot be proved, but are self-evident to our consciousness. Roger Penrose has said he doesn’t know what consciousness is, but he knows that it is not computable.


  5. בניה קורן

    As to the question at 35:30 – my understanding of the connection between emergence and perception is that in order to be useful in practice, a manifest image of the world must have a number of characteristics, among them:
    A. Sense data should be reasonably interpretable in terms of this perspective.
    B. The perspective must be “emergent” from the true natural law, in the sense of being practically self-contained enough to allow some predictability and reasoning without knowledge of the underlying laws.

    Conversely, for the senses to be useful, they must give information that is interpretable in the context of a well-behaved perspective that is, among other things, emergent.

    I’d like to hear if anyone has thoughts on that answer 🙂

  6. Seth’s work is very interesting.
    I would say that it’s possible to argue that consciousness is indeed made of the same components as the rest of the physical universe… but that begs the question of what exactly the physical universe is.

  7. Understanding that the focus here was emergence more than consciousness, it’s still striking that the fundamental characteristics of conscious experience – its qualitativeness and subjectivity – were never broached as the explanatory target. As Philip Goff and others have rightly pointed out, the difficulty about consciousness is that it’s qualitative (something it’s like) and subjective (private). Unlike the systems that host them, experiences aren’t observables, and science doesn’t find qualities out there in the world as it does measurable physical phenomena.

    Both Sean and Anil are (to their credit) realists about experience, and as physicalists they understandably don’t want to add any spooky stuff (“a fundamental essence”) to a materialist ontology when explaining consciousness. Hence their commitment to weak emergence to explain it: “…a macroscopic property for which there is an explanation in terms of microscopic components.” But of course the difficulty is that the qualitativeness and subjectivity of experience don’t show up as observable, higher level, macroscopic properties amenable to reduction or other weak emergentist story in the way higher level chemical and biological properties often get reduced or emerge.

    Anil’s recent work on emergence targets the macroscopic behavior of some systems as perhaps understandable as a function of information flow, making them dynamically independent from their micro-constituents. But phenomenal consciousness is not an observable behavioral phenomenon, even though it’s strongly correlated with certain sorts of behavior-controlling functions, e.g., those instantiating a Bayesian self-in-the world predictive model. So again, emergence in this sense doesn’t seem to come to explanatory grips with the essential characteristics of consciousness, although as Anil rightly says it may well help with its structural aspects (“weakly emergent variables”), e.g., types of conscious content and its unity, level, and complexity.

    As to what might come to grips with consciousness, clues might be in some of the informational (representational) content of the predictive world model, since content is real for the system but not an observable. And some content might end up as qualitative as an entailment of blocking what would otherwise be an epistemic regress for the system. But I’ll stop with these oracular suggestions. Thanks Sean as always for the great content in Mindscape.

  8. The entire notion of substrate independence of consciousness rests on a poor origins argument. In this case it is the origin of computers themselves. Many make the poor assumption that computers first evolved as the Turing or Von Neumann architecture, which is incorrect.

  9. Strong Emergence and Consciousness

    Consciousness, even at its lowest levels (say, that possessed by an iguana, for instance), is in service not only of a body but also of replication in some manner. And indeed, the entire enterprise of replication and exploring fitness landscapes, so the pure physicalist should think, is a realization of some form of a fourth law of thermodynamics, i.e., generation of entropy along the path of steepest ascent.

    If consciousness is to possess the possibility of, in some sense, supervening over the apparent imperatives of physical laws governing its substrate (i.e., possess downward control of causal events to some extent), then it would seem that strong emergence is at the very least an equally pressing phenomenal issue, however mysterious.

    But then again, maybe it isn’t!

    Thanks much for the unique and stimulating podcast!

  10. Sean and Anil, thank you for this discussion. Clear, articulate thoughts on a cutting edge topic. Having listened to every Mindscape episode, this is in my top 3.

  11. The hypothetical question came up whether a computer could ever achieve consciousness. Consciousness is a rather vague concept, so the question might be stated: “Could a computer ever become aware of its surroundings, and its own existence?” For most brain specialists (who are naturally biased), the consensus seems to be that a computer composed of inorganic material (e.g. silicon chips, etc.) could never achieve that goal. But what about a computer made up of organic material? Figuratively speaking, it could be said that brains are computers attached to bodies. But what I had in mind was a self-contained computer made up of organic material. Recently researchers have made a biological transistor from DNA. Jerome Bonnet, a bioengineer at Stanford University, said that on their own these devices do not represent a computer, but they allow for logical operations, such as “if this, then that” commands, one of three basic functions (the other two being storing and transmitting information). Of course that is a long way from creating a computer from organic material, much less one that’s aware of its surroundings and its own existence. But since it has been demonstrated that one of the building blocks of a computer can be built up from organic, not inorganic, material, the belief that someday organic computers could actually achieve consciousness doesn’t seem to be completely beyond the realm of possibility.

  12. Great session. This article by Prof Stephen M Fleming provides a very convincing description of one level in the emergent hierarchy of consciousness – basically that consciousness is “theory of mind” (our model of other people’s minds) applied to ourselves, and that it develops in the same way. Theory of mind is yet another dynamic model constructed by the brain. The article provides lots of ideas on how you might measure these concepts and design systems to replicate them.
    https://aeon.co/essays/is-there-a-symmetry-between-metacognition-and-mindreading

  13. There is a difference, as you are pointing out, between being phenomenally conscious as we are, versus conscious behavior, which a machine can exhibit. Tracking neurons in a brain and simulating them in a silicon program can certainly trick us into thinking the robot, or the self-driving car behind us in the distance, has a sentient being as the driver.
    Anil Seth’s analogy, that a rainstorm simulation in a computer does not make the silicon chips wet, holds some water. However, a natural storm is not an intelligent being, though we can simulate the laws of atmospheric physics in silicon.


  15. No psychology whatsoever. Seth apparently was president of the psychology section of the British Science Association in 2017. Nothing wrong with the topic, the search for consciousness, emergent ideas of consciousness. Does Seth have: attachment issues? alcoholism? divorce? good marriage, good relations with children/extended family? Has he ever tried to be a therapist? Helped someone with mental health issues?
    Again, nothing wrong with the topic. But to completely ignore psychology, inter-relations in human behavior/experience, substantive change worked through in a therapy session with real results?
    It’s kind of like describing a human by the quantities of the basic elements it is composed of, ignoring completely the human in full.
    Scientists routinely work really hard to ignore, elide, and not acknowledge things like friendship, love, existence as a substantive feeling. I have the same beef with scientists who view “life” by looking to Darwin; even as they do this, they can’t wait to have dinner with their peers, a bequeathment to their children/society.
    Consciousness, in this forum, is alien and abstract to all things human. Interesting take on emergence.

  16. They seem to be stuck on emergence, so let’s try two coinciding concepts: reality and environment. Basic particles follow laws and interact with other fields and particles, or environment. Taking the bird’s-eye view of city streets and highways yields the same result: biological beings interacting with other beings, following paths and being subject to the field of weather, etc. Hence we know from centuries of science that we are made from those physical particles, but somehow we are scaled to this object reality. The problem presents itself as one of scaling, or: what process do neurons engage in to do this scaling? Hence a problem of discovering the inner workings of neurons. Like Leibniz’s Mill, which was made from faulty gears and could not do the job. Even a good Mill does not do the job by machinations and computations; rather, the Mill’s gears work by synchronized transmission of the forces of nature.

  17. A previous contributor to this conversation referred to “spooky stuff” in reference to any efforts to bring into a discussion the possibility that non-material effects may be in play. The reason that Einstein worked so diligently to prove that what he called “spooky action at a distance” was in error stemmed from his concern that it called into question the well-established scientific tenet of locality. He was not successful. To deny that there is a possibility that consciousness may also represent some kind of spooky action may be premature.

  18. Great guest, with some absolutely beautiful and simplifying characterisations of problems that moved some of my thinking.

    Especially after the emergence stuff, which I find a bit ho-hum. I’m already a physicalist, and that more or less rules out type-two emergence. Mental models simply reduce computational effort. We are too dumb to track ten-to-the-whatever water molecules, so we see a wave, and that works pretty well, at an infinitesimal comparative cost. That’s why we think like that, and it’s also common to biological information processing in general.

  19. Great discussion. Anil Seth is thoughtful and balanced in his arguments.
    I conceptualize consciousness as the place where animal self-interest meets reality. There is no mystery about the purpose of consciousness. Consciousness evolved because it is necessary for the conscious being to survive and navigate the world. It allows the being to make decisions that help it survive and thrive. Consciousness provides only a limited subjective and self-interested perspective on the world. It is aware of what on average it needs to be aware of. So it is tied to biological life and is embodied in a being that needs it to survive. It is therefore substrate dependent. It is biological by nature. The idea of a conscious robot is almost nonsensical. What would it be conscious of? What would its interests be? Animal consciousness is motivated and embodied. We focus on what interests us. Machines have no interests and don’t care about what they do. Supercomputers that play championship-level chess don’t even care whether they win or lose. So what would machine consciousness involve or do?

  20. I agree completely with Mr. Farris’ argument that consciousness was necessary for life to survive. However, the existential question is why was it necessary for life to survive? Ultimately, the mystery of consciousness does not depend on simple survival, but in the complex consciousness of our species that allows us to grow in our ability to understand the workings of the universe and glory in its majesty. It is as though the universe and consciousness are in a reciprocal relationship. Neither could exist without the other.

  21. What is consciousness, and who has it? Is it something that can be explained by physical processes taking place in the brain, or will some aspects of consciousness, such as self-awareness, abstract thinking, and emotions, forever remain unexplainable in terms of these physical processes? The video ‘What is consciousness? | The Economist’ takes a look at some of these intriguing questions.

    https://www.youtube.com/watch?v=ir8XITVmeY4

  22. I had two thoughts. The first is somewhat related to the Chalmers podcast.

    My first is that it seems to me that there is an evolutionary justification for consciousness, and as such it isn’t surprising that an emergent property that we describe as consciousness would arise via evolutionary pressures. My reasoning is that without consciousness, a multi-celled organism is little different than a plant. It may react to stimuli as a plant does to light, and have natural impulses/instincts to do certain things, like eat and reproduce. But I would argue that consciousness confers more fitness, in that once a being is conscious, it actually has desires that rise above pure instincts to survive. In a sense, consciousness seems like a natural evolutionary trait that multi-celled organisms would evolve towards.

    My second thought was related to weak emergence. Sean, I was wondering if Anil has tried to optimize a system such that it scores high on the metrics that he uses to measure emergent properties. It would be interesting to try to optimize that in computer realities and see if they correspond to our notions of emergent properties and behaviors.

