The "Easy Problems" of consciousness have to do with how the brain takes in information, thinks about it, and turns it into action. The "Hard Problem," on the other hand, is the task of explaining our individual, subjective, first-person experiences of the world. What is it like to be me, rather than someone else? Everyone agrees that the Easy Problems are hard; some people think the Hard Problem is almost impossible, while others think it's pretty easy. Today's guest, David Chalmers, is arguably the leading philosopher of consciousness working today, and the one who coined the phrase "the Hard Problem," as well as proposing the philosophical zombie thought experiment. Recently he has been taking seriously the notion of panpsychism. We talk about these knotty issues (about which we deeply disagree), but also spend some time on the possibility that we live in a computer simulation. Would simulated lives be "real"? (There we agree -- yes they would.)
David Chalmers got his Ph.D. from Indiana University working under Douglas Hofstadter. He is currently University Professor of Philosophy and Neural Science at New York University and co-director of the Center for Mind, Brain, and Consciousness. He is a fellow of the Australian Academy of the Humanities, the Academy of the Social Sciences in Australia, and the American Academy of Arts and Sciences. Among his books are The Conscious Mind: In Search of a Fundamental Theory, The Character of Consciousness, and Constructing the World. He and David Bourget founded the PhilPapers project.
0:00:00 Sean Carroll: Hello, everyone, and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. If any of you have read The Big Picture, my most recent book, you know that one of the things we have to think about if we're naturalists trying to come to terms with the world of our experience is the phenomenon of consciousness. Actually, most of you probably know that even if you haven't read that book, it's a pretty well-known fact. The question, of course, is, "What is demanded of us by the fact of consciousness?" Can we simply hope to explain consciousness using the same tools we explain other things with, atoms and particles moving according to the laws of physics, according to the standard model and the core theory, or do we need something else somehow that helps us explain what consciousness is and how it came about? So I'm someone who thinks we don't need anything else, I think it's just understanding the motion and interactions of physical stuff from which consciousness emerges as a higher-level phenomenon.
0:00:57 SC: Our guest today is David Chalmers, who's probably the most well-known and respectable representative of the other side, of the people who think that you need something beyond just the laws of physics as we currently know them to account for consciousness. David is the philosopher who coined the term "the hard problem of consciousness," the idea being that the easy problems are things like how you look at things, why you react in certain ways, how you do math problems in your head. The hard problem being our personal experience, what it is like to be you or me rather than somebody else, the first person subjective experience. That's the hard problem, and someone like me thinks, "Oh, yeah. We'll get there. It's just a matter of words and understanding and philosophy." Someone like David thinks we need a real change in our underlying way of looking at the world. So he describes himself as a naturalist, someone who believes in just the natural world, no supernatural world, not a dualist who thinks we have a disembodied mind or anything like that. But he's not a physicalist; he thinks that the natural world has not only physical properties, but mental properties as well.
0:02:01 SC: So I would characterize him as convinced of the problem, but he's not wedded to any particular answer. David Chalmers is a philosopher who everyone respects even if they don't agree with him. He's a delight to talk to because he is very open-minded about considering different things. Like I said, he's convinced of this problem, but when it comes to solving the problem, he will propose solutions, but he won't take them too dogmatically, he will change his mind when good arguments come along. So he's a great person to talk to about this very, very important problem for naturalists when they try to confront how to understand what it means to be a human being and where consciousness comes from. Also, David has developed a recent interest in the simulation hypothesis, the idea that maybe we could all be living in a simulation running on a computer owned by a very, very advanced civilization in a completely different reality.
0:02:51 SC: So we'll talk about the hard problem of consciousness, we'll talk about various philosophical issues, and I won't pin him down on anything. I'm not trying to argue with him, my point here is not to convince David Chalmers in real time that he's wrong, but rather to let you, the listeners, hear what his perspective is on these issues, and then hear what my perspective is on these issues, and decide for yourself. Maybe you will change your mind either right now or sometime down the road. So, this is a fun conversation, I'm sure you'll like it, and let's go.
[music]
0:03:35 SC: David Chalmers, welcome to The Mindscape Podcast.
0:03:38 David Chalmers: Thanks. It's great to be here.
0:03:39 SC: So I've discovered in my brief history of having philosophers on the podcast that there's a lot to say, that we have a lot of ground to cover, I know that you especially have all sorts of interests. Let's just jump right in to the crowd-pleasing things that we can talk about. You're one of the world's experts on the philosophy of consciousness, you... I believe you're the one who coined the phrase "the hard problem of consciousness," so how would you define what the hard problem is?
0:04:05 DC: The hard problem of consciousness is the problem of explaining how physical processes in the brain somehow give rise to subjective experience. So when you think about the mind, there's a whole lot of things that need to be explained, some of them involve our sophisticated behavior, all the things we can do, we can get around, we can walk, we can talk, we can communicate with each other, we can solve scientific problems, but a lot of that is at the level of sophisticated behavioral capacities, things we can do. When it comes to explaining behavior, we've got a pretty good bead on how to explain it. In principle at least, if you find the circuit in the brain, the complex neural system which maybe performs some computations, produces some outputs, generates the behavior, then you've got an explanation. It may take a century or two to work out the details, but that's roughly the standard model in cognitive science.
0:05:08 SC: And you've wrapped this together as the easy problem.
0:05:11 DC: Yeah. So this is what 20-odd years ago, I called the easy problems...
0:05:15 SC: Slightly tongue in cheek.
0:05:16 DC: Of the mind and of consciousness in particular, roughly referring to these behavioral problems. Nobody thinks they're easy in the ordinary sense. The sense in which they're easy is that we know the... We've got a paradigm for explaining them. Find the neural mechanism or a computational mechanism, that's the kind of thing that could produce that behavior. In principle, find the right one, tell the right story, you'll have an explanation. But when it comes to consciousness, to subjective experience, it looks as if that method doesn't so obviously apply. There are some aspects of consciousness which are, roughly speaking, behavioral or functional, and you could use the word consciousness for the difference between being awake and responsive, for example, versus being asleep or maybe just for the ability to talk about certain things.
0:06:11 DC: I can talk about the fact that, "Hey, there's Sean Carroll, there are some books over there and I'm hearing my voice," and those are some reports. Explaining those reports might also be an easy problem. But the really distinctive problem of consciousness is posed not by the behavioral parts, but by the subjective experience, by how it feels from the inside to be a conscious being. I'm seeing you right now. I have a visual image of colors and shapes that are sort of present to me as an element in the inner movie of the mind. I'm hearing my voice, I'm feeling my body, I've got a stream of thoughts running through my head. And this is what philosophers call consciousness or subjective experience, and I take it to be one of the fundamental facts about ourselves that we have this kind of subjective experience. But then the question is, how do you explain it? And the reason why we call it the hard problem is it looks like the standard method of just explaining behaviors and explaining the things we do doesn't quite come to grips with the question of why there is subjective experience. It seems you could explain all of these things we do. The walking, the talking, the reports, the reasoning. And so why doesn't all that go on in the dark? Why do we need subjective experience? That's the hard problem.
0:07:41 SC: So sometimes I hear it glossed as the question of what it is like to be a subjective agent, to be a person.
0:07:47 DC: That's a good definition of consciousness. Actually, first put forward or at least made famous by my colleague Tom Nagel here at NYU in an article back in 1974 called What Is It Like To Be A Bat? And his thought was, "Well, we don't know what it's like to be a bat, we don't know what a bat's subjective experience is like. It's got this weird sonar perceptual capacity which doesn't really correspond directly to anything that humans have." But presumably, there is something it's like to be a bat. A bat is conscious so... Most people would say, on the other hand, "There's nothing it's like to be a glass of water." So if that's right, then a glass of water is not conscious. So this "what it's like" way of speaking is a good way at least of serving as an initial intuition pump for what is the basic difference we're getting at between systems which are conscious, and systems which are not.
0:08:43 SC: And the other word that is sometimes invoked in this context is qualia, the experiences that we have. Like, there's one thing that it is to see the color red, and a separate thing, if I get it right, to have the experience of the redness of red.
0:08:57 DC: Yeah, this word qualia... My sense is it has gone a little bit out of favor over the last, say, 20-odd years. But maybe 20 years ago, you had a lot of people speaking of qualia as a word for the sensory qualities that you come across in experience, and the paradigmatic ones would be the experience of red versus the experience of green. You can raise all these familiar questions about this. "How do I know that my experience of the things we call red is the same... Maybe it's the same as the experience you have when you are confronted with the things we call green, maybe your internal experiences are swapped with respect to mine." And people call that inverted qualia. That would be "your red is my green." Or pain would be another example of a... The singular in Latin is quale.
0:09:48 DC: So the feeling of pain would be a quale. I'm not sure that these qualities are all there is, though, to consciousness. And maybe that's one reason why it's gone out of favor. There's also maybe an experience to thinking, to reasoning, and to feeling. That's much harder to pin down in terms of sensory qualities, but still... You might think there's something it's like to think and to reason, even though it's not the same as what it's like to sense.
0:10:11 SC: I wanna just for a little bit talk about this question of whether or not you and I have the same experience when we see the color red. I'm not sure I know what that could possibly mean for it to be either the same experience or a different experience. I mean, one is going on in my head, one is going on in your head. In what sense could they be the same? But maybe when I say that, it's just a reflection of the fact that there's a hard problem.
0:10:34 DC: Well, we know that some people, for example... To pick a much easier case, some people are color-blind. They don't even make a distinction between red and green. They're red-green... You know, most people have a red-green axis for color vision and a blue-yellow axis, and something like a brightness axis. But some people, due to things going wrong in their retinal mechanisms, don't even make the distinction between red and green. So I can... I've got friends who are red-green color-blind. I'm often asking them, "What is it like? To be you?"
0:11:04 DC: "Is it, like, you just see everything in shades of blue and yellow and you don't get the reds and greens? Or, is it something different entirely?" But we know what it's like to be them can't be the same as what it's like to be us, 'cause for example, reds and greens, which are different for us, are the same for them, so there's got to be some difference between us, as a matter of logic. My red can't be the same as their red and my green can't be the same as... If my red was the same as their red and my green was the same as their green, and their red was the same... We know their red is the same as their green... Then my red couldn't be different from my green, but it is.
0:11:36 SC: But it is.
0:11:36 DC: As a matter of logic, there has to be some difference there.
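[Editor's note: a minimal sketch, in Python, of the "color axes" Chalmers mentions above. The channel weights and function names (opponent_channels, drop_red_green) are simplified placeholders of my own, not a physiological model; the point is only that red and green, which differ along one opponent axis, become indistinguishable once that axis carries no information.]

```python
def opponent_channels(r, g, b):
    """Map an RGB color (components in 0..1) to simplified opponent channels:
    (red-green, blue-yellow, brightness)."""
    red_green = r - g
    blue_yellow = b - (r + g) / 2
    brightness = (r + g + b) / 3
    return red_green, blue_yellow, brightness

def drop_red_green(channels):
    """Toy 'red-green colorblind' view: the red-green axis carries no signal."""
    _, blue_yellow, brightness = channels
    return 0.0, blue_yellow, brightness

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
print(opponent_channels(*red), opponent_channels(*green))    # differ on the first axis
print(drop_red_green(opponent_channels(*red)),
      drop_red_green(opponent_channels(*green)))             # identical once that axis is gone
```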
0:11:39 SC: But isn't... So one way out is just to say, well, everybody is different from everybody else, so their experiences are different. I guess the question then is, in what sense could they ever be the same? What is the meaningfulness? And I can imagine some kind of operational sameness, right? Like you say the word red when you see the color red, in that behavioral sense, but that's exactly what you don't wanna count.
0:12:01 DC: Yeah, so I guess intuitively, most people think that we can at least grasp the idea that my red is the same as your red. And then it's an empirical open question whether they are, in fact, exactly the same. I mean, you're asking, "Well, how could you operationalize that?" Now, you might say, "I'm a scientist, I want an operational test." On the other hand, I'm a philosopher and I'm very skeptical of the idea that you can operationalize everything, that a hypothesis has got to be operationalizable to be meaningful. I mean, there was a movement in philosophy in the first part of the 20th century, logical positivism or logical empiricism, where they said, "The only meaningful hypotheses are the ones we can test." And for various reasons, that turned out to have a lot of problems. Not least because this very philosophical hypothesis of verificationism turned out not to be one that you could test.
0:12:55 SC: There's a renaissance of logical positivism on Philosophy Twitter these days.
0:13:00 DC: Oh, is that right?
0:13:01 SC: Yeah.
0:13:01 DC: Rudolf Carnap, who was one of the great logical positivists, is one of my heroes. I've written a whole book called Constructing the World that was partly based around some of his ideas. Nonetheless, verificationism is not one of the ideas I take on board. And I think when it comes to consciousness, in particular, we're dealing with something essentially subjective. I know I'm conscious, not because I measured my behavior or anybody else's behavior, it's because it's something I experience directly from the first person point of view. I think you're probably conscious, but it's not as if I can give a straight out operational definition of it. If you say you're conscious, then you're conscious? Who's to say? Most people think that doesn't absolutely settle the question. Maybe we'd come up with an AI that says it's conscious. And okay, well, that would be very interesting, but would it settle the question of whether it's having a subjective experience? Probably not.
0:13:53 SC: Well, so Alan Turing tried, right? The Turing test was supposed to be a way to judge what's conscious from what's not. What are your feelings about the success of that program?
0:14:01 DC: I think it's not a bad... I mean, of course, no machine right now is remotely close to passing the Turing test...
0:14:07 SC: You might as well say what the Turing test is.
0:14:10 DC: Yeah. So, the Turing test is the idea that... It's basically a test to see whether a machine can behave in a manner indistinguishable from a normal human being, at least in a verbal conversation over, say, text messaging and the like. And Turing thought that eventually, we'll have machines that pass this test, that is, they're indistinguishable from, say, another human interlocutor over hours of conversational testing. He didn't say that at that point machines can think. What he said was, at that point, the question of whether machines can think becomes basically meaningless, and I've provided an operational definition to substitute for it. So, once they pass this test, he says, "That's good enough for me."
0:14:56 SC: Yeah. He talked in the paper about the consciousness objection. You might say that it's just mimicking consciousness, but not really conscious. And he's... As I recall, his response is, "Well, who cares? I can't possibly test that. Therefore, it's not meaningful."
0:15:07 DC: But it turns out that consciousness is one of the things that we value. A, it's one of the central properties of our minds. And B, many of us think it's what actually gives our minds... Gives our lives meaning and value. If we weren't conscious, if we didn't have subjective experience, then we'd basically just be automata for whom nothing has any meaning or value. So I think when it comes to the question... Once we develop more and more sophisticated AIs, the question of whether they're conscious is gonna be absolutely central to how we treat them, to whether they have moral status, whether we should care whether they continue to live or die, whether they get rights. And I think many people think, if they're not having subjective experiences, then they're basically machines that we can treat the way we treat machines. But if they're having conscious experiences like ours, then it would be horrific to treat them the way we currently treat machines. So, yeah. I mean, if you just simply operationalize all those questions, then there's a danger, I think, that you lose the things that you really... The things that we really care about.
0:16:08 SC: And just so we can get our sort of background assumptions on the table here, for the most part, neither you nor I are coming from a strictly dualist perspective. We're not trying to explain consciousness in terms of a Cartesian, disembodied, immaterial mind that is a separate substance, right? I mean, we want to at least, as the first hypothesis, say that you and I are made of atoms, we're obeying the laws of physics, and that consciousness is somehow related to that, but not an entirely separate category interacting with us. Is that right? Is that fair?
0:16:40 DC: Yeah, although there's different kinds and different degrees of dualism. My background is very much in mathematics and computer science and physics, and all of my instincts, my first instincts are materialist, to try to explain everything in terms of... Ultimately, in terms of the processes of physics. I mean, explain biology in terms of chemistry, and chemistry in terms of physics. And this is a wonderful, great chain of explanation. But I do think when it comes to consciousness, this is the one place where that great chain of explanation seems to break down. Roughly because, when it comes to biology and chemistry and all these other fields, the things that need explaining are all basically these easy problems of structure and dynamics and ultimately the behaviors of these systems.
0:17:29 DC: When it comes to consciousness, we seem to have something different that needs explaining. And I think that the standard kinds of explanation, say, that you get out of physics derived sciences, physics, chemistry, biology, and neuroscience and so on, just ultimately won't add up to an explanation of subjective experience. Because it always leaves open this further question, "Why is all that sophisticated processing accompanied by consciousness, by a subjective experience?" That doesn't mean, though, we suddenly need to say it's all properties of, say, a soul or some religious thing, which has existed since the beginning of time and will go on to continue after our death. People sometimes call that substance dualism. Maybe there's a whole separate substance that's the mental substance and somehow interacts, connects up with our physical bodies and interacts with it.
0:18:21 DC: That view is much harder to connect to a scientific view of the world, but the direction I end up going is what people sometimes call property dualism, the idea that there are some extra properties of things in the universe. I mean, this is something we're used to in physics. Already, people have... Maybe around the time of Maxwell, we had physical theories that took space and time and mass as fundamental. And then, Maxwell wanted to explain electromagnetism and there was a project of trying to explain it in terms of space and time and mass. So, no. Turns out, it didn't quite work. You couldn't explain it mechanically and eventually, we ended up positing charge as a fundamental property and some new laws governing electromagnetic phenomena and that was just an extra property in our scientific picture of the world.
0:19:14 DC: So, I'm inclined to think that something, not exactly analogous to that, but at least analogous to that in some respects, is what we have to do with consciousness as well. Basically, explanations in terms of space and time and mass and charge and whatever the fundamentals are in physics these days, are not gonna add up to an explanation of consciousness. So, we need another fundamental property in there as well. And one working hypothesis is, "Let's take consciousness as an irreducible element of the world, and then see if we can come up with a scientific explanation of it."
0:19:44 SC: Good. I think this is... I mean, we should absolutely be open to that. I don't go down that road myself. I don't find it very convincing, but maybe in the next 45 minutes, you'll convince me. So I do wanna get there, but let's lay a little bit more groundwork first. So, one of the things that... A lot of the statements I'm gonna be making over the course of the chat are of the form, "I think this is right, correct me if I'm wrong." So, I think one of the things that makes the hard problem hard is just the fact that you can't even imagine looking at neurons doing something and saying, "A-ha, that explains it." Is that fair to say?
0:20:18 DC: Yeah. I would say that what we... When you appeal to neural activity in explaining phenomena, there's a paradigmatic way that works. We see how those neurons serve as a mechanism for performing some function, ultimately generating some behavior, that is the paradigmatic appeal to neurobiology in explanation. And it just looks like any explanation of that form, is not gonna add up to an explanation of consciousness. It explains the wrong thing, it will explain behavior, but those were the easy problems. Explaining consciousness was something distinct, that's the hard problem.
0:20:51 SC: So you think that even if... And we're very far away from this, but even if neuroscientists got to the point where, every time a person was doing something we would all recognize as having a conscious experience, even if it was silently experiencing the redness of red, they could point to exactly the same neural activity going on in the brain, you would say, "Yes, but this still doesn't explain my subjective experience."
0:21:15 DC: Yeah. There's in fact a very important research program going on right now in neuroscience, and people call it the program of finding neural correlates of consciousness, the NCC for short. We're trying to find the NCC, the neural system or systems that is active precisely when you're conscious and that correlates perfectly with consciousness, to which I say it's a very, very important research program, but as it stands, it's really a program for correlation, not for explanation. So we could know that, say, when a certain special kind of neuron fires in a certain pattern, that that neural pattern always goes along with consciousness. But then the next question is, why? Explain that fact. Why is it that this pattern gives you consciousness?
0:22:01 DC: And as it stands, nothing that we get out of the Neural Correlates of Consciousness program in neuroscience comes close to explaining that matter. And I think a lot of people, once they start to think about this, think, "Well, you basically need some further fundamental principle that connects the neural correlate of consciousness with consciousness itself." Giulio Tononi has developed a theory, integrated information theory, where he says consciousness goes along with a certain mathematical measure of the integration of information that he calls phi. And the more phi you have, the more consciousness you have. And phi is mathematically and physically a respectable quantity. It's very hard to measure, but in principle...
0:22:43 SC: But you can define it.
0:22:44 DC: It could be measured. There are questions about whether it's actually well-defined in terms of the details of physics and physical systems, but it's at least halfway to being something definable. But even if he's right that phi, this informational property of the brain, correlates perfectly with consciousness, there's still the question of why. Why... Prima facie it looks like you could have had a universe with all of this integration of information going on and no consciousness at all. And yet, in our universe, there's consciousness. How do we explain that fact? Well, what I regard as the scientific thing to do at this point is to say, "Okay. Well, in science, we boil everything down to fundamental principles and fundamental laws. And if we need to postulate a fundamental law to connect, say, phi with consciousness, then that's great. And then maybe that's gonna end up being the best we can do."
0:23:35 DC: Just as, say, in physics, you always end up with some fundamental laws, whether it's a principle of gravitation or a grand unified theory that unifies all these different forces, you still end up with some fundamental principles and you don't explain them further. Something has to be taken as basic. And of course, we want to minimize our fundamental principles and our fundamental properties as far as we can. But Occam's razor says, "Don't multiply entities without necessity." Every now and then, there's necessity. Maxwell had necessity. And if I'm right, there's necessity in this case, too.
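[Editor's note: a toy illustration, in Python, of what "a mathematical measure of the integration of information" can look like. This is not Tononi's phi, which involves minimizing over partitions of a system's cause-effect structure and is far more involved; the quantity below, total correlation (the sum of the parts' entropies minus the entropy of the whole), is a much cruder stand-in chosen only to make the idea concrete. All names here are my own.]

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {state: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """joint maps tuples of binary states, e.g. (0, 1, 1), to probabilities.
    Returns the sum of the marginal entropies minus the joint entropy."""
    n = len(next(iter(joint)))
    marginals = []
    for i in range(n):
        m = {}
        for state, p in joint.items():
            m[state[i]] = m.get(state[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two toy three-unit "brains" with the same parts but different statistics:
independent = {s: 1 / 8 for s in product([0, 1], repeat=3)}   # parts carry no joint information
locked = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}                     # parts are fully locked together

print(total_correlation(independent))  # 0.0 bits
print(total_correlation(locked))       # 2.0 bits: the whole says more than the parts separately
```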
0:24:12 SC: And you hinted at... You sort of alluded to an idea that is one of your most famous philosophical thought experiments just there where you say, "You can imagine a system with whatever thought you want, but we wouldn't call it conscious." So, you take this idea to the extreme and say, "There can be something that looks and acts just like a person, but doesn't have consciousness."
0:24:34 DC: So this is the philosopher's thought experiment of the zombie. The philosopher's zombie is somewhat different from the ones you find in Hollywood movies or in Haitian voodoo culture. The ones in voodoo culture, as far as I can tell, are mostly people who have been given some kind of poison and somehow seem to be lacking autonomy, volition, a certain kind of free will. The ones in Hollywood movies are beings which are a lot like us, but they're dead and reanimated.
0:25:05 SC: Yeah, and they want brains.
0:25:07 DC: The philosopher's zombie is a creature which is exactly like us, functionally and maybe physically, but isn't conscious. Now, it's very important to say nobody, certainly not me, is arguing that zombies actually exist, that, for example, some human beings around us are zombies. Actually, I did once meet a philosopher [laughter] in Dublin who was very concerned that quite a lot of philosophers actually were zombies. They weren't conscious at all. I was a little bit insulted by this. He seemed to be worried about me. He took me to lunch...
[laughter]
0:25:40 SC: That might explain a lot. Yes.
0:25:42 DC: Yeah, yeah. He took me to lunch, and he asked me a whole lot of questions about consciousness, and at the...
0:25:47 SC: Your inner experiences?
0:25:48 DC: Yeah. Yeah. And at the end, he said, "Okay. You pass. I think you're conscious."
0:25:55 SC: Okay, but a zombie could also pass, right?
0:25:56 DC: Exactly.
0:25:57 SC: So is it right to say that a zombie... Yeah, I don't think you'd finished your definition yet.
0:26:02 DC: Yeah.
0:26:02 SC: But a zombie would be behaviorally the same, but...
0:26:06 DC: Yeah, behaviorally the same, but no conscious experience. There's nothing it's like to be a zombie. Maybe a good way to work up to this is by thinking about, say, some sophisticated artificial intelligence system that produces lots of intelligent responses, maybe it talks to you, maybe an extension of Alexa or Siri who carries on a very sophisticated conversation with us. But most of us are not inclined to think that, say, Alexa and Siri, as they stand, are conscious, that they're having subjective experiences.
0:26:35 DC: Okay. Now put Alexa in a body like Sophia, the robot. There's a robot that's out there with a very sophisticated conversational system, make her smarter, and smarter. Then there's at least an open question. Is she going to be conscious? And we can make sense of the hypothesis that she's conscious. We can also make sense of the hypothesis that there's not, that she's not. The extreme case is gonna be a complete physical and functional duplicate of a human being with all the brain processing intact, all of the behavior, maybe even a complete physical duplicate of Sean Carroll. And I think I can make sense of the hypothesis when I talk to you, that there'd be such a being who's not conscious, Zombie Sean Carroll. Now, I'm very confident that you're not Zombie Sean Carroll. I think most human beings are enough like me that they're gonna be conscious, but the point is it at least seems logically possible. It seems there's no contradiction in the idea of a being physically just like you without consciousness. And that's just one way of getting at the idea that somehow, well, you do have consciousness, then something special and extra has to be going on. So I mean, you can just put the hard problem of consciousness as the problem of why aren't we zombies, what differentiates us from zombies?
0:27:50 SC: Right. And with some trepidation, let me ask the question: how does the difference between possible and conceivable come into the zombie argument?
0:28:02 DC: Yeah. Philosophers like to talk about possible worlds, what goes on in different possible worlds. And there's a possible world where Hillary Clinton won the election in 2016, and there are possible worlds where the Second World War never happened. These are all maybe not terribly distant possible worlds. They might, for example, share roughly the same laws of physics as ours, maybe small differences in the initial conditions. Some of us think we can also make sense of worlds with different laws of physics and different laws of nature. Maybe there are classical possible worlds. Maybe there are possible worlds that are two-dimensional, like Conway's Game of Life, with just bits fluttering on a surface governed by simple rules. So yeah, there are very distant possible worlds with very different laws of nature.
0:28:57 DC: The broadest class is maybe something like the logically possible worlds; it corresponds roughly to what we can conceive of, or what we can imagine. Maybe there are even worlds that we can't imagine, like worlds where two plus two is five, that's getting a bit too far even for... Things really start to go haywire around that point. But as long as we don't have contradictions, then we can at least entertain possible worlds. I'm inclined to think the zombie hypothesis is perfectly coherent and perfectly conceivable. There is a universe which is physically identical to ours, but in which nobody has subjective experience. That's an entire zombie universe, if you like. Conscious experience never flickers into existence, there's just a whole bunch of sophisticated behavior. I don't think our universe is like that, but it seems to make sense, and one way to pose the hard problem is to ask: what differentiates our world from that world?
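[Editor's note: the Game of Life Chalmers invokes above really is that simple. Here is a minimal sketch of its entire "physics" in Python; the representation (a set of live-cell coordinates) and the glider example are my own choices, not anything from the conversation. A cell is alive in the next generation if it has exactly three live neighbors, or two live neighbors and is already alive.]

```python
from itertools import product

def step(live):
    """Advance one generation. `live` is a set of (x, y) coordinates of live cells."""
    neighbor_counts = {}
    for (x, y) in live:
        for dx, dy in product([-1, 0, 1], repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbor_counts[cell] = neighbor_counts.get(cell, 0) + 1
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": after four applications of the rule, the same shape reappears,
# shifted diagonally by one cell, even though no rule mentions gliders.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    world = step(world)
print(sorted(world))
```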
0:29:47 SC: So, where I come down here is, I don't think that zombies are conceivable, and I'm very happy to be talked out of this, 'cause I think it... I talked to you a couple of years ago before I wrote The Big Picture and I was not quite as sharp in my thoughts about this. So, like you just said, we could imagine a literal physical copy of our world. So, that includes all the people in it, all the atoms that they're made of. And you do think that, as far as we know, the atoms in my body just obey the laws of physics as we know them, right? So in that world, I would be here, but without consciousness, without experience; I'd be a zombie, but I would be acting and saying exactly the same things that I'm acting and saying now. Is that right?
0:30:29 DC: Yup.
0:30:30 SC: Okay. And so, if you in that world were to ask me if I were conscious, I would say yes.
0:30:36 DC: Yeah.
0:30:37 SC: And presumably there is a sensible way in which I could say... I say yes, because I believe it to be true. Is that fair?
0:30:45 DC: Yeah, it's a complicated issue whether zombies actually believe anything...
0:30:49 SC: At the least.
0:30:50 DC: But they've got zombie analogs of beliefs at the very least.
0:30:55 SC: So, in... I mean, the most basic way to put it, then, is how can I be sure that I'm not a zombie? If all of the things that I say and do are exactly what a zombie would say and do?
0:31:04 DC: Well, I think this is a very good argument that I can't be sure that you're not a zombie, because all I have access to with respect to you is your behavior and your functioning and so on, and none of that seems to absolutely differentiate you from a zombie. I think the first person case is different, because in the first person case I'm conscious, I know that I'm conscious, I know that more directly than I know anything else. I mean, Descartes said way back in the 1640s, this is the one thing I can be certain of, I can doubt everything about the external world, I can even... I can doubt there's a table here, I can doubt there's a body. There's one thing I can't doubt, and that's that I'm thinking, or, as I think he put it even better, that I'm conscious. I think, therefore I am. So, therefore I don't... I can't doubt my own existence. So, I would take... I think it's natural to take consciousness as our primary epistemic datum. So, whatever you say about zombies and so on, I know that I'm not one of them, because I know that I'm conscious.
0:32:04 SC: But my worry about exactly that is that... So, like you said, my argument certainly would make you wonder whether I am conscious. I think it also makes me wonder whether I'm conscious, because a zombie me would behave in exactly the same way: that includes writing all the bad poetry I wrote in high school, crying at movies, at WALL-E and so forth, and petting my cats. All of these things the zombie would do in exactly the same way that I do. If you ask that zombie me, "Are you conscious?" it would say, "Yes, and here's why," it would give you reasons. I don't see how I can be sure that I'm not that zombie.
0:32:46 DC: I think, to be fair, you've put your finger on the weakest spot for the zombie hypothesis and for ideas that come from it. In my first book, The Conscious Mind, which came out about 20 years ago, I had a whole chapter on this that I called the paradox of phenomenal judgment. It basically stems from the fact that my zombie twin in that universe next door is going around doing exactly the same things that I'm doing and saying the same things that I'm saying, and even writing a word-for-word identical book called The Conscious Mind arguing that consciousness is irreducible to physical processes. And I'd say, well... I mean, a lot of strange things go on in possible worlds, we shouldn't take them too seriously.
0:33:31 DC: But I'd say that, yeah, in the zombie universe, the right view is what philosophers call eliminativism, that there is no such thing as consciousness. The zombie is, in fact, making a mistake. And I think there is a respectable program about consciousness in our world that says we're basically in the situation of the zombie. And lately, just over the last two or three years, there's been a bit of an upsurge of people really thinking seriously about this view, which has come to be known as illusionism, the idea that consciousness is some kind of internal introspective illusion.
0:34:06 DC: After all, think about what's going on with the zombie. The zombie thinks it has special properties of consciousness, but it doesn't. All is dark inside. So then the illusionist says, "That's actually our situation." [laughter] It seems to us that we have all these special properties, those qualia, those sensory experiences, but we don't. All is, in a way, dark inside for us as well. But there's just a very strong introspective mechanism that makes us think we have these special properties. That's illusionism.
0:34:33 DC: Now, most people find it impossible to believe that consciousness is an illusion in that way. On the other hand, the view does have the advantage of predicting that you would find it impossible [laughter] to believe, if it's a good enough mechanism that makes you focus on this. So actually, lately, I've been thinking about this a lot. I wrote an article called The Meta-Problem of Consciousness, it's just come out in the Journal of Consciousness Studies. The hard problem of consciousness is, why are we conscious, how do physical processes give rise to consciousness. The meta-problem of consciousness is, why do we think we're conscious, and why do we think there is a problem of consciousness. And the great thing about the meta-problem is, remember the hard... The easy problems were about behavior, the hard problems about experience? Well, the meta-problem is a problem ultimately about behavior.
0:35:20 DC: It's about the things we say and things we do. Why do I go... Why do people go around writing books about this? Why do they say, I'm conscious, I'm feeling pain? Why do they say, "I have these properties that are hard to explain in functional terms?" That's a behavioral problem. That's an easy problem. Maybe ultimately, there'll be a mechanistic explanation of that. And that would, of course, be potential grist for the illusionist's mill. Once you have the mechanisms to explain why we say all these things in physical terms, you could then try and turn that around with an explanation of... You can then call that solution to the meta-problem an explanation of the illusion of consciousness. Some people will still find it unbelievable. But again, the view predicts that.
0:36:01 SC: And if I wanted to know why I feel puzzled by the hard problem of consciousness, is that the meta-meta-problem of consciousness?
0:36:07 DC: Oh, I think that maybe that's still the meta-problem.
0:36:10 SC: Okay.
[laughter]
0:36:10 DC: Yeah. Why you find consciousness puzzling is certainly one central aspect of the meta-problem. There are all these things that we seem to feel and say, "My red could be your green. I can imagine zombies. Consciousness seems non-physical," and those are all behaviors. Explain those behaviors, and maybe you've explained at least the higher-order judgments about consciousness. Now, my own view is that even that wouldn't add up to an explanation of consciousness. But I think, at the very least, understanding those mechanisms might tell us something very, very interesting about the basis of consciousness. So, I've been recommending this as a research program, a neutral research program, for everyone. Philosophers, scientists and others...
0:36:52 SC: Neutral in the sense it's not presuming any conclusion about what the answer will be.
0:36:55 DC: Exactly. You needn't be a materialist. You needn't be a dualist. You needn't be an illusionist. You needn't be... This is just basically an empirical research program. Here are some facts about human behavior. Let's try and explain them. Furthermore, philosophers, psychologists, neuroscientists, AI researchers could all, in principle, get in on this. And I think there's gradually a building... There's already gonna be a target article, a symposium in the Journal of Consciousness Studies with a whole bunch of people from all those fields getting in on it. So I'm hoping this, at least, turns out to be a productive way to come at the question. Of course, it won't be neutral forever. Eventually, we'll have some stuff, and then some results, and some mechanisms. And then the argument will continue to rage between people who think the whole thing's an illusion and people who think it's real.
0:37:40 SC: We should say, though, that aside from eliminativism and illusionism, which are fairly sort of hardcore on one side, or forms of dualism, which could be on the other side, there is this kind of emergent position that one can take. This is the one I wanna take in The Big Picture and so forth, which is physicalist and materialist at the bottom, but doesn't say that therefore things like consciousness and our subjective experiences don't exist or are illusions. They're a higher order of phenomena like tables and chairs. They're categories that we invent to help us organize our experience of the world.
0:38:14 DC: Yeah. My view is that emergence is sometimes used as kind of a magic word to make us feel good about things that we don't understand. [laughter] "How do you get from this to this?" "Oh, it's emergent." But what really do you mean by emergent? I think I wrote an article on emergence where I distinguished weak emergence from strong emergence. Weak emergence is basically the kind you get from low-level structure and dynamics explaining higher-level structure and dynamics of behavior of a complex system: traffic flows in a city, the dynamics of a hurricane. You get all kinds of strange and surprising and cool phenomena emerging at the higher level. But still, ultimately, once you understand the low-level mechanisms well enough, the high-level ones just follow transparently. It's just low-level structure giving you high-level structure by following certain simple low-level rules.
0:39:10 SC: You could put it on a computer and simulate it.
0:39:11 DC: Exactly. But when it comes to consciousness, it looks like... Well, when it comes to the easy problems of consciousness, those may well turn out to be emergent in just this way. They may turn out to be low-level structural functional mechanisms that produce these reports and these behaviors and lead to systems sometimes being awake. And no one would be surprised if these were weakly emergent in that way. But none of that seems to add up to an explanation of subjective experience, which just looks like something fundamentally new. This is... Philosophers sometimes talk about emergence in a different way, as strong emergence, which actually involves something fundamentally new emerging via new fundamental laws.
0:39:48 DC: Maybe there's a fundamental law that's saying... It says, "When you get this information being integrated, then you get consciousness." I think consciousness may be emergent in that sense, but that's not a sense that ought to help the materialist. I think if you want consciousness to be emergent in a sense that helps the materialist, you have to go through weak emergence. And that's ultimately going to require reducing the hard problem to an easy problem. So, I think everyone has to make hard choices here. I don't wanna let you off the hook of just saying, "Oh, it's all ultimately gonna be the brain and a bunch of emergence."
0:40:18 DC: There's a respectable materialist research program here, but it has to involve ultimately taking on the hard problem. All you're gonna get out of physics is ultimately more and more structure and dynamics and functioning and so on. So, for that to turn into an explanation of consciousness, you need to find some way to deflate what needs explaining in the case of consciousness, ultimately turning the hard problem into an easy problem, to a matter of behavior and a matter of functioning. And maybe say, that extra thing that seems to need explaining, that's an illusion. And people like Dan Dennett, whom I respect greatly, have tried to do this for years, for decades. That's been his research program. At the end of the day, most people look at what Dennett's come up with and they say, "Nope, not good enough. [laughter] You haven't explained consciousness." If you can do better, then great.
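[Editor's note: a toy version, in Python, of the traffic-flow example of weak emergence from a few exchanges back. This is the standard "rule 184" cellular automaton, not anything specific to Chalmers' or Carroll's views: the only rule is that a car advances if the cell ahead is free, yet at high densities stop-and-go jams appear and drift backwards, a higher-level pattern no individual rule mentions.]

```python
import random

def step(road):
    """road is a list of 0/1 cells on a ring; 1 = car. Each car advances iff the next cell is empty."""
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i] == 1:
            if road[(i + 1) % n] == 0:
                new[(i + 1) % n] = 1   # road ahead is clear: move forward
            else:
                new[i] = 1             # blocked: stay put (this is where jams live)
    return new

random.seed(0)
road = [1 if random.random() < 0.6 else 0 for _ in range(60)]
for _ in range(30):
    print("".join("#" if c else "." for c in road))  # watch clusters of '#' persist and drift
    road = step(road)
```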
0:41:03 SC: Whereas, so... To move more in the direction of what you're positively advocating for, at least... I mean, you've always been very careful to positively advocate for not that much. 'Cause this is, as you say, a hard problem. We don't know the answers yet. We don't need to move forward by insisting, this must be the right answer. So you've been open-minded, but you're at least open-minded about this property dualism that you talked about, and that one version of that leads us into panpsychism. So can you explain these two concepts?
0:41:33 DC: Yes, well, I'd say that I've explored a number of different positive views on consciousness. What I haven't done is committed to any of them. I see various different interesting possibilities, each of which has big problems, has big attractions, but also big problems to overcome. So I've tried to explore some of those one at a time. One of the possibilities is panpsychism, the idea that consciousness goes right down to the bottom of the natural order. Panpsychism. "Pan" means all, "psych" means mind, so it's basically saying everything has a mind. Taken literally, it would imply that people have minds, particles have minds, but also tables and numbers have minds.
0:42:16 SC: Sorry, do we have to say have minds? Or can we just get away with saying something like have mental properties as well as physical ones?
0:42:22 DC: Yeah, if that makes you feel better.
0:42:23 SC: It might make me feel a little bit better, yeah.
0:42:25 DC: Have experiences. We can say there's something it's like to be them.
0:42:29 SC: Well, I don't know. I mean, do we want to say an electron has experiences?
0:42:32 DC: Well, I think panpsychism, taken literally, has that consequence. By the way, most panpsychists don't say that tables or rocks or numbers have minds, but typically their biggest commitment is that fundamental physical entities have minds.
0:42:45 SC: Okay.
0:42:46 DC: So, if you want, there's a weaker view. You might wanna say there's not quite something it's like to be an electron: an electron doesn't have experiences, it merely has some proto-version of experience, some predecessor of experience. Maybe electrons are protoconscious. Then there's a view called panprotopsychism, which could maybe seem a little bit less insane to you. I mean, the trouble, of course... One of the troubles with panpsychism is it seems very counter-intuitive, because we don't naturally think that electrons have consciousness, and there's not a whole lot of direct evidence in favor of it. On the other hand, you might say, there's also not a whole lot of direct evidence against it. It's not like we've got any experimental evidence that electrons are not conscious.
0:43:28 SC: Well, rather than harp on that, let me just try to figure out what it would mean for electrons to have minds or experiences or consciousness. It certainly can't mean another quantum number in the physical sense, right? We can't have happy electrons and sad electrons. That would change much of particle physics in bad ways. So, is it the... Is it some kind of epiphenomenalism? Does the happiness or sadness, if we wanna call it that, just go along with the electron? What determines what the electron is feeling?
0:44:02 DC: The way I think... The best option for a panpsychist here is, yeah, you don't need a whole bunch of extra new laws of physics for the consciousness at the basis. Rather, it's consciousness that's fundamentally playing the causal role for the physics that we know. I mean, it's a point that's often been made about physics. And it's fundamentally... The science of physics is fundamentally structural or mathematical. Everything is basically explained by how it relates to other things. Maybe quantum mechanics gets messy and everything else in contemporary physics gets even messier. So, let's just start with classical physics that characterizes particles, their positions in space and time, with some mass, with some forces that operate on them. Then what is mass in classical physics? Well, it's this thing which is subject to the laws of gravitation and the laws of motion.
0:44:55 DC: And that is involved in forces in a certain way... Nothing here tells us what mass is in itself. Rather, it explains mass by the way that particles with mass interact with other particles with mass.
0:45:06 SC: What its role is, yeah.
0:45:07 DC: So it's all a giant structure. And physics does a great job of characterizing this structure. And then you wanna... That raises the question, well, what is... What is the intrinsic nature of mass? Well, one thing someone might say is, "It doesn't need to have an intrinsic nature, it's just a giant relational web," and that's a respectable view; some people think it doesn't make sense, other people think it makes sense. But here's another possibility...
0:45:31 SC: Structural realism.
0:45:32 DC: Structural realism is what it gets called in the contemporary philosophy of science, and ontological structural realism says, "That's all there is in the world, a giant web of relations."
0:45:43 SC: Right, okay.
0:45:44 DC: But there's another possibility, which people sometimes call epistemological structural realism: what physics tells us is the structure, but there may be some intrinsic natures underlying the structure. And as far as I can tell, that's a respectable possibility as well, that mass does have an intrinsic nature, that when two things with mass interact, they've got some intrinsic properties that govern that interaction. And, of course, the panpsychist idea is to say, maybe that intrinsic property is consciousness or experience, or maybe proto-experience.
0:46:16 SC: Or mind enthusiasts, yeah.
0:46:17 DC: Yeah. And so mind lies at the bottom, at this bottom level, serving as the intrinsic properties that underlie physical structure. If that's the role it plays, we don't suddenly need to revise physics. The structure of physics can stay exactly as it was. We're just gonna have some intrinsic properties that ground that structure. Then you might say, "Well, now, how is mind making a difference?" Well, it's not like it's making a difference by suddenly having new laws in the picture for minds, rather it's making a difference by being the thing that grounds the physical web. Any time one particle's mass interacts with another's, when two particles, say, attract each other by gravitational force, on this picture it's ultimately gonna be their mental properties doing the work.
0:47:01 SC: Okay, so you... We're not saying... Again, in this picture, which may or may not be right, but we're not saying that the mental properties affect the physical behavior of the electrons. So a physicist, I know some personally, might worry that this isn't saying anything at all, because still everything the electrons do is just governed by the laws of physics, 'cause these mental properties don't affect it. But you're saying that's just the wrong way to ask the question. The kinds of things that are being explained by this positing of a mental character underlying everything are not the behavior of the electrons, but something deeper and something that kind of flowers once you get complex organisms that we recognize as conscious.
0:47:40 DC: And does the experience affect the behavior? In one sense, yes, in another sense, no. I mean, it's certainly true this is not gonna be so exciting for a current physicist, in that all the current physics can stay the same, physics with the experience underneath it or without it. I think that's a good thing.
0:47:56 SC: We have all the excitement we need.
0:47:57 DC: Yeah, if we had to revise physics too, that would be a... It would give rise to all kinds of extra crazy complexities. That said, this is more of an interpretation of current physics and of what's going on in the world underneath current physics, and it's ultimately saying that what is doing the work in physics at the bottom level is these intrinsic properties of mind or consciousness. The fundamental laws, which we think of as laws connecting mass and mass or mass and motion, or whatever, are ultimately gonna be laws connecting little bits of experience in this structure. From the outside, all we see is the structure, and we give it a mathematical description, and we call that the laws of physics, and it's great.
0:48:40 DC: But in reality, what's underlying it... We're used to the idea that what underlies a physical theory may involve more than what we actually see in experimental results. On this hypothesis, what underlies it in reality is a whole bunch of minds or experiences pushing and pulling each other. Is this wildly speculative? Of course, it is. But is it ruled out by anything we know? Well, I think not. So, in a speculative vein, I think it's at least a philosophical view to take seriously.
0:49:07 SC: And it must be tempting to look toward quantum mechanics for a place to implement these kinds of ideas.
0:49:12 DC: Yeah, quantum mechanics is, of course, a magnet for anyone who wants to find a place for crazy properties of the mind to interact with the physical world, because quantum mechanics is so ill-understood, and it does have suggestive properties that may seem to connect to observation, or the mind. I would actually not say that combining quantum mechanics and panpsychism is the most promising route. There are people who connect quantum mechanics and panpsychism: with the right degree of quantum mechanical holism, somehow, you could see how all those individual experiences might add up to a big experience. Lately, though, I've actually been thinking about quantum mechanics in the context of a different kind of view, which is more a kind of dualism, property dualism, with properties of consciousness distinct from properties in physics, but somehow interacting with them.
0:50:11 DC: If you're not gonna be a panpsychist and say consciousness is present at the bottom level of physics, then consciousness has to be somehow... The property of consciousness has to be separate from those other ones, space, time, mass, charge, and that raises the question now, how does it interact? Either you say it doesn't: it's epiphenomenal, it does nothing, which is kind of weird, consciousness having no effect at all in the physical world. Or you say it has an effect on the physical world. And then the question is, how on earth do you reconcile that with physics, which doesn't seem on the face of it to have any room for consciousness to play that role?
0:50:46 DC: And there is of course this one, I would say, age-old idea. It can't be an age-old idea because quantum mechanics has only been around for a century or so. But this one old idea that maybe there's at least one kind of fairly traditional interpretation of quantum mechanics, where minds could play a role in quantum mechanics, mainly... Namely via the process of observation, which collapses the quantum wave function. Of course, it's very controversial, but it is a very traditional picture of quantum mechanics. So there's two kinds of dynamics of the quantum wave function. There's Schrödinger evolution, the normal thing, and there's something weird which happens on measurement. And standard quantum mechanics says, "Make a measurement, the wave function collapses." And that's a different thing from Schrödinger evolution.
0:51:36 DC: Now of course, this immediately raises a million questions like, "What on earth is measurement, and why should that get any special treatment?" That's the quantum measurement problem. Many people run a mile at that point, saying, "Oh, I don't want minds to play a role in physics. Let's try something else." And they find themselves in Everett-style many-worlds quantum mechanics, or Bohm-style hidden-variables quantum mechanics, or GRW-style collapse quantum mechanics, none of which gives minds a role. And I think all those programs are great and very interesting. I'm not against them, but I'm also interested in a possibility which may have been overlooked, which is trying to make rigorous sense of a more face-value interpretation of quantum mechanics, where there is something special that takes place upon measurement.
0:52:25 DC: Now, for your average physicist, it just seems very strange to treat measurement as fundamental, 'cause that would involve treating the mind as fundamental, and that's not something that everyone wants to do. If, on the other hand, you're inclined to think there's already reason to believe the mind involves something fundamental, and that consciousness is somehow a fundamental element in nature, then that reason to reject the view will not be a good reason to reject the view. And the question for me is just, "Can we actually make rigorous mathematical sense of the idea that once consciousness comes into the picture, the wave function collapses?"
0:53:02 SC: Is it fair to associate this view with something like idealism where you're putting mind as the first thing that creates reality?
0:53:10 DC: Maybe there's an idealist version of this, but I would actually think of it as a version of property dualism, that is the quantum wave function is real. It's got an existence. It has nothing to do with the mind. The universe has an objective wave function just as it might on, say, an Everett-style view. It's rather there's this aspect of the dynamics of the wave function, which is affected by the mind. And under certain circumstances, physical systems will produce consciousness. Under certain circumstances, that consciousness will collapse the quantum wave function. So it's actually... Descartes thought that the body affects the mind, and the mind affects the body. That was classic interactionist dualism. Think of this as an updated version of Descartes in a property dualist framework. You've got the quantum wave function. You've got some dynamics by which the wave function affects consciousness. You've got some laws.
0:54:03 DC: It might be, say, something like Tononi's integrated information theory, that says when the wave function has enough integrated information, then you get a bit of consciousness. And then you need some other bit of dynamics by which consciousness can affect the wave function. I was working on this with Kelvin McQueen, a former student of mine who's now in Philosophy and Physics at Chapman University in California.
0:54:29 DC: And the idea we started working with was, there's something special about consciousness, or maybe about the physical correlates of consciousness, so that it resists quantum superposition. Most properties in the world, mass and charge and so on, can evolve into quantum superpositions. But maybe there are some special properties that resist quantum superposition. Maybe they go into superposition for a moment, but then they always collapse back, or maybe the moment they're about to superpose, they pick up a determinate state. And the thought was, say consciousness is like that: consciousness never enters a superposition. The moment brain processes would be such that they would produce a superposition of consciousness, they somehow collapse into a definite state. And you might see that as an effect of consciousness on the physical processes in the brain, which could in principle give consciousness an effect in the physical world. It's a weird and speculative picture, of course, but anyone's theory of consciousness is weird and speculative. For me, the question is...
0:55:34 SC: It's picking up old ideas from people like Wigner...
0:55:36 DC: Absolutely.
0:55:36 SC: And they've dropped out of favor now, but you wanna re-examine them?
0:55:40 DC: Absolutely. So, Wigner's 1961 "Remarks on the Mind-Body Question" is probably the locus classicus for this. People think they find the idea, or hints of the idea at least, in von Neumann. And later, in the 1970s, this got associated with The Dancing Wu Li Masters and so on. At which point, physicists started running a mile from this idea.
0:56:00 SC: Lost some respectability, yeah.
0:56:01 DC: And I think it has been used in some unfortunate ways, but I just wanna examine this idea, see if we can get it on the table as one of the many alternative interpretations of quantum mechanics, which has upsides and downsides. For me, the question is ultimately, "Can you give it a good coherent mathematical dynamics that works and is consistent with all of our predictions?" If that could be done, then we can take it seriously. Now, I should say that the version Kelvin and I started with does have one rather serious problem with the so-called quantum Zeno effect.
0:56:36 SC: Okay, yeah.
0:56:37 DC: Roughly, the quantum Zeno effect says if you've got some quantities that are constantly being measured, they're always measured, so they never enter into superpositions, then they never change. So if you constantly measure the position of a particle, it'll never move.
0:56:52 SC: I can see where this would be a problem. Yes.
0:56:54 DC: If consciousness is such that it never enters into a superposition, it's at least as if consciousness is always being measured, which means that consciousness can never change. So for example, if you start out with an early universe with no consciousness, then consciousness will never get a chance to come into existence. The moment there's a little glimmer of consciousness, it's gonna snap back. Only in one tiny, little, low-amplitude part of the wave function will there be consciousness, and with probability one, it will snap back to no consciousness. So consciousness can never evolve. Furthermore, you can never wake up from a nap. [laughter] If you're unconscious, there'll be little branches that develop consciousness, but with overwhelming probability they'll snap back to unconsciousness.
0:57:38 SC: That sounds like a good world. Now, I like it...
0:57:39 DC: Yeah.
0:57:40 SC: The never waking up from the nap world.
0:57:41 DC: Naps go on forever. It was a small, small problem for the initial, simplest version of the theory, which we're now trying to work into a negative-result paper called Zeno Goes To Copenhagen.
0:57:54 SC: Okay.
0:57:55 DC: Basically, the Zeno effect is a problem for a whole class of nearby interpretations. But then the question is, "Is there a version of this you can make work, that won't suffer from this Zeno problem?" We've been playing around with probabilistic versions and versions where consciousness superposes for a while and collapses back. And I'd say we haven't exactly solved the problem yet, but I think there's at least an interesting class of interpretations here worth taking seriously if you are inclined to take consciousness seriously. And after all, quantum mechanics is enough of a mess, that...
0:58:29 SC: It's worth trying, yes.
0:58:29 DC: It's not like there's any interpretation that is free of problems. So, if there's something here that, A, gives you a perfectly adequate quantum mechanics and, B, allows a role for consciousness in the physical world, that would at least be reason to take the view seriously.
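To make the Zeno worry concrete, here is a minimal numerical sketch of the quantum Zeno effect described above. It is not the Chalmers-McQueen model; the two-level system, the simple flipping Hamiltonian, and the equally spaced projective measurements are assumptions chosen purely for illustration.

```python
# A toy illustration (not the Chalmers-McQueen model): a two-level system
# whose free evolution flips it from |0> to |1>, interrupted by projective
# measurements. Frequent measurement "freezes" the state: the Zeno effect.
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
omega = 1.0                      # drive strength (arbitrary units)
H = omega * sigma_x              # Hamiltonian that tries to flip the state
T = np.pi / (2 * omega)          # free evolution over time T fully flips |0> -> |1>

def evolve(state, t):
    """Apply the unitary exp(-iHt) to a state vector."""
    eigvals, eigvecs = np.linalg.eigh(H)
    U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
    return U @ state

def survival_probability(n_measurements):
    """Probability of still finding the system in |0> at time T, given
    n equally spaced projective measurements (n = 0 means free evolution)."""
    state = np.array([1.0 + 0j, 0.0 + 0j])        # start definitely in |0>
    if n_measurements == 0:
        return abs(evolve(state, T)[0]) ** 2
    p_survive = 1.0
    for _ in range(n_measurements):
        state = evolve(state, T / n_measurements)
        p_survive *= abs(state[0]) ** 2            # probability this measurement finds |0>
        state = np.array([1.0 + 0j, 0.0 + 0j])     # and collapses the state back to |0>
    return p_survive

for n in [0, 1, 10, 100, 1000]:
    print(f"{n:5d} measurements -> P(still in |0>) = {survival_probability(n):.4f}")
# The probability climbs from 0.0 (the state flips freely) toward 1.0 as the
# measurements become more frequent: a quantity that is "always measured",
# like a never-superposing consciousness, effectively cannot change.
```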
0:58:44 SC: And if you are a property dualist, if you believe in mental properties as well as physical properties of stuff, does that have implications for questions like artificial intelligence or consciousness on a computer?
0:58:56 DC: You know, I think it doesn't have immediate implications. Some people think that if you're a property dualist, you should think that computers won't be conscious. To me, that's kind of odd. I mean, we're biological systems with brains, and somehow we're conscious. So why should silicon be any worse off than, say, brains? That almost seems like a weirdly materialist idea, to privilege things made of DNA over things made of silicon. Why should that make a difference? I think dualism is just neutral on the question. The kind of property dualism I like, a fairly scientific, naturalistic property dualism with fundamental laws of nature, I think it's gonna come down to: are the properties of matter that get connected to consciousness in our theory of consciousness gonna be more like specific biological properties, or more like computational or informational properties?
0:59:52 DC: If it's something like Tononi's integrated information that gives you consciousness, well, it looks like that could be present just as much in a silicon system as in a biological system. So in the work I've done, at least, I've tried to argue that it's really the computational or informational properties of matter that are relevant to consciousness. If that's the case, then an AI system will be able to do the job just as well. And in principle, we could even replace the biological neurons one at a time with silicon, prosthetic neurons. If they work well enough, we'll be left with a functionally identical system. And I would actually argue that that functionally identical system is gonna retain the same consciousness throughout. The alternative would be to say that consciousness...
1:00:39 SC: Fades away.
1:00:39 DC: Fades away or disappears. But I think that gives rise to all kinds of problems.
1:00:43 SC: Right. Before, I was asking whether I'm sure I'm not a zombie; this leads us to ask whether we're sure that we're not in a computer simulation, right?
1:00:53 DC: This is one of the great problems of philosophy. Descartes said, "How do we know that there's an external world? How do you know you're not being fooled by an evil demon who's merely producing experiences in you, as of an external reality, where all of this is just being generated by the demon?" Now, the simulation idea is a wonderful 21st-century version of Descartes, as illustrated by movies like The Matrix. I'm still a fan of the depiction of this in The Matrix, which really, I think, got quite a lot of this right. How do you know you're not living in a computer simulation? That is, the computer simulation is playing the role of the evil demon, running a model of a world, feeding your brain experiences, so that you think you're in an ordinary physical reality, but in fact you're in this computer-generated reality. And the people who wrote movies like The Matrix say, "If this is the case, then you're basically living a life of illusion and deception, and none of it is real."
1:02:00 SC: It's not the real world.
1:02:04 DC: Which is exactly what Descartes thought about the evil demon hypothesis. So I've been thinking about this lately, and I do take the simulation idea seriously. I think there's nothing we know with certainty that rules out the idea we're in a computer simulation. The philosopher Nick Bostrom has actually given a statistical argument that we should take it very seriously, that we are in a simulation. Roughly, the idea is that any sufficiently intelligent population will have the capacity to create lots and lots of computer simulations of whole populations. So, as long as they go ahead and use their abilities and create computer simulations, then most... The majority of beings in the universe will be simulated beings and not unsimulated beings.
1:02:52 DC: And then the thought is, we'll just do the math, do the statistics. 99.9% of beings in the universe are simulated, including a whole bunch who are just like me. What are the odds that I'm one of the lucky ones at the ground level, the 0.1%? So you might say, "I should be 99.9% confident that I am a simulated being." Now, you can raise issues with the reasoning here and there... One question is, "Would a simulated being be conscious?" Some people might say, "No. They are not conscious. They'll be zombies." If so, the fact that I'm conscious shows that I'm not in a simulation.
1:03:28 SC: You think you're conscious. Go ahead.
1:03:30 DC: But at least, that's not gonna help me, 'cause I'm on record as thinking that a simulated system, an AI system, could be just as conscious as a biological system. So I think all those beings in computer simulations may well be conscious. Maybe it's only 50% likely they're conscious; even then, that still should give a big dose of probability to the hypothesis that I'm in a simulation. So that's not gonna help. So I can't rule out that we're in a simulation. Where I wanna get off the boat, though, is this idea that simulations are illusions, that simulations aren't real. We could be in a world which is a simulation.
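To make the counting explicit, here is a toy version of the Bostrom-style reasoning Chalmers just sketched, including his 50% hedge about whether simulated beings are conscious. All of the specific numbers (population sizes, number of simulations run) are invented for illustration.

```python
# Toy Bostrom-style counting. Every number here is a made-up assumption.
base_population = 1e10       # conscious beings in the one unsimulated, ground-level civilization
simulations_run = 1_000      # ancestor-style simulations that civilization eventually runs
sim_population = 1e10        # beings per simulation, comparable to the base population
p_sim_conscious = 0.5        # Chalmers' hedge: maybe only a 50% chance simulated beings are conscious

conscious_sim_beings = simulations_run * sim_population * p_sim_conscious
total_conscious = conscious_sim_beings + base_population

# Reasoning as a "randomly chosen" conscious being, my credence that I am
# simulated is just the fraction of conscious beings who are simulated.
p_i_am_simulated = conscious_sim_beings / total_conscious
print(f"P(I am simulated) ~ {p_i_am_simulated:.3f}")   # ~0.998 with these numbers
```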
1:04:12 DC: But if so, that doesn't mean that there are no tables and chairs in the world around us, there's no matter, it's all an illusion. I think what we should say instead is, "Yeah, we're in a world with tables and chairs and matter, and if we discover we're in a simulation, we'll have made a surprising discovery about what tables and chairs are made of. They are ultimately made of, say, information and computational processes at the next level down, which may ultimately be realized in processes in the next universe up." And importantly, it's all still real. It's not, as Descartes thought, a world where nothing around you exists. Yes, the world around me exists, it just has a surprising nature. And this actually connects nicely to the ideas about structural realism we were talking about before, that really, physics tells you about the structure of the world.
1:05:06 DC: It doesn't tell you, ultimately, about what that structure is made up of. If we're in a simulation, it turns out the structure is exactly... The mathematical structure of our reality may be exactly as physics says, it's just that it's all implemented or realized on a computer in the next universe up. So yeah, the structure of physics is real, so the electrons are still real. They're just ultimately electrons made of bits, made of whatever is fundamental in the next universe up.
1:05:32 SC: You said, "If we ever find out," is there any way we would ever find out?
1:05:36 DC: It depends how well the simulation is made, doesn't it?
[laughter]
1:05:39 DC: If it's like the one in The Matrix, where they gave us some potential ways out, like the red pill...
1:05:45 SC: That's very buggy code, yeah.
1:05:46 DC: And yeah, that's a dumb way to build a simulation, if you ask me, unless you want people to escape. If it's a perfect simulation, we may never find out. And because of that, I think if we're not in a simulation, we may never be able to prove that we're not in a simulation, because in a perfect simulation, any evidence, any proof we could get, could be simulated by giving beings the same experiences. So I think we'll never know for sure the negative claim, that we're not in a simulation. It could be that if we are in a simulation, we could get some very decisive evidence for that. If the simulators suddenly move the Moon around in the sky and write big messages, and we look at our genetic code and find messages written in there saying, "Hey, losers, you're in a simulation," then we'd take that to be pretty strong evidence.
1:06:37 SC: There is the pre-existing hypothesis of God having done all this. It's not that different, God doing these things from our programmers doing these things.
1:06:41 DC: Yeah, yeah, indeed. Yeah. Exactly, and people... And the question of evidence arises for God as well. I mean, we could, in principle, get decisive evidence that there is a God. It's very hard to get decisive evidence that there's not a God.
1:06:55 SC: And you think that it's realistic to think that we can at least imagine simulating... Doing simulations that are so good that a multiplicity of intelligent, conscious creatures exist there in our simulations?
1:07:10 DC: I think so, in principle. It's really just a matter of computer power. Once we know the laws of physics well enough, presumably we could set up a universe with boundary conditions which are allowable boundary conditions for a universe like ours, and set up the differential equation simulators. Maybe there will need to be a quantum computer, especially to get the quantum mechanics right. But then, I don't see why in principle you couldn't get... Maybe it'd be hard to get a universe as complex as our universe.
1:07:43 SC: Oh, yeah, it would have to be less. Every...
1:07:45 DC: Every universe is finite.
[overlapping conversation]
1:07:45 SC: Has to be less, right?
1:07:46 DC: Yeah. If our universe is finite, and it has, say, one billion units of complexity, then we can't simulate something with one billion units of complexity, but maybe something with one million units of complexity, just so as not to tax the universe too much. And of course, if we are in the enormous universe that we seem to be in, with enormous resources, then it'll probably have the resources to simulate some pretty complicated universes without too much trouble, in principle.
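A back-of-the-envelope version of the complexity budget Chalmers describes, which also bears on the later point that most simulated universes should be simple. The budget fraction and the number of simulations per universe are assumed parameters; only the billion-to-million ratio is taken from the example above.

```python
# Rough sketch of nested simulation budgets. The parameters are assumptions,
# apart from the billion-to-million ratio taken from the example above.
budget_fraction = 1e-3        # a billion-unit universe can afford million-unit simulations
sims_per_universe = 10        # assumed number of simulations each universe runs

complexity = 1e9              # "units of complexity" of the top-level universe
level, universes_at_level = 0, 1
while complexity >= 1.0:
    print(f"level {level}: {universes_at_level:>6,} universes, ~{complexity:.0e} units each")
    complexity *= budget_fraction
    universes_at_level *= sims_per_universe
    level += 1
# With these numbers the hierarchy bottoms out after a few levels, and the
# overwhelming majority of universes in the ensemble are the very simplest ones.
```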
1:08:17 SC: These kinds of scenarios, whether it's Descartes or simulations or God creating the universe, can we apply some kind of anthropic reasoning here and ask, if this were the case, would that have some implications for what the universe would look like? And then ask whether it does or does not look like that? I've certainly said, "If you wanna argue that the fine-tuning of certain fundamental physics parameters, which allows for the existence of life, is evidence for the existence of God, then you should be consistent in that argument and point out that there are other things about our universe that look wildly unlike what you would expect if the point of the universe was for our kind of life to exist." Can we say similar things about the purported simulators?
1:09:07 DC: Yeah, you might worry that most simulations are gonna have certain properties and that our universe does or doesn't have those properties. I mean, one thing about our universe is it's enormous, it seems to be enormously big.
1:09:19 SC: It's very big.
[laughter]
1:09:20 DC: It's so complicated. Why would you waste your time...
1:09:21 SC: Yeah. It does seem...
1:09:23 DC: If you're gonna be making simulations, you might think most simulations are gonna be a whole lot smaller and more local, for many purposes. Why would the simulators be generating a universe quite as big and as complex as ours? And of course, whatever you do make in a universe like ours is gonna involve a whole bunch of people making simulations of universes which are simpler in turn, and those making universes which are simpler in turn, more and more of those ever-simpler universes. So you might think that actually, most universes are gonna be very, very simple.
1:09:55 SC: Exactly, that's what I would think.
1:09:56 DC: Yeah, I think I might have heard Sean Carroll making it...
1:09:58 SC: I think I made this point. Yes, right.
1:10:01 DC: In the past, at some point. And then, so yeah, the very fact that we're in a complicated universe is gonna be at least some reason to disfavor the simulation hypothesis.
1:10:14 SC: Now, of course... So to go... There's a little bit of back and forth here. One could respond to that by saying, "Well, we don't know the universe is big. We see galaxies in the sky, but really, we see photons that have recently reached us. We don't see the galaxies themselves. Maybe there's nothing more than a few million light years away, and it's all just set up to make us think that." But then, we're in some sort of skeptical nightmare and really, do we have to do anything at all?
1:10:36 DC: Yeah, semi-skepticism. Maybe it's just like, everything out to the Solar System. We've actually sent probes out to the...
1:10:42 SC: We think we have.
1:10:43 DC: To other planets and so on. So... It's gotta be hard just to simulate New York City, because there's so many people leaving New York City all the time and coming back, and the news from the outside... At the very least, you're gonna have to have a pretty detailed simulation of the rest of the Earth to keep all of the newspapers...
1:11:05 SC: Right. It'll be incredible.
1:11:05 DC: And TV and everything going. But once you move outside the Earth, it gets at least a bit easier. Maybe it's like... Maybe the Moon is just a... I mean, you're at least gonna need a fairly detailed simulation of the Moon, because of the role it plays in our lives. But maybe beyond a certain point, you can run a very cheap simulation, maybe beyond Pluto. We've just got a very cheap simulation of the rest of the universe. Every now and then, maybe the simulators say, "Hah, they've just made a new discovery. They've discovered a new form of...
1:11:36 SC: Planet 9.
1:11:37 DC: "A new way to monitor stuff. They're looking a little bit closer at these exoplanets." And maybe they scramble, and they come up with some new data for us. But maybe it's gonna turn out to be much easier to run a cheap simulation.
1:11:49 SC: But doesn't even saying these words make you think that maybe this is not the world we live in?
1:11:54 DC: That this is not the world...
1:11:55 SC: That these are all kind of arguments against living in a simulation. Our universe does look way bigger than it needs to be. You can imagine things the simulators could do, but why are they going to all this trouble?
[laughter]
1:12:06 DC: I think it's quite possible, though... I don't know whether our universe is infinite, but it's quite possible that the base universe is infinite. And maybe in the next universe up, they have infinite resources, and it turns out that simulating a large finite universe is no problem at all. In fact, they can simulate infinite universes, because in an infinite universe, you'll have the resources to simulate an infinite number of infinite universes without problems. As long as we don't fall into the trap of thinking that the next universe up has to be just like ours, then I think all bets are off.
1:12:37 SC: And are there ethical implications for this, or are there implications of this idea for how we should think about ethics? Number one, should we think about ethics in our world differently if we're simulations, and should we worry about making simulations with conscious creatures and treating them well?
1:12:52 DC: I think that the ethics of our world needn't be affected drastically by this, any more than it has to be affected drastically by the theistic hypothesis that we're in a universe with a God. We're still conscious beings living our lives: treat other people well, make sure they have, by and large, positive conscious experiences rather than negative ones. Maybe we need to think about the impact of our actions on the people in the next universe up, but we don't really know what that impact is. You might say self-interest comes into this. After all, on religious hypotheses people modify their behavior greatly so that they can live forever; if we want to live on, we might want to make sure the simulators keep us around.
1:13:36 SC: I mean, it does open the possibility of an afterlife, right, if we're in a simulation?
1:13:40 DC: Yeah. Just as the simulation hypothesis has a very naturalistic version of God, it could have a naturalistic version of an afterlife. We already see in TV shows like Black Mirror that people come to the end of their lives and upload into a simulation and keep going that way.
1:13:57 SC: Have you read Iain Banks's Culture novels?
1:14:00 DC: I haven't, actually. I should.
1:14:00 SC: Oh, okay. You certainly should, because part of it... It's a small part except for one novel where it plays a major role, but there's this idea that, yeah, they do simulations all the time. There are consciousnesses and agents in the simulations. And therefore, the intergalactic organization has passed laws. You can't end the simulations, 'cause that would be genocide. But then there's certain very, very bad civilizations that actually turn them into hells where they torture the AIs that didn't behave in the right way.
1:14:31 DC: So I think the ethical questions absolutely get a grip once we start thinking about creating our own simulations. And I'm sure any number of people are gonna be tempted just once we've got the capacity to start up a copy of SIM universe running on our iPhone, and maybe get a thousand copies up and running and see what happens overnight, run the entire history of this universe, gather the statistics. Could be useful for scientific purposes, could be useful for marketing purposes, predicting what products are gonna do well. Could just be useful for...
1:15:00 SC: I didn't even go there, but yes.
1:15:03 DC: Oh, for sure. You wanna test your different products and see which iPhone is gonna sell the best? For sure. But yeah, I think the ethical issues really are enormous. You're gonna be creating universes with billions of billions, trillions maybe, infinitely many people, each of whom is living a life as a conscious being. And if they are lives of suffering, then we've done something horrific. If they are lives of pleasure, then maybe we've done something good. But yeah, people talk about, "Did God, in creating us, create the best of all possible worlds? Why is there so much evil?"
1:15:40 SC: Evil and suffering.
1:15:44 DC: Well, maybe God created many different universes, all the ones that had a net balance of positive experiences over negative experiences, so that by creating all those worlds there was somehow a net positive in creating them. Well, we're gonna face questions like this too. Maybe if you want to create experimental worlds where there's suffering, you can only do that when there's a net balance of positive experiences in your simulations to make up for it. Even then, someone's gonna say, "Well, you could have created an even better world with a bit less suffering and a bit more pleasure or fulfillment or satisfaction. Surely you did something immoral by creating this world." I think we're gonna have to face all those questions, and they're not gonna have easy answers.
1:16:29 SC: Okay, speaking of easy answers. Two last questions for you, David. One is you're working on a book. I know it's well in advance, but we can prime our audience to be ready. Do you wanna say anything about what the book will be?
1:16:39 DC: Sure, yeah. The focus of the book is very much this set of issues we've been talking about for the last few minutes, about simulations and about virtual reality. My working title, which probably won't be the final title, is Reality 2.0: Artificial Worlds and the Great Problems of Philosophy. It's all about exploring philosophical problems, like our knowledge of the external world and the nature of reality, through the idea of artificial or virtual reality. So, are we in a matrix? That's one of them. But also, I really want to develop my own philosophical line, which is that virtual reality, simulated reality, is a genuine kind of reality. It's not a fake or a second-class reality.
1:17:22 SC: Right.
1:17:23 DC: It's a perfectly respectable way for a world to be. And I think this is relevant not just for way-out speculative science fiction scenarios, like our living in a simulation, but for very practical scenarios, like the virtual reality technology that's being developed today, things like the Oculus Rift, where people enter into virtual worlds and start spending more and more of their time there. It's easy to imagine that 50 or 100 years in the future, we're all gonna be spending a lot of our time in these virtual worlds, and the question's gonna arise: can you actually lead a meaningful life there?
1:17:58 SC: Yeah. And is this... Is this... I'm not even sure this is a meaningful question. Is this aimed at a popular audience, or professional philosophers, or both?
1:18:04 DC: I would say both, but I'm absolutely trying to make it as accessible as possible so anyone can read this book.
1:18:11 SC: You have tenure so you can do that.
1:18:13 DC: Yeah. I hope they won't revoke it. But yeah, it's meant to be both introducing a whole lot of philosophical ideas and putting forward a substantive philosophical view of my own. Roughly, this view is that virtual reality is a first-class reality across all of these domains. I think it has bearing on the great philosophical problems: how do we know there's an external world, Descartes' problem. It has bearing on the question of the relationship between mind and body, and on these ethical questions about what makes a meaningful and valuable existence or life. So I think it's actually a way to come at some of the deepest philosophical problems: just as thinking about artificial intelligence turns out to shed light on many questions about the human mind, thinking about artificial realities turns out to shed light on all kinds of questions about the actual reality we find ourselves in. So that's what I'm trying to do.
1:19:09 SC: And the last question is Tom Stoppard. He's one of my favorite living playwrights, playwrights, period. He wrote a play called The Hard Problem. How does it feel to have a phrase you coined become the title of a Tom Stoppard play? [laughter]
1:19:22 DC: Oh, I was very pleased. I think, actually, it was my friend Dan Dennett who sent me the email; he read this in an article and said, "Hey, there's a Tom Stoppard play coming up, called The Hard Problem." I said, "Great. Has this got something to do with consciousness?" And it turns out it does, and I've actually gotten to know Tom Stoppard a little as a result of this process. It had its American opening in Philadelphia, maybe about a year ago, at the Wilma Theater, and I went down there and did an event with Tom, where the two of us were talking on stage about the hard problem of consciousness.
1:20:00 DC: The play is very interesting. I'm not convinced it's actually about consciousness at its root; it's about a much broader set of questions, some of which involve consciousness, some of which involve God, some of which involve value. And in fact, in this discussion, it sort of emerged that the problem that was really generating things for Tom was not the problem of consciousness, but the problem of value. How can you have the experience of some things being better than others, of life being meaningful, of sorrow versus happiness? Of course, that's very deeply connected to consciousness.
1:20:37 SC: Right.
1:20:37 DC: But I suggested to him that, really, his hard problem is the problem of value. And he agreed. He said, "Yes, thank you. I think maybe that's what's really moving me, the hard problem of value."
1:20:45 SC: It's another famously hard problem, but okay, it's not the hard problem. But they're all mixed up. I mean, you gotta write the best play, right. That's... Even when I'm an advisor for Hollywood movies about science, the goal is to make the best movie, not to be the best science documentary.
1:21:00 DC: But the play is about to open, actually, here in New York at the Lincoln Center.
1:21:05 SC: Oh, alright.
1:21:05 DC: So I have another round of all of this coming up. Actually, the last... I don't wanna give away any spoilers about the play. But at some point, they mention the main character goes to work with a professor at NYU, whose ideas are said to be indemonstrable. And various people have asked me whether that's me. I'm actually fairly confident that it's not. I think it's my colleague, Tom Nagel.
1:21:27 SC: You think it's Tom Nagel? Okay. Right.
1:21:27 DC: Tom Nagel, who wrote What Is It Like to Be a Bat?
1:21:29 SC: Yes.
1:21:30 DC: And he's the professor at NYU.
1:21:33 SC: But simply the label of being a philosopher whose ideas have been called indemonstrable doesn't really narrow things down too much.
1:21:39 DC: Oh, no, I was at the... Yeah, I was talking about this with my colleague, Ned Block. There's three of us at NYU who work on the philosophy of consciousness and we decided that the philosopher in question surely couldn't be either of us, because our ideas are demonstrable.
1:21:52 SC: Absolutely right. Alright, David Chalmers, thanks so much for being on the podcast.
1:21:55 DC: Thanks. It's been a pleasure.
[music]
While I’m more along the lines of Chalmers here, purely because consciousness is so subjectively (ha) persuasive, this raises the question of whether certain “why” questions (“Why the universe?”, “Why consciousness?,” “Why these universal laws?”, etc.) run into a wall at a certain point. It seems like humans are hardwired for goal-seeking (explanation-seeking?) behavior and that these questions will inevitably arise, however they may just not have any suitable answers.
For my part, though, I would suggest that consciousness (and its various offshoots, including what religious folks might attribute to spiritual forces) does not require that we posit any additional substance per se. These are emergent from “underlying” physical/biological systems but are of such complexity that they behave as if they are independent from said systems. In the aggregate, cultures arise from the collection of consciousnesses, and again at a certain scale of complexity take on qualities AS IF they are independent of those underlying consciousnesses. This is not to say that any of these things are “illusions”; they are as real as anything else that requires consideration, and in fact reductionist attempts to diminish these phenomena fall short.
I think it is important that Sean pointed out that any proposed panpsychist property of fundamental particles cannot be an additional quantum number. Nor can it be a new particle / quantum field. Being a layman myself, I have to say that this is the most important message I found in Sean Carroll’s talks and books, one I had not found anywhere else in about 20 years of consuming popular science books and magazines: that the laws of physics underlying our everyday life are completely known.
In this talk it is from minute 29 to minute 36 https://youtu.be/bcqd3Q7X_1A
and here are the slides from the talk about turning Feynman diagrams by 90 degrees / how we find new stuff in particle accelerators and the boundaries of that search / where we have already looked / the “known unknowns”, and what it means (or rather does not mean) for our everyday life https://de.slideshare.net/mobile/seanmcarroll/purpose-and-the-universe slides 16 to 21
I can’t remember in which of his talks Sean mentions the problems that arise from additional quantum numbers. Would have been nice if he elaborated a bit on that in the podcast, but I guess I’ll have to rewatch some more of the older stuff.
Anyway: this podcast goes right to my favorites list: it is as clear and as compressed as it can get! Thanks a lot! What are my conclusions? Either stay with Sean Carroll (and Sabine Hossenfelder, iirc) on the weak emergence / illusion side and feel uneasy about it, or move to the strong emergence / hard problem camp, which seems conceptually locked into a position where it can never get any evidence (as the zombie problem shows). It really is harsh.
I have tried to think about the dualist approach some more, and it makes no sense whatsoever. If the psyche lived on a separate plane, and all interactions actually happened on that psychic level, then the whole materialist world would be epiphenomenal. The only way to explore that psychic world would be via my own psyche / consciousness. Those who claim to have gained psychic insights (either via meditation or psychoactive drugs, or by claiming to be a medium for ghosts or channeling or whatever) would have to provide useful insights, preferably non-local stuff (assuming the psyche was able to partake in non-local phenomena): like telling the future, looking instantly at far-away places (not even mentioning telepathy or telekinesis), etc. I’m pretty sure that the predictive power of those proclaimed “insights” is zero. Even the predictive power or usefulness of pure thought without experimental evidence is already limited, as shown in philosophy and theoretical physics (hence repeatable experiments and controlled conditions are needed to distinguish between models), which imho is also shown by the fruitless discussion about the hard problem and the zombies in this podcast. If the psyche were more fundamental than the body / material world, I’d expect there to be telepathy etc.; even if, as proposed, it were the part doing the collapsing of the wave function / choosing between possibilities / multiverses, I’d expect to get more sixes while rolling the dice, etc.
If either the body or the psyche is “epiphenomenal” then the psyche is the better candidate, and if all is just one big pattern that can’t be separated, then there really is no difference between the zombie and the human, and then it all comes down to flavor or belief.
I always wonder why people back away from physics as soon as it gets a bit more complex than we can currently understand. It is a bit like looking at a closeup of a holographic surface or a piece of compiled computer code: you can barely see anything useful or “understandable” at first sight, but that does not mean there is nothing there, just that we don’t quite get it yet, as it clearly is a matter of perspective and perception.
I’d really love to hear Sean Carroll talk to Joscha Bach about this topic, another brilliant researcher with a sparkling mind who approaches these questions from a more general computational corner. That would be super interesting.
Thanks a lot for this great podcast, all the best!
The Simulation Hypothesis seems to be built on assumptions that either aren’t true or are unlikely to be true. Also, people who push this hypothesis often seem to argue that the simulation is basically identical to the physical world, which would mean it’s not a simulation at all. Just because something is created on another plane, by another civilization, or is calculated by a computer doesn’t mean it’s not a part of the physical universe. I propose that if the simulation is identical to the physical world then it is not a simulation at all.
Also, Sean, I’d appreciate it if you unblocked me on Twitter (@DraftHobbyist). I’m not completely sure why you ever blocked me, easily over a year ago now. It could have something to do with me being a Men’s Rights Activist if you possibly jumped to the conclusion that I was a misogynist because of that or something. Not sure. Anyways, it’s been disappointing being blocked by you for what was seemingly for no reason.
The panpsychist version of phenomenal properties playing the role of dynamics is a bit hard to get into, but it is quite simple if you overcome the current conceptualizing of what “atoms” and “molecules” are (basically, that they are “zombies”, so all physicists have a “behaviorist” conception of matter/energy!).
Let’s say you were a super-conscious alien who believed that people were unconscious zombies/automata (like we do for fundamental physical things) and observed their behaviour during summer. You would see, as an economic law of demand, that water consumption increases. The alien could provide a structural explanation: demand increases due to dehydration, which excites some neurons in those automata, and this compels them to head to the store and exchange a piece of paper with an amount printed on it. She would think that no consciousness or experience is needed for this and that structure is all there is. Yet it is the experience of thirst that makes this causal regularity possible. A person with damage in the part of the brain that abolishes thirst (or pain or anything) would need to be reminded or even forced to drink water.
This is similar to the physical laws. It is basically an “economics” of much, much simpler conscious or proto-conscious agents, so the EM forces could be a proto-quale that compels these proto-agents to enact these regularities (a bit exotic for us, but ultimately I think even our senses can be traced back to these simple qualia). So the phenomenality, or protophenomenality, is always the thing that plays the causal role, and this is what makes possible the mathematical structure observed in physical experiments. A bit hard to swallow that the universe is basically alive in some sense, but after thinking about it for a while it looks a bit less crazy. Of course this view has another major problem, which is the “combination” problem (how many minds make one mind), another serious problem altogether.
Sean Carroll is one of my favorite voices in science, and Chalmers one of my favorite voices in philosophy of mind/consciousness.
I still don’t think anything new was gleaned from this meeting of the minds, and it did seem like Sean didn’t confront or contradict much of what Chalmers relayed, which was pretty much just a general synopsis of his viewpoint.
I was hoping to see more areas where the two of them could build a consensus, at least a consensus on what is truly unknown, perplexing, and unique when it comes to consciousness.
Naturalism in its current state, limited by the volume of data produced by orthodox science, is truly an unsatisfactory explanation for consciousness, at least as the boundary of naturalism with regard to consciousness was decently explained by Chalmers.
I believe what Chalmers is saying is that if there is only monism/materialism/physics as a basis for consciousness, then the model should explain why there appears to be the illusion of Cartesian dualism.
Subjective experience almost by definition is the experience of another world, the outside world experienced by an inside world, one that not only thinks but experiences visceral and complex physical sensations, which are pure, raw experiences.
From the standpoint of subjective experience there is only dualism, and no matter what, dualism seems to creep up in new forms with each new model of consciousness that comes out.
Thanks to the both of you!
Full disclosure: I do not stand to profit from this in any way, shape or form, but, somehow, I feel obligated to report my almost complete agreement with Sean’s line of reasoning from the start. Having read “The Big Picture”, and even in the act of reading it, I remember agreeing with him somewhere around 99.9% of the time (somehow 100% sounds a bit sycophantic). And so, there it is — I tend to agree with him on quite a lot!
Do not know if he would agree with the following statements, but…
It seems to me that the “hard problem” of consciousness is not as hard as may seem. No doubt it is not an easy problem, by any means, but I feel perfectly satisfied with the emergent phenomena explanation. It just seems perfectly reasonable to me that the effect of having 100 billion neurons and 10 trillion connections between them (I hope I have the numbers right!) would inevitably lead to something like consciousness, given certain evolutionary pressures… OK, I can understand the complaint that “emergent properties” are a bit “magical,” as the mechanism doesn’t specify precisely the method of incorporation, but neither do we know precisely how a bunch of atoms becomes a stool, or a table, or any other abstract object that requires understanding of context to conjure up. Yet, no one complains that understanding abstractions like stools and tables is a “hard problem.” Somehow, we are quite willing to accept that!
And one more point.
How do we feel about the consciousness of animals? Are they fully conscious, or are they in some zombie state? We can’t really ask them… And yet, I have a feeling that some of them are as conscious as can be, as conscious as you and I, although maybe not to the same degree; dogs, for instance, have the cognitive abilities of a three-year-old, or a four-year-old. But they are still conscious, are they not? And if they are, then consciousness is not some mysterious meta-state, but indeed an emergent phenomenon of a sufficiently complex collection of neurons arranged in some specific configuration by evolutionary processes over millions of years.
I got lost when you agreed that beings without “hard” consciousness would be zombies indistinguishable from beings with it. The fact that you called them zombies and defined hard consciousness as imparting self awareness means that beings without it would behave quite differently. Why then did you agree that telling them apart from “fully conscious” beings would be so hard? (They’re zombies!)
I think these simple questions need to be answered before we start getting into “consciousness is not a result of the physical brain and its chemistry”, i.e. the hard problem.
1. When parts of the brain are damaged or removed, specific aspects of consciousness/awareness/perception change or are even gone.
2. Alcohol and/or drugs that alter the brain chemistry alter consciousness/awareness/perception.
Listening to the discussion about panpsychism and whether experience changes how fundamental particles or atoms behave, I thought of Lee Smolin’s statement in his recent book The Singular Universe and the Reality of Time:
“Repeatable laws only arise on intermediate scales by coarse graining, which forgets information that makes events unique and allows them to be modeled as simple classes which come in vast numbers of instances. Hence the Newtonian paradigm works only on intermediate scales.”
My take: while there is “less” experience incarnate in, say, a hydrogen atom, the modicum that is present does indeed have a causal role to play. But statistically speaking, large numbers of atoms behave in a very law-like way. As increases in experiential capacity correlate with organizational complexity, it becomes more and more difficult to predict the behavior of those physical entities (living organisms, human beings, etc.).
https://matthewsegall.files.wordpress.com/2018/11/physics-of-the-world-soul-third-edition-1.pdf
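A minimal statistical sketch of the coarse-graining point in the comment above: individual constituents can be highly variable, yet averages over large numbers of them become tightly law-like. The exponential distribution and the sample sizes are arbitrary choices; nothing here bears on experience itself.

```python
# Law-like behavior from coarse graining: relative fluctuations of an average
# over N variable constituents shrink roughly as 1/sqrt(N). Distribution and
# sizes are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
trials = 1_000
for n in [1, 100, 10_000]:
    samples = rng.exponential(scale=1.0, size=(trials, n))  # n "atoms" per trial
    averages = samples.mean(axis=1)                         # the coarse-grained quantity
    rel_fluctuation = averages.std() / averages.mean()
    print(f"N = {n:>6,}: relative fluctuation of the average ~ {rel_fluctuation:.3f}")
# Individually each draw is highly variable (relative fluctuation ~ 1), but the
# averaged description becomes effectively deterministic as N grows.
```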
I have to believe that if there were an exact copy of Sean Carroll, Sean #2 would be just as conscious as Sean #1. How could it be otherwise?
There are other moral/ethical and sociological/psychological reasons why consciousness and its connection to the physical brain and electro-chemistry stay elusive and in the realm of the “Hard Problem”:
1. The (obvious and unquestionable) ethical reason is that we do not, nor should we, do active, invasive, and sometimes damaging experiments on human brains to explore the connections between consciousness and the brain. We can only make indirect observations, by observing the effects of brain damage or hearing the experiences of persons who have taken drugs or alcohol of their own volition. Until recently this was the case; however, recent non-invasive techniques like functional MRI are changing this rapidly, which may render the connection of consciousness/mind to brain less mysterious.
2. The sociological/psychological reason is that historically most people WANT there to be some extra/magical stuff that they feel should be part of consciousness/mind. The absence of this extra/magical explanation, they feel, somehow diminishes consciousness/mind (especially their own). This is sort of like Daniel Dennett’s description of “belief in belief”. As humanity gets more scientifically learned, more and more people (sometimes begrudgingly) are starting to shed that need for the magical/extra.
You command a truly impressive breadth of knowledge, Sean. Good interview. I wonder whether you might clarify why you find it plausible that you are a zombie? You started making an argument based on the notion that a real zombie-you would likewise say they are conscious. But I am not sure why you find that thought experiment more persuasive than your own personal experience of actually having an inner life, a conscious mind, not just in a possible world but in the real world.
Btw, if you haven’t read it, I think you would enjoy Stephen Stich’s classic “From Folk Psychology to Cognitive Science,” a wonderfully lucid takedown of the common notion of belief. Belief is very hard to pin down, but also very hard to do without.
We are carbon-based life forms, and at some point, carbon became aware that it was carbon. All the other elements in our human bodies became aware of their own existence. Materialism holds that there is nothing more to us than these elements; regardless of how they are organized in our brains, they are still material, and insufficient to account for the existence of consciousness. The hard problem boils down to: how do we account for consciousness when the only source of our existence is these entities? For scientists who see panpsychism as a possible answer, the harder problem is: what is the origin of panpsychism? Might it be possible that consciousness exists because the universe would have no meaning without those who observe and marvel?
Sean, your podcast just keeps getting better. I enjoyed the chat Chalmers had with Sam Harris on the Waking Up podcast some time ago and this one was even better.
I would dearly love to see you get David Deutsch on some time. I would up my donations just to hear that. I think you guys could have a truly fascinating conversation.
Keep up the great work!
Just a P.S. to my former post. An analogy to my last point is the game of catch. It requires a pitcher and a catcher. The interesting thing about the analogy is that the catcher is, alternately, the pitcher and the pitcher is the catcher. The game has no meaning without both.
I’m not sure why you didn’t more fully consider the possibility that consciousness is a pattern of relationships among firing neurons.
You did discuss integrated information theory.
But, for some unstated reason, seemed to dismiss it as unsatisfying.
Chalmers dismisses “emergence”.
But life is different from non-life as the result of the emergence of a new pattern of causation (See Rosen, Life Itself)
So why not view consciousness as the result of a new (strange loop) pattern of causal relationships among mental events — thoughts causing thoughts about thoughts.
(Necessarily subjective and internal because my “thoughts” can’t cause your thoughts to be about my thoughts.)
I would argue that Hofstadter had it right (in “I am a strange loop”)
Seems to me that definitions are rather important here. If we define consciousness in terms of subjective experience, e.g. qualia, we’re talking about the processing of sensory information (what we are conscious *of*), whether external (exteroception) or internal (interoception). This implies sensory apparatus and processing apparatus, e.g. a nervous system. Given the sophistication and complexity of our own nervous systems, we might speculate that apparent richness of consciousness corresponds to the sophistication and complexity of the system that supports it – and looking around the animal kingdom, this does seem to be the case, as far as we can judge.
The problem I have with the philosophical zombie is that it seems to presuppose some kind of dualism. If consciousness is a function of certain brain processes, as neuroscience suggests, then – even if it is epiphenomenal – to remove consciousness means removing the brain processes that produce or support it, which would make the PZ measurably different from a conscious person. However, if consciousness is not a function of brain processes, then we should not expect it to be influenced by the manipulation of brain processes – yet it is.
One issue here is the apparent reification of consciousness. We tend to abstract the concept of ‘what it is like’ to be a particular system (human, bat, etc.), give it the name ‘consciousness’, and then treat it as something independent in its own right, perhaps because we see ourselves as ‘conscious entities’ that are associated with, or inhabit, our bodies, rather than being functions of, or processes in, our bodies.
Tononi’s ‘Integrated Information’ theory and its ‘phi’ measure seem promising, but you can produce very high phi values with relatively simple arrays of logic gates. As Scott Aaronson says, ” it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.” This suggests that a high degree of integrated information is necessary but not sufficient; and looking at the brain, consciousness only occurs if certain kinds of information are integrated in certain ways.
Ultimately, it seems that the subjective-objective dichotomy must make an objective explanation of subjective experience problematic. We could, potentially, elucidate all the objective correlates of consciousness, and produce a mapping so that we can infer detailed experience from brain activity; or even identify all the necessary functions or processes so that we can construct a conscious system… But if we knew all the necessary parts, and we knew the functions of those parts, and how they interacted, would that ‘explain’ subjective experience? Inevitably, the subjective experience of other individuals or systems is only available via simile & analogy, i.e. via mappings to common (shared or similar) objective experiences, with the assumption that common objective experiences produce similar subjective experiences – not always the case. We can examine someone’s physiology or brain function to discover that they have red-green colour-blindness, and we can produce images that simulate red-green colour-blindness, and so we can tell they have different experiences, but we still can’t know what it is like for them.