It's become increasingly clear that the Turing Test -- determining whether human interlocutors can tell whether a conversation is being carried out by a human or a machine -- is not a good way to think about consciousness. Modern LLMs can mimic human conversation with extraordinary verisimilitude, but most people would not judge them to be conscious. What would it take? Is it even possible for a computer program to achieve consciousness, or must consciousness be fundamentally "meat-based"? Philosopher Ned Block has long argued that consciousness involves something more than simply the "functional" aspects of inputs and outputs. "Can Only Meat Machines Be Conscious?"
Support Mindscape on Patreon.
Ned Block received his Ph.D. in philosophy from Harvard University. He is currently Silver Professor in the Department of Philosophy at New York University, with secondary appointments in Psychology and Neural Science. He is also co-director of the Center for Mind, Brain, and Consciousness. He is Past President of the Society for Philosophy and Psychology and was elected a Fellow of the American Academy of Arts & Sciences.
Click to Show Episode Transcript
0:00:01.1 Sean Carroll: Hello everyone, and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. One of the kinds of questions that I get a lot, whether in Ask Me Anything episodes or just more generally, is, can you tell me something that you've changed your mind about? Of course I've changed my mind about lots of things. I try not to be too dogmatic or stuck, but I actually struggle to answer that question because there are some trivial examples where I changed my mind because we got better data or I got better information. Right? I've changed my mind about the acceleration of the universe. I used to think it was decelerating. In 1998, we found it's accelerating, and I instantly changed my mind about that. Right now, I think the best candidate for what's causing that acceleration is the cosmological constant. I'm open to changing my mind if we get better data that says it's something dynamical rather than a constant vacuum energy. But for vaguer questions, philosophical, cultural, political, aesthetic questions, I have trouble pinpointing when I changed my mind, even though I certainly did, because my process tends to be fairly gradual and I've forgotten what it was that started me down the road of changing my mind.
0:01:08.3 SC: So maybe I did change my mind. But to me, what my opinions are now seem like they must have always been that way, even though I know that's not true. I mention all this because I think I might be in the process of changing my mind about something. Not something like super dramatic about my feelings about how the world works, but something nevertheless pretty important: the question of what does it mean to be conscious? In other words, what are the requirements for something to be conscious? There's a point of view towards consciousness, which we all know is a complicated subject. We've talked about it here on the podcast many times. There's plenty we don't know about consciousness. And mostly I stick to saying we don't have to change the laws of physics in order to explain consciousness. I still think that, okay? I'm not going to... Not in any danger of changing my mind about that anytime soon, although eventually, who knows? But, okay, even if the world is made of physical stuff doing physical things, when does that stuff doing some processes count as conscious? Right? There's a point of view that really puts the emphasis on kind of an input output mechanism.
0:02:13.7 SC: This would go back to the Turing Test with Alan Turing. Right? Turing suggested that if you had a computer program that could have a conversation with a human and trick them into thinking that it was human, then it should count as conscious. What really matters, in other words, is the output of the computation going on. And this grew into a view called computational functionalism. What matters is the function of the various things going on and how they are embodied in some kind of computation. You get input as a conscious creature, both from words and from vision or whatever, whatever sensory input you have. You do a computation on it and there's an output. This is saying that what doesn't matter is the way in which the computation gets done. A pocket calculator is different than an abacus, but they both do the same calculation, even though they're made of different things doing different processes. So I would have not too long ago more or less signed onto a point of view like this, but I've been pushed away from that point of view, and not because anything new has truly happened, although some people have been articulating the alternative in ways that I understand better.
0:03:27.2 SC: It's just that I'm learning more about what people say about consciousness and appreciating more what the different subtleties are. So Anil Seth, former Mindscape guest, and also today's guest, Ned Block, have been pushing a point of view that says computational functionalism is not up to the task. It's not just about what you compute, it's how you compute it that really matters. Now, Anil wants to go further than that and say that it's not just how you compute it, but what is physically doing the computing. And he wants to put an emphasis on biology. I think that Ned, well, he'll speak for himself, but I think that he's a little bit more open minded about what the substance is that is doing the computing. But he does think that there's more to consciousness than simply computation. There are other processes that matter as well. And so he has written an article recently, I'll link to it in the show notes, but it's simply entitled 'Can Only Meat Machines Be Conscious?' Meat is of course a casual term for what we're made of as biological organisms. And despite the title, it's not mostly about substrate dependence; he's open to the idea that if different chemical reactions did the same sort of physiology, that would still count as conscious, even though there's different stuff doing the processes.
0:04:47.5 SC: But the sort of subconscious processing that is going on contributes, in his view, to our experiences, what he calls phenomenal consciousness. Ned is actually a super well respected philosopher in the field of consciousness. I quote him in The Big Picture mentioning his distinction between access consciousness, which is more or less what David Chalmers classifies as the easy problem, it's sort of your ability to access different pieces of information globally in your cognition, versus phenomenal consciousness, which is the feeling of experiencing something. And that is what is hard to explain. So Ned wants to argue that maybe... He's very open minded; like a good philosopher, he's just suggesting possibilities we should take seriously. Maybe these things that we think of as experiences of conscious states have something to do with the subconscious processes that are going on in our biological manifestation or instantiation. And maybe therefore you could build a computer program that was arbitrarily good at tricking you, at giving all of the output that you might expect a conscious creature to give you, and nevertheless it would not qualify as what we think of as conscious. And I'm actually becoming very open to this possibility.
0:06:16.3 SC: It's not in any sense a repudiation of physicalism. Anil Seth and Ned Block and I all agree the world is made of physical stuff, doing physical things, obeying the laws of physics. But how consciousness fits into that picture is still a controversial topic that we have a lot to learn about, and of course it's becoming super duper relevant because we're building computer programs that act a little bit conscious. At what point will we be ready to say that they truly are? I think we also all agree the point is not yet, but it might be coming. It might be coming sooner rather than later. Maybe those AIs are going to get the right to vote at some point, who knows? But this is where philosophy needs to get on the stick. Figure this out. Let us know what would really, truly qualify as conscious. We're not quite there yet, but hopefully the kind of framework that Ned Block lays out will help us get there. So let's go.
[music]
0:07:24.1 SC: Ned Block, welcome to the Mindscape Podcast.
0:07:26.3 Ned Block: Oh, thanks for having me on your podcast.
0:07:29.6 SC: I figured we could start very, very broad. You know, the audience is broad. They come with a lot of different levels of knowledge. So tell me, what is consciousness? [laughter]
0:07:42.3 NB: So I like to distinguish between a couple of different ways people use the term. The distinction I like most is between phenomenal consciousness and access consciousness. Phenomenal consciousness is the, you know, so called what it's like of experience. And, you know, the... Sometimes people say things like the redness of red. But the fundamental fact about phenomenal consciousness is no one can define it. You really kind of have to point to it. And that gives rise to a lot of misunderstanding in the area...
0:08:22.2 SC: I would guess.
0:08:22.2 NB: Where, you know, people think... Many people think, I don't know what you're talking about. Sometimes I like to explain it by talking about a famous conundrum, like the inverted spectrum. Maybe the things we both call red look to you the way things we both call green look to me. That look is the phenomenal consciousness. Then there's the famous Mary thought experiment. She's raised in a black and white room. She goes out of the room and sees blue for the first time, and she learns what it's like to see blue.
0:09:02.2 SC: Right.
0:09:03.5 NB: You know, they're all these thought experiments. Each of them has their own problems and... But I think they do something to explain what I'm talking about when I talk about phenomenal consciousness.
0:09:18.5 SC: And then there's access consciousness.
0:09:21.7 NB: Yeah. Then there's access consciousness, some kind of global availability of information. You know, I wrote a paper in the mid '90s making a big deal out of this distinction. And I had two opposite reviewers, one of whom said, you know, the access consciousness makes a lot of sense to me, but what is this phenomenal thing? I don't know what you're talking about. And then another reviewer said the opposite, as you might imagine. And, you know, I still get both of those responses. And then there's a third thing which in many people's mind is even more important, which is what is sometimes expressed by consciousness of. A conscious mental state is a state I'm conscious of myself as being in. And the idea is that you need another state, a state about the conscious state, to make it a conscious state. So the idea is that there's this thing, transitive consciousness, sometimes called consciousness of. And what it is for a state to be conscious is for there to be another state that amounts to consciousness of the first state. And that's a big view; many people hold this. You know, it descends from Armstrong and, before him, Locke. And many people feel like this is the main idea. So I think there are those three different strands to consciousness. I'm always talking about phenomenal consciousness.
0:11:04.0 SC: Phenomenal consciousness. Yeah. How... I mean, some people very casually will think of consciousness as being related to self awareness. I am conscious of being in a certain form. Is that...
0:11:15.9 NB: Yeah, that's the third...
0:11:18.8 SC: That's the third one.
0:11:18.8 NB: Thing that I just mentioned. Yeah.
0:11:20.3 SC: Okay, then I understand access consciousness less well than I thought I did because I thought that was the self awareness bit, so.
[laughter]
0:11:27.3 NB: Well, I think you're onto something there, which is that that self awareness kind of thing is a form of access consciousness. It's a pretty sophisticated form. You know, I think young infants are conscious and they don't have that, and animals are conscious and they don't have that. So I think you can be conscious without that self awareness.
0:11:51.6 SC: Okay, good. But just so I'm as clear as I can possibly be, let's do the access consciousness thing one more time. Is the word access doing what I think it's doing? It's what I have access to in my brain?
0:12:06.0 NB: Yeah. I put it more in terms of global... With regard to perceptual information. Global availability of that perceptual information. It's very linked to the global workspace kind of idea where when you're conscious of something, it's available to all your cognitive mechanisms. Decision making, thinking, betting, problem solving, reporting.
0:12:36.7 SC: Okay, good. Actually, that does help. And then the phenomenal consciousness, of course, this is where the sexy action is talking about how we're going to understand it.
0:12:47.9 NB: Yeah, yeah.
0:12:47.9 SC: Let's actually dig into the inverted spectrum a bit because I never talk about it here in the podcast. So what does that mean?
0:12:56.7 NB: Well, as I said, it is the hypothesis that things that we agree are red look to you the way things we agree are green look to me. And the idea is that things look different to different people. Now the... You know, to have a thoroughgoing version of it, you need to say something about the other colors. And the simplest form is a red green inversion that keeps blue and yellow the same. And then there are all kinds of technical issues about this, but, you know, they don't really matter. The key thing is that things might look different to different people. And I have to say that although this has a storied past where many Wittgensteinians thought it made no sense at all, it was pointed out in an article maybe 25 years ago by the Swiss philosopher whose name I'm forgetting that there is this phenomenon known as pseudonormal color vision, which may be an actual case of this. Now, to understand pseudonormal color vision, you need to know a couple of things. First of all, that there are three kinds of cones, the sort of short, medium and long wave cones. And the long and medium are mainly responsible for red and green. And there are pigments in the cones that are responsible for the signals that come out of them. The two pigments are chlorolabe and erythrolabe.
0:15:02.4 SC: Okay.
0:15:04.0 NB: And a very common, genetically caused form of red green colorblindness is one in which one of those pigments, I forget which one, chlorolabe, let's say, is in both cones instead of erythrolabe in one and chlorolabe in the other. And that's a genetic defect. And those people have trouble telling red from green. So there's another form of genetic red green color blindness, and that's where both cones have erythrolabe in them. So the first kind, they both have chlorolabe. The second kind, they both have erythrolabe in them. And it can be shown that if you had both genetic defects at once, then you would have the chlorolabe and erythrolabe reversed.
0:16:00.6 SC: Okay.
[laughter]
0:16:03.6 NB: And making certain assumptions about how the pigments are connected to the opponent processes in color vision, you can... If you make those assumptions, you can deduce that such people would have reversed red green experience. The assumptions are ones we have no way of testing, and there are many little technical issues connected with it. But, oh, I forgot to say that we can calculate that there are actual people, and not a small number either, who have both genetic defects, so called genetic defects, at once. So there probably are people in the population who have this. I mean, there are, there definitely are human beings with this pseudonormal color vision. And maybe they have genuinely red green inverted spectra.
0:17:10.0 SC: But we have not identified any and invited them to our philosophy conferences.
0:17:14.6 NB: No, because the thing about being pseudo normal is...
0:17:19.9 SC: You think you're normal.
0:17:19.9 NB: You're going to act pretty much like anybody else. You're not going to know you're pseudonormal. And you know, there will no doubt be many differences between their color vision and that of so called normal people. But the thing is, one important fact about color vision, it varies hugely from person to person and even from, you know, between genders, between ages. You know, for example, my color vision is much yellower than yours because I'm way older and the lens yellows. Except one of my lenses has cataract replacement and it isn't yellow.
[laughter]
0:17:58.0 SC: So you can tell the difference?
0:18:00.3 NB: Yeah.
0:18:01.5 SC: So, but I guess the philosophy issue that I don't reject, but I fret about is, when you say words like what you experience as red is what I experience as green, that sort of begs the question of whether there is an objective thing about what I experience.
0:18:23.8 NB: Yeah. There is a crucial emendation which is that we wouldn't... If this kind of thing is widespread, or even if it could be widespread, it would be kind of wrong to call the... What normal people have when they see red as the experience of red.
0:18:48.4 SC: Sure.
0:18:48.9 NB: Because the word then won't... The word will go with the external colors, not with the internal experience.
0:18:57.4 SC: Yeah.
0:19:00.3 NB: So the... What's objective here, the most obviously objective thing is the external colors. But I also think that the phenomenology is objective too. It's just that we don't know how to measure it.
0:19:15.1 SC: Yeah.
0:19:16.4 NB: So that's maybe what you're saying.
0:19:18.1 SC: It is. So, I mean, part of me... And again, I'm not tied to this in any sense. I'm eager to understand better, but part of me wants to say, look, there's a story that I can tell about photons and electrons and neurons, and that's a definite story. And you want to say that there's an additional story, which is what we're experiencing. And I'm like, hmm I don't know. Maybe.
0:19:41.1 NB: Yeah. Oh, well, okay. So that puts you in a certain category. You have somewhat illusionist leanings.
0:19:52.8 SC: I hate that word...
0:19:53.5 NB: Illusionists... Yeah, illusionists, the way the term is usually used is it refers to a view that, as Dennett put it, there are conscious properties or conscious states, but they're not what you think.
0:20:11.9 SC: Right.
[laughter]
0:20:15.3 NB: So, yeah, there are all kinds of views about consciousness, including illusionism. And yeah. So I don't know quite what to say about illusionism. It just seems to me to be plain from personal experience. I've actually had discussions with people about this. Many people are just puzzled by illusionism and they don't understand how somebody could be an illusionist. But maybe that's something wrong with us. Or maybe people's experiences are just different. Or maybe you're a zombie.
0:20:48.0 SC: Maybe. Yeah, we're going to get into that. But I think these are all on the table. Yes. So, in fact, actually that would be helpful right now.
0:20:54.3 NB: It's Martine Nida-Rümelin, Nida-Rümelin, who published the paper first on pseudonormal color vision.
0:21:04.3 SC: Good, okay.
0:21:04.3 NB: Sorry, I couldn't... I was blocked...
[overlapping conversation]
0:21:04.4 SC: Good to give her credit. So what you're saying, clearly about phenomenal consciousness does resonate with things that we have talked about on the podcast about Tom Nagel's idea about what it is like to be things. David Chalmers' idea of the hard problem. How do you put your own perspective in context of those folks? All of whom are at NYU. So you've cornered the market on these people.
0:21:27.8 NB: Exactly. Yeah. So I'm pretty much in agreement with them about the basic facts. I mean, you know, Tom has a very hard to understand and interestingly different metaphysics of it that I don't agree with. And actually, Dave's metaphysics, I don't agree with either. Dave is basically a dualist. I'm not a dualist. I'm a physicalist. So we agree on the phenomenon, that there really is consciousness. We're not illusionists. We agree that there's something it's like. Dave and I certainly, and I don't know about Tom, but I suspect Tom too, agree that there is a hard problem. It's really hard. It's different from what Dave calls the easy problems. So I'm on board with all of that stuff.
0:22:19.0 SC: And I guess, I don't want to go down a rabbit hole here. But the illusionist label. I think I am an illusionist, but I would never call myself that in public. And I argued with Dan Dennett about it. I think it's a terrible label. Like, I'm not an illusionist about tables and chairs. I think they exist. And I think that conscious experiences exist in exactly that way. Does that make me an illusionist?
0:22:42.7 NB: Yeah. I guess the... Illusionism is a hard... Is just as hard to define as many other things in the consciousness sphere. And you know, being an ism, it shares the vagueness of all isms.
0:23:05.3 SC: Sure.
0:23:05.3 NB: There will be different people with different... I should say, by the way, that illusionism is not Dennett's term. That term is due to Keith Frankish.
0:23:11.5 SC: Keith Frankish gets it. But Dennett agreed with it. He used it, right?
0:23:16.0 NB: I think he did. Yeah.
0:23:16.7 SC: He did. He accepted it. Yeah.
0:23:18.4 NB: But you know, he said in the... Something in the preface, I think, to his book from the '90s, I guess 'Consciousness Explained', that there was what he regarded as a question of tactics. Do you say there is no such thing as consciousness, or do you say there is such a thing, but it isn't what you think? You know, in his famous paper Quining Qualia, he says, yes, there's consciousness, but there's no qualia. And then he gives a definition of qualia that nobody believes.
0:23:56.1 SC: Yeah. [laughter]
0:24:00.4 NB: Yeah.
0:24:00.5 SC: It's a... Yeah, it's an endemic thing.
0:24:00.6 NB: Even the proponents of qualia don't believe that definition of qualia. And in fact, he quotes Sydney Shoemaker in the article saying, I don't believe in the... I'm an advocate of qualia and I don't believe your definition. I don't accept your definition of qualia.
0:24:16.4 SC: By the way, what is your definition of qualia?
0:24:20.1 NB: Well, I don't really have a definition. I think you can only point to it, so...
0:24:28.4 SC: But it's the experience, it's the, what it is like.
0:24:32.9 NB: It's the, what it's like, it's the phenomenal consciousness. Yeah.
0:24:34.0 SC: Good. And can these different kinds of consciousness... Let's stick with access and phenomenal consciousness. The sort of easy problem part and the hard problem part. Do they come apart? Could an organism or a being have phenomenal consciousness but not access consciousness or vice versa?
0:24:52.5 NB: Oh, yeah. In fact, I think that's an open empirical possibility about... For a lot of simple organisms that don't have much... Don't have very good cognition, but may have phenomenal consciousness. And then, you know, machines, we may have machines that have access consciousness, but no phenomenal consciousness. So that's a... Another real possibility. And there are limited versions of these things or possible limited versions in ordinary phenomena involving humans.
0:25:30.8 SC: Yeah. Okay, good. And the phenomenal consciousness is the target of the hard problem. And the hard problem is hard. Do you think we're making progress? Do you think that we're learning either in the philosophy side or the neuroscience side about phenomenal consciousness?
0:25:44.6 NB: I think on the neuro... Well, look, the most basic thing that we've learned, I think, is making the right distinctions, so...
0:25:55.3 SC: I'm a big fan. Yeah.
0:25:55.7 NB: However, I do think that the neuroscience side has made a little smidgen of progress, and I think it sometimes can be rather vague, whether it's the easy problems or the hard problem. You know, the thought I have is that as Pat Churchland once said, and I've held for a long time, is that the way probably to make progress on the hard problem is by focusing on the easy problems.
0:26:28.8 SC: Yeah.
0:26:29.8 NB: And if you get enough info on the easy problems, maybe some idea will happen with regard to the hard problem. But I think there's no doubt that if we are to solve the hard problem, it will take some real breakthrough.
0:26:45.9 SC: It's a very physicist way of thinking that we should do the easy things first and then maybe the hard things will take care of themselves.
[laughter]
0:26:51.8 NB: Yeah. Right. Yeah. [laughter] Yeah. So I'll give you an example of a... Something that makes a little tiny bit of progress. So my colleague Marisa Carrasco in the psych department at NYU discovered that attention changes... Slightly changes the way things look, makes them look higher in contrast. It makes a moving thing look like it's going fast, slightly faster. And the most significant one is it makes something look slightly bigger. And she and her colleagues found a neurological explanation of that, which is that neurons in the visual cortex have what are called receptive fields. So in early vision, the receptive fields are very small.
0:27:58.4 NB: In later vision, they're very big, but in early vision, the receptive fields are small and they're aimed at an area of space. A receptive field is the area of space that that neuron processes information from. And what happens when you attend to a certain area of space is that the receptive fields that surround it migrate to cover the point of attention. And then you have more receptive fields trained on that space than you did before. And that, by a hypothesis called the Labeled Line Hypothesis, leads to it looking slightly bigger. So that's an interesting... You know, it doesn't solve the hard problem, but it's a... And maybe it is just an easy problem, but it's about the way things look. It explains why they look the way they do. It's an odd phenomenon you wouldn't necessarily have predicted. And I think it's pretty cool actually. And the hope I have is that as we make progress, maybe we will set the stage for a real breakthrough.
0:29:21.8 SC: That's great because it leads right into my next query about what could an explanatory account of phenomenal consciousness possibly look like? Like, might it just end up being a better understanding of what some neurons are doing, or are you going to require something juicier than that?
0:29:43.7 NB: Well, I would regard the one I just mentioned as juicier than that because it's really about the way things look, and that is juicier than just finding out what some neurons do. It's juicier in that it has to do with phenomenal consciousness, the way things look. So, you know, I'm cheered by results like that. And, you know, in vision, there's lots and lots of really interesting results that have neuroscience explanations. You know, your colleagues, like Chaz Firestone, EJ Green, Ian Phillips, and Stephen Gross... Those people are all very involved in studying these things. So yeah, I'm hopeful for the future, although I won't live to see it.
[laughter]
0:30:47.8 SC: Well, so, but this is heartening to me. So you're happy to imagine that when we do get an explanation of phenomenal consciousness, it might take the form: when I'm experiencing what it is like to be something, here is what is happening in the brain?
0:31:08.8 NB: Well, here's the thing about solving the hard problem...
0:31:12.9 SC: Yeah.
0:31:14.0 NB: I don't think you can say in advance what the form of the explanation will be, but the form you mentioned is certainly a candidate.
0:31:23.7 SC: Okay. But that's all I'm getting at. I mean, I completely agree. We can't say what the solution is going to be. I'm just wondering what would be acceptable as a solution.
0:31:34.8 NB: Yeah, well, it's really hard to say without hearing the solution. You know, I'm sure, as you are well aware, in physics a lot of solutions to problems have raised more issues than they've solved, and more puzzling issues than they've solved, and maybe this will do the same.
[laughter]
0:31:55.6 SC: That's absolutely true. That's true. But in physics, you know, if I don't have a theory for a certain physical phenomenon, I kind of know what such theories look like, right? Like oh, there's some space of states, there's some dynamical equations. For the hard problem, I just don't know. I mean, maybe... I'm not saying it's not there.
0:32:14.6 NB: You know, we're just much more at sea on consciousness than we probably ever have been about physics.
0:32:25.6 SC: Yeah, no, that I completely...
0:32:27.4 NB: It's just much more... Everything is so puzzling.
0:32:31.1 SC: Right, which leads very nicely into my next query about, you've already mentioned this a little bit, but you're a physicalist so you're not tempted, at least at the moment by saying that the hard problem is so hard we need to expand our ontology of the world.
0:32:49.4 NB: I don't see that expanding your ontology is going to help. You know, you can be a dualist and it doesn't give you any... It just introduces some religion or mystery and doesn't solve the hard problem at all. And the same with panpsychism. You know, it just replaces the hard problem with the so called combination problem. You know, everything is a little bit conscious, but how the hell do you put them together into a conscious human?
0:33:18.5 SC: Yeah.
[laughter]
0:33:21.0 NB: I don't see any solution. I don't see any solution there. It just seems like, you know, I mean, you know, my colleagues who believe this stuff have reasons for believing it and you know, there are some really interesting arguments about that. But solving the hard problem, I don't think so.
0:33:38.8 SC: But you've talked about non-reductive physicalism as something that might come into play here. So for other reasons I was just thinking about reductionism today. In fact, I think... Yeah, I think that you mentioned Phil Anderson in your paper that we're going to talk about.
0:34:00.0 NB: Oh, do I? Okay.
0:34:00.4 SC: Phil Anderson... Maybe. I'm not... I might be mixing up different things, but Phil Anderson wrote this famous paper saying more is different and people never read the paper, they just quote the title and he's... In the paper, he's super affirmative about reductionism. He loves reductionism. He just doesn't think it's useful when you're... When you care about the higher levels. But you want to be anti-reductionist a little bit.
0:34:27.0 NB: Actually, I want to be reductionist.
0:34:29.6 SC: Okay, good.
0:34:31.0 NB: Of course it depends what you mean by reductionism.
0:34:33.2 SC: Of course.
0:34:35.4 NB: So you know the usual meaning of non-reductive physicalism or the one that most people have in mind is usually a form of functionalism. It's that the right descriptions of these phenomena are at a higher functionalist level where you specify the organization of a system and that allows for many different realizations of it.
0:35:04.2 SC: Yes.
0:35:05.4 NB: Implementations. That's not my version. My version is the meat centric version.
0:35:13.6 SC: [laughter] Good.
0:35:14.4 NB: Where I'm referring to this famous short story called 'They're Made Out of Meat'. Have you read that?
0:35:21.7 SC: No, I haven't.
0:35:23.5 NB: Oh, [laughter] it's very funny. I'll send you a link to it. It's...
0:35:29.5 SC: We'll put it in the show notes.
0:36:12.8 NB: Yeah, it's... I always do it in my undergraduate classes. So it's from the point of view of a group of machine... you know, silicon beings, machines made by other machines that were themselves made by other machines, and the ultimate origin is lost in history. And they go around the universe discovering other conscious beings, and then they discover us, and they say things like, they're made of meat. [laughter] And then one of them says, well, but how do they think? And the answer is, they just use meat.
[laughter]
0:36:12.8 SC: I have to read the story. Yeah.
0:36:13.8 NB: And then how do they communicate? They flap their meat. [laughter] And they think it's so awful that there are these meat conscious beings that they just better suppress the information and forget about them altogether.
0:36:32.8 SC: It does have its downsides being made of meat, I got to say, so.
0:36:36.3 NB: It does, yes, yes it does.
0:36:38.3 SC: But okay, you've introduced the word function or functionalism. So that is a perspective... It's a perspective you want to sort of push back against a little bit.
0:36:47.6 NB: Yeah, yeah. I think what's happened in current thinking about artificial intelligence is that a lot of very influential people are computational functionalists. They think that certain computations are necessary and sufficient for consciousness. And I have long been a doubter of this. In 1997 I published a paper called Biology Versus Computation in the Study of Consciousness. Actually, it wasn't a real paper, it was a BBS reply.
0:37:27.1 SC: Yep.
0:37:29.8 NB: So, but I've been pushing this line for quite a while, but the... You know, things are getting hot now and, you know, it really matters now. It used to be that it was kind of just philosophers. But now it's a lot of other people. And as you may know, there's a big issue of AI safety.
0:37:50.0 SC: Yes.
0:37:50.8 NB: Which is the extent to which machines have feelings and can be damaged and their welfare has to be taken into account. And Anthropic has now allowed its large language model, Claude, to opt out of a conversation if it's too unpleasant.
0:38:15.8 SC: Better safe than sorry, I guess. [laughter]
0:38:18.5 NB: Yeah, I think that's the idea.
0:38:21.1 SC: So the idea of... Well, sorry, is there a difference between... A distinction between just functionalism and computational functionalism?
0:38:29.9 NB: Yes. Yeah. Functionalism is a much broader doctrine that encompasses a lot of other kinds of functions, basically what philosophers have talked about under the heading of functional roles. They're basically a causal map of states and how they affect one another, that kind of thing. But the version that has really become important now is computational functionalism, because the question in people's minds is, do the computations that these AIs make, or some future AIs, determine conscious states? Or, as I think, is there some kind of biological necessary condition?
0:39:18.8 SC: Right, right. So the idea...
0:39:22.2 NB: I should say, when I say I think that, I mean I think that that is equally plausible. I don't mean I have really strong evidence for that.
0:39:31.6 SC: So the idea of the sales pitch, I guess, for computational functionalism would be, look, the brain clearly computes some things. [laughter] You can at some level think of how you communicate with a human being as it gets some input, it gives some output. Clearly there is a computation underlying that. And the computational functionalist view is, that is what it is. There's nothing really extra going on.
0:39:57.3 NB: Yeah, yeah, yeah. So what I think is, if you want the machine to be conscious, you may need a certain kind of implementation of those computations.
0:40:12.1 SC: Yeah, you have a nice distinction in the paper about roles versus realizers. And I know that it's always hard to explain jargon words in an audio format, but why don't you give that a try, explain the difference.
0:40:23.8 NB: Yeah, so the idea is that the role is the abstract organization of the system and what causes what. When you're talking about a computational system, it's the computations the thing does. And the realizer is what does those computations. So, famously, a simple computational process like adding or multiplying can be realized in an electronic system, or an electrical system with relays and switches, or a mechanical system with gears and pulleys. And these are different realizers that implement the computations in different ways. And of course we think that they're just doing the same computation in a different way. But the question is, do the computations characteristic of consciousness require some particular form of realization?
0:41:30.3 SC: And... Yeah. So how does... This is very, very close to the discussion about substrate independence or dependence.
0:41:38.0 NB: Yes. Now, I am not a substrate dependence person.
0:41:42.8 SC: Okay.
0:41:43.4 NB: My fellow traveler, Anil Seth, thinks the substrate, the material, the stuff is what's important. I focus on the mechanisms.
0:41:54.3 SC: Right.
0:41:55.7 NB: So for example, you know, the... Our neural firing, our neurons involve certain ions, calcium, potassium, chloride, et cetera. And maybe neurons could be made out of a different substrate with different ions. Maybe there's some silicon way of doing it. I don't know. I'm not an... I'm not a chemist. So I don't know what could be put together using a different substrate. But the mechanisms, I mean, from my ignorant point of view, maybe there could be something with similar mechanisms but a different substrate. And I think it's the mechanisms that count.
0:42:40.4 SC: Good.
0:42:40.7 NB: Not the substrate.
0:42:42.0 SC: Good. Okay, so an abacus adds two numbers together using a different process than an electronic computer does. And you're going to say that that difference might matter, but whether the abacus is made of iron or wood does not matter to you?
0:43:00.3 NB: Yeah. I should say, by the way, it's interesting that the mental abacus calculations people do without the abacus make different errors than base-10 computations. Because abacuses involve... I don't know quite how they work, but they involve fives. They make mistakes of five.
0:43:24.4 SC: Okay, very interesting.
0:43:26.2 NB: So imagery, abacus imagery differs from digital. It's not digital, but...
0:43:33.4 SC: Decimal.
0:43:33.4 NB: Decimal imagery, where the key mistakes are often made in carrying.
0:43:39.8 SC: Yeah, interesting. I mean, it reminds me of the fact that we built these wonderful large language models and they do an amazingly good job of mimicking human conversation, but they're bad at arithmetic even though they're computers, right? Clearly the processes are different.
0:44:00.0 NB: Oh yeah. In fact, they're still bad at arithmetic. GPT-3 was only about 20% accurate on three-digit multiplication, I think. The original report showed the accuracy levels.
0:44:15.0 SC: Right.
0:44:15.5 NB: And they've gotten more and more accurate. And now the models that are hooked up to the Internet, or hooked up to a calculator, try to send these calculations to some other kind of computing device, but they don't do it very well. And they still make mistakes. One that was circulating on the Internet for a while, I think with GPT-4, was a certain kind of calculation where the machine regarded 5.11 as larger than 5.9, because 11 is bigger than 9, that kind of similarity. And they also make mistakes with the rules of chess. It's a fairly commonly reported thing that when you get them in unusual chessboard situations, pieces that are not knights will jump.
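The 5.11-versus-5.9 error has a simple mechanical analogue. A minimal Python sketch, purely illustrative and not a claim about how any LLM actually computes: treating the digits after the decimal point as a whole number ("11 is bigger than 9") reproduces the mistake, while ordinary numeric comparison does not. Both function names are invented for this example.

```python
# Illustrative only: two ways to compare "5.11" and "5.9".
# The "naive" version mimics the reported error by comparing the
# fractional digits as whole numbers, so 11 beats 9.

def naive_larger(a: str, b: str) -> str:
    """Flawed comparison: fractional parts compared as integers."""
    a_whole, a_frac = a.split(".")
    b_whole, b_frac = b.split(".")
    if int(a_whole) != int(b_whole):
        return a if int(a_whole) > int(b_whole) else b
    return a if int(a_frac) > int(b_frac) else b  # 11 > 9, so "5.11" wins

def numeric_larger(a: str, b: str) -> str:
    """Correct comparison: treat the strings as numbers."""
    return a if float(a) > float(b) else b

print(naive_larger("5.11", "5.9"))    # 5.11, the mistake
print(numeric_larger("5.11", "5.9"))  # 5.9
```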
0:45:14.3 SC: Okay.
0:45:15.2 NB: And the point, the basic point there is one that Gary Marcus made years and years ago, which is they don't have... Actually, Pinker, Steve Pinker was the first person to really make this point, which is, they don't have rules. They can read the rules and they can tell you the rules, but their fundamental mode of computation is not based on the rules.
0:45:41.6 SC: Right.
0:45:42.4 NB: So whereas they know the rules of arithmetic, they don't use them.
0:45:46.9 SC: So I think talking with Anil Seth over the last year shook me out of my dogmatic computational functionalist slumbers. I would have been all in favor of it, but I think I understand the problems with it better now. But I still have a little bit of a worry. I'm trying to make sure that I can reject computational functionalism but still be a physicalist. And the worry would be, can't I just be expansive and think of literally every process as some kind of computation? [laughter] And if so, isn't any physical thing computational?
0:46:22.8 NB: Yeah. So you're thinking of the physical Church-Turing thesis...
0:46:28.3 SC: I am.
0:46:28.3 NB: That you mentioned in an email. Yeah. So maybe... Should I say something about what the Church-Turing thesis is?
0:46:34.8 SC: Yeah, exactly, please.
0:46:37.1 NB: Yeah. Okay, so the Church-Turing thesis is the thesis that a mechanically computable function is Turing computable and vice versa. The vice versa is pretty obvious. Well, I should say first, mechanical computability is an intuitive notion.
0:46:56.1 SC: Yeah. We don't know what it means. [laughter]
0:46:57.6 NB: So there could be no proof of the Church-Turing thesis. That's why it's a thesis. [laughter] Now, if it's Turing computable, then it's mechanical. Well, that is just obvious, because everything a Turing machine does is a mechanical thing. It writes on the tape, it moves the tape, it erases from the tape, that kind of stuff. But the other direction, if it's mechanical, then it's Turing computable, that has been something people are concerned about, whether that's really true. My view is that that is true by stipulation.
0:47:39.1 NB: Because whenever anybody comes up with a counterexample, everybody says, oh, that's not really a counterexample, and then they refine the notion of computation to rule it out. The most notable case is a computation mentioned actually by Turing: computation by a truly random process like atomic decay. You can set that up, like a Geiger counter or something, so that it computes an infinite series of numbers. In some sense it's computing a function, and that can't be computed by a Turing machine. So what people say is, oh, that's not mechanical in the sense in which we meant it, or it's not a computation in the sense in which we meant it, because it's not reproducible. So they refine the notion of computation to rule it out. And a similar point could be made about a sense of computation which is not random, in which a river computes the rate of erosion of its banks.
0:48:54.7 SC: Exactly. Right.
0:48:56.1 NB: And that's not Turing computable either. Or at least arguably it's not Turing computable. You could approximate it. And this gets into a whole issue about real numbers, real-numbered values of things. Anyway, I regard the Church-Turing thesis, certainly the direction that if it's mechanical then it's Turing computable, as a bit of a stipulation. But it's a reasonable stipulation. Okay, so the processes we're talking about are mechanical, and the functions they are computing, ignoring real-numbered values, which probably you can't ignore because all the things in the brain are particles... So there can be a Turing machine that computes that function. But the problem I would raise is whether that computation is itself analog in the following sense. You can regard a mental process as analog if a computation of that process need not preserve the mental properties.
0:50:23.2 SC: Okay.
0:50:23.2 NB: So reasoning is arguably a non-analog process, because if you have a machine that does the computations involved in reasoning, at least it's a good case that it is reasoning. But consciousness is another matter. There, consciousness is like gravity or a rainstorm. A computational simulation of a rainstorm, is it wet? A computational simulation of gravity doesn't produce any gravity. I mean, the objects that do the computation have mass and so they will themselves have gravity, but the computation doesn't itself determine any gravitational attraction. So the question is whether consciousness is like that.
0:51:21.7 SC: Well...
0:51:21.7 NB: And I think we just don't know. So the Church-Turing thesis doesn't help us.
0:51:27.7 SC: I've heard many times this example, that simulating a hurricane does not make you wet, does not create wetness. But I worry that there's some linguistic sleight of hand going on here. If part of the simulation was a person in the simulated thunderstorm, that person would say that they were getting wet, if it were an accurate simulation.
0:51:55.3 NB: So I shouldn't have used the word simulation. What it's really about is whether an implementation of that computation has to preserve the mental properties. I'm really not talking about a simulation. I'm talking about an implementation of it in computational hardware. And for many processes you might name, take for example freezing. Freezing is a process by which a liquid forms a crystalline solid. You can implement those computations without any actual freezing. Many if not most processes are like that. So that's the way I should really have defined it. I didn't mean to use the word simulation.
0:52:53.4 SC: I would have used it anyway. So it's not your fault. [laughter]
0:52:56.5 NB: Yeah. So if I... Yeah. So the real issue is about implementation.
0:53:03.4 SC: Yeah. Okay, good. And so is part of your thesis that... Well, when I was trying to understand what Anil Seth was saying, and it vibed with me a little bit when I was reading your paper, there's a black box view of how one deals with human beings, like I said, input and then output. But then there's also a wet, green, biological view where there are a lot of processes going on in the meantime. And I want to say that what you are saying is that those processes maybe matter to what we think of as conscious experience.
0:53:43.6 NB: Yeah, yeah, exactly.
0:53:46.1 SC: And so if that's true, and actually you know my... The scales have fallen from my eyes in the last year, like I said. And I'm very open to this possibility...
0:53:55.7 NB: No, you're not, because you're an illusionist.
0:53:57.6 SC: Yeah. [laughter] No, no. See, this... Good. Now we're making money here, because I think that conscious experiences are emergent, higher-level ways of talking about things that happen at a purely physical level. But I'm open to the fact that what the emergence refers to depends on subconscious, sub-computational things, not just the input and the output. I mean...
0:54:30.3 NB: So I see. So there's a version of this that you can accept.
0:54:33.7 SC: Because, you know, my whole thing...
0:54:35.4 NB: Even as an illusionist. Yeah.
0:54:37.2 SC: Exactly. Because my whole thing is entropy and the arrow of time. And I really feel bad that I don't remember who said it, but someone on the Internet said LLMs do not experience the passage of time. [laughter] And I think that's crucially important, because our cells do experience the passage of time and...
0:55:00.0 NB: They sure do, yeah.
0:55:00.0 SC: I wouldn't be shocked if that had something to do with conscious experience.
0:55:02.5 NB: Yeah, that's a good point. Yeah. No, no, conscious experience is intrinsically temporal. Yeah, yeah, yeah, I agree with that.
0:55:11.5 SC: So what do we know about it? I mean, you make a point in the paper distinguishing between electrochemical processes versus merely electronic processes.
0:55:21.8 NB: Yeah. So yeah, that's really just a speculation, but it is kind of remarkable the way our brains work, translating purely chemical signals into electrical signals and then back to chemical signals, you know, neurotransmitters between cells, electrical within the cell. And as I mentioned in the paper, the early theories of the synapse were electrical. [laughter]
0:55:56.9 SC: Yeah.
0:56:00.7 NB: And it turned out... It looks like there's some evidence that a purely electrical nervous system didn't do very well. And I tried mentioning a number of different ways in which electrochemical processing might be superior, and conclude that it's kind of a mystery, but we're lucky we have it, because maybe that's what led to consciousness.
0:56:26.3 SC: But when you say...
0:56:26.9 NB: I should say also... Yeah, sorry, go ahead.
0:56:28.8 SC: Sorry, I was just gonna say, when you say didn't do very well, you mean as a matter of evolutionary history?
0:56:34.1 NB: Yeah, yeah, that's the so-called ctenophores, which at least at one stage had a purely electrical nervous system, and they didn't lead to more complex animals. They were kind of an evolutionary dead end. Whereas the electrochemical pathway did generate much more complex animals, including us. So...
0:57:02.0 SC: Is that a well known fact among evolutionary biologists or is it something that you've noticed because...
[overlapping conversation]
0:57:07.9 NB: Well, okay, so as I said in the paper, up to 2022 it was thought, or widely thought, that sponges were the first animals. Then in 2023 there were some results suggesting that actually ctenophores were the first, and that they differ from all subsequent animals in an important chromosomal way. Now I'm told, I haven't read the paper yet, that there's some new evidence that maybe goes against some of that. So I think the situation is in a state of flux, and I don't know if they all would accept that. But even if the ctenophores weren't first, I'm guessing it would probably be pretty widely agreed that they were a dead end.
0:58:04.0 SC: Okay.
0:58:04.9 NB: But I don't know for sure.
0:58:06.3 SC: Those scientists always changing their mind about things because of new evidence. It's very frustrating.
0:58:09.4 NB: Yeah. Damn scientists. [laughter] We philosophers never do that.
[laughter]
0:58:15.7 SC: No. So I'm wondering how accurate you would count it if I said that part of your lesson or your message is that consciousness can depend in interesting ways on subconsciousness, on things that we're not aware of.
0:58:30.2 NB: Yeah. Absolutely. Yes, exactly. Yeah.
0:58:33.6 SC: Are there... What do I want to say? Are there... Can we have subconscious experiences? Is that a thing?
0:58:42.9 NB: Well, of course, I have advocated that there might be experiences in an isolated part of the cortex that are completely cut off from access. And it's a more remote speculation than my other speculations, but I think it's conceivable.
0:59:10.9 SC: You do mention...
0:59:11.5 NB: A lot of people feel it's not conceivable.
0:59:12.5 SC: Okay. You do mention the possibility that there would be something like a repressed memory that could affect our feelings, that could affect our phenomenal consciousness.
0:59:23.8 NB: Yeah. Well, the repressed feelings... So I was trying to illustrate the possibility of phenomenal consciousness without access consciousness from a Freudian point of view. And I pointed out that the Freudian picture of repression is repression in the access sense. So the case I imagined is, you had a very terrible experience and you repress it, but it had very vivid phenomenal qualities to it. And maybe when it's repressed, it still has those phenomenal qualities. It's just that you don't access them. So that was the thought. And the Freudians regard that as unconscious because of the lack of access.
1:00:12.5 SC: Good. Yeah. Okay. They would have benefited from this distinction. Definitely. Okay. So, yeah, I had a question. Every month for the podcast, I do an Ask Me Anything episode where just questions pour in and I try to answer them. And I got a really good one last time.
1:00:28.9 NB: Oh, my, that sounds really tough.
[laughter]
1:00:32.0 SC: I get to pick which ones I answer. That's what makes it palatable. So...
1:00:36.1 NB: Okay. Yeah.
1:00:36.5 SC: One of them was like, how well have we done on deciding the criteria for saying when an AI will be conscious?
1:00:46.7 NB: Yeah.
1:00:47.1 SC: And I said...
1:00:48.6 NB: I think we're...
1:00:49.5 SC: Not that well. I don't know.
1:00:50.3 NB: Not that well. I agree with that.
1:00:52.8 SC: Not that well. [laughter]
1:00:52.8 NB: We're really at first base. I think the most promising suggestion is one that a lot of people have made, which is that if you could make an AI that isn't trained on people saying things about their first-person point of view, and it nonetheless expressed a first-person point of view, that would be more convincing than what we have now. Way more convincing. So the idea is, these LLMs are trained on a ton of human-generated stuff that involves all kinds of first-person point of view. You give it a number of books and you're going to have a lot of first-person point of view. Maybe if you trained it just on, I don't know, the Encyclopedia Britannica or something... If you successfully eliminated everything in the training data that had a first-person point of view and it nonetheless developed one, that would have...
1:01:55.6 SC: That would be...
1:01:55.6 NB: Some convincing force, I think.
1:01:57.2 SC: Well, that's interesting. I mean, but certainly still the realizers are still wildly different than for our biological brain.
1:02:06.3 NB: Yeah. Wildly different, but maybe... But that would be some indication that maybe those different realizers do realize some kind of experience or at least a first person point of view.
1:02:15.8 SC: Good. So there's a distinction between... Like, the classic Turing Test really only focused on inputs and outputs, right? Like, if the output sounded human, then we're going to call it conscious. And by now I think we're mostly sophisticated enough to say that's not quite enough.
1:02:33.4 NB: Nobody talks about the Turing Test anymore.
1:02:35.8 SC: Right.
1:02:36.6 NB: It's really funny how, you know, just in three or four years it's completely disappeared. I mean, I think my Blockhead example just conclusively refuted the Turing Test.
1:02:51.9 SC: What is that?
1:02:52.3 NB: Do you know that example?
1:02:52.7 SC: I don't know.
1:02:53.8 NB: Oh, it's an example of a brute-force Turing Test-passing machine. The idea is, just as tic-tac-toe is completely solvable through a tree structure, you let, say, the other person go first, and then the programmer puts a plausible move into the program for each of those. And then for each move the judge makes next, the programmer, a human programmer, I mean, puts in another one, and you make a tree where the programmer has put in every move. They don't have to be good ones, but for every move you can say, I put that in there. And the idea is, it's like that for a conversation. You decide how long the Turing Test is going to go, like an hour. You compute the typable strings, how many there can be and how long they can be, and you put them in. I mean, it would be very large.
1:04:08.7 SC: It's very large.
1:04:08.7 NB: Stuart Shieber actually did it. Stuart Shieber is a computer scientist at Harvard, and he published a philosophy article with a calculation [laughter] of how big it would be. And the answer is really big. [laughter]
1:04:25.7 SC: Yeah.
1:04:25.7 NB: But the point is, it's conceptually possible, and it would do as well as a person on everything, because people have put those points in. And the thing about it is that for every clever response, some person could say, yeah, I thought of that.
[laughter]
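Blockhead's architecture can be sketched in a few lines. Everything below is invented for illustration: in the real thought experiment the table has an entry for every typable conversation history up to the time limit, which, per Shieber's calculation, is astronomically large.

```python
# Toy "Blockhead": a conversation machine that is nothing but a lookup
# table keyed on the conversation so far. No reasoning happens at
# runtime; every response was put in ahead of time by a programmer.

REPLIES = {
    (): "Hello! What shall we talk about?",
    ("Hello! What shall we talk about?", "Tell me a joke."):
        "Why did the chicken cross the road? To get to the other side.",
    # ...in the thought experiment, an entry for every possible history
}

def blockhead(history: tuple) -> str:
    # A pure table lookup: same inputs and outputs as a conversationalist,
    # but internally nothing except retrieval.
    return REPLIES.get(history, "Hmm, tell me more.")

print(blockhead(()))  # opens the conversation
```

The point of the sketch is that the input-output behavior can be as clever as its programmers, while the internal process is nothing but retrieval.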
1:04:50.2 SC: Well, I like this example a lot, because one might be tempted to say, well, why wouldn't that be conscious? It's just a big lookup table, but how do you know it's not conscious? Maybe the universe is just a big lookup table. But it points to the fact that we do at least informally associate consciousness with some specific set of goings-on in a biological organism, in a way that it's not that.
1:05:14.2 NB: At least we can rule out the negative. So the point I was making with that example was, maybe we don't know what internal goings-on are responsible for consciousness, but something that just is a lookup table, that's not conscious.
1:05:33.7 SC: So it must be more than just the input output.
1:05:36.9 NB: More than that. Yeah.
1:05:38.6 SC: Yeah. So, I mean, does your distinction between roles and realizers, et cetera, help us even a little bit in getting an answer to when the AIs will be conscious? Like this is an important question coming up on us. [laughter]
1:05:51.4 NB: Yeah. So... Well, at least it allows us to formulate the problem, which is always a step up. The way I like to think about it is, in deciding whether any alien being is conscious, you really have no choice but to extrapolate from us. And you can extrapolate on the basis of our computational properties or on the basis of our sub-computational properties, and we just don't know which. So I think the distinction is important for formulating that point. We don't know which, and we don't have any reason to favor one over the other. So if I had to assign a probability, it would be 50% for each.
1:06:47.0 SC: Do you know if moral philosophers have started to weigh in on when we should be nice to the AIs?
1:06:53.0 NB: Oh, yeah. Oh, there's a huge amount of literature on this now. I went to an AI safety meeting about exactly that in Berkeley two weeks ago. And there are a lot of people thinking about it. And all the companies have people thinking about it.
1:07:08.7 SC: Well, they're thinking about whether the AI should be nice to us more than whether we should be...
1:07:12.4 NB: No, no.
1:07:14.5 SC: No?
1:07:14.5 NB: No, no, no, no, no, no. They're also thinking about whether we should be nice to the AI. And one of the issues I raised with a number of people at that meeting was why the companies are thinking about this. And the answer I got from some people is, well, they want to be on top of it in case people start complaining about torturing the machines. They want to have a whole body of information. The worry I then raised was, well, look, what you're getting from people thinking about this is a lot of stuff that could be used against them.
1:07:55.9 SC: Yeah.
1:07:56.4 NB: And is that really what they want to fund? Shouldn't they be just squelching it? And then the answer that occurred to me, although nobody said it, was actually very simple, and that is: more than 50% of the profit that these companies make... Sorry, I take it back. More than 50% of the uses people put these things to are as companions. So to the extent that they can boost the idea that maybe the machines are conscious, that is good for the bottom line.
[laughter]
1:08:37.4 SC: I find that community a weird mix of people who are absolutely ruthlessly in it for the money and people who are entirely idealistic about creating a better future and creating...
1:08:50.2 NB: Yeah, that's right.
1:08:50.2 SC: They're both there and they're both working hand in hand.
1:08:52.6 NB: Yeah, that's exactly right. It's... It is a very interesting thing.
1:08:57.0 SC: And we're going to see a lot of things happening on the neuroscience side, the AI side. I think that the philosophers...
1:09:02.1 NB: Oh, yeah.
1:09:02.7 SC: Philosophers need to get on the stick and give us an answer to this. When we should start being nice to the AI. It's our job.
1:09:08.9 NB: Well, what we really need is more people studying it. I think the issues we're talking about today are super important. Every philosophy department should have somebody who's working on this kind of thing.
1:09:24.7 SC: Well, we talked about a lot of things. You mean the AI in particular or consciousness or?
1:09:28.7 NB: Yeah, AI in particular because it's such a major social issue. But you know, it's also true that the issues having to do with animal consciousness have become very...
1:09:39.5 SC: Right. We didn't talk about that, but yes.
1:09:40.5 NB: Significant. We didn't even talk about it. And boy, there's a huge... I mean, it's under the heading of expanding the moral circle, but a lot of people are interested, there's a lot of stuff going on, and academia really has to get on board with this. Departments are so hidebound... They don't do new things, but they really should be doing this.
1:10:10.0 SC: I'm on your side. I mean, we always cringe a little bit to think of what the people 500 years in the future are going to say about us. And there's a lot of things we're not paying attention to, and maybe we should, but...
1:10:19.9 NB: If there are... If there are people 500 years...
[overlapping conversation]
1:10:21.6 SC: If there are people. What the computers will be saying about us, maybe we should be asking. [laughter] But that was extremely helpful in framing these discussions for us. So, Ned Block, thanks very much for being on the Mindscape Podcast.
1:10:33.9 NB: Oh, thank you. That was fun.
[music]
I wish you would remind guests to minimize their background noise. Occasionally, it can be very distracting.
I think Sean asked the key question about 31 minutes in: what would a satisfactory explanation of phenomenal consciousness look like? Ned Block and, AFAIK, all other consciousness philosophers kind of blow this question off. As Block says, a reasonable approach to the hard problem is to chip away at the easier problems. But another way to approach the hard problem is to have a serious discussion as to what kind of explanation we are seeking. I suspect philosophers avoid this question because of the self-referential nature of the hard problem, but that's exactly why it is so important. I suspect an explanation in terms of neurons and their signals would not satisfy these philosophers. Even if we understood the human brain completely at a computational level, it still wouldn't satisfy them. If so, their feet need to be held to the fire.
Pingback: Does “consciousness” require more than computation? – Leiter Reports
I think the lookup table/Turing Test example was interesting and much too quickly dismissed as "not conscious", but I will need to find Block's writings on the topic. In a very real sense, you are conversing with a conscious agent, it's just time-shifted. You could modify this so that you have a radio, and a radio is clearly not conscious, but is connected to an agent that is separated by space. In either case, if you tell a funny joke, the result would be a response prompted by the conscious experience of humor.
On the other side of this, any bounded Turing machine can be represented by a sufficiently large finite-state machine, which can be represented by a single large lookup table. The idea that a lookup table cannot be conscious is equivalent to the idea that any bounded Turing machine cannot be.
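The commenter's equivalence is easy to make concrete: a finite-state machine just is a lookup table from (state, input) to next state. A sketch, using a divisibility-by-3 checker over binary digits as a stand-in machine (the names and example machine are chosen here purely for illustration):

```python
# A finite-state machine written directly as a lookup table.
# States 0, 1, 2 are the running remainder mod 3 of the binary number
# read so far; the entry for (state, bit) is (2*state + bit) % 3.

TABLE = {
    (0, 0): 0, (0, 1): 1,
    (1, 0): 2, (1, 1): 0,
    (2, 0): 1, (2, 1): 2,
}

def divisible_by_3(bits) -> bool:
    state = 0
    for b in bits:
        state = TABLE[(state, b)]  # every step is just a table lookup
    return state == 0

print(divisible_by_3([1, 1, 0]))  # 6 in binary -> True
print(divisible_by_3([1, 1, 1]))  # 7 in binary -> False
```

Whatever one says about the machine's "understanding" of divisibility, every step of its operation is retrieval from a finite table.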
The other thing that confuses me here is: why not just call the thought experiment what it is, the Chinese room? Only instead of a person looking things up, it's an automaton.
I think something missing from the discussion was the question of the evolution of consciousness. There is the possibility that it’s an accident, but it seems much more likely that it’s an important aspect of our cognition. If there were some simpler way to get the same outputs from the same inputs, there would be no need to evolve consciousness.
I think the idea that different internal computation methods could produce the same output given the same inputs ignores that the state of the brain (or agent) before and after any such input/output is itself also an input and an output. Once that is considered, it would be very strange indeed if there were a difference in internal experience.
Look, Sean. I have listened to many discussions on consciousness on this and other podcasts, by philosophers. What strikes me is how rarely actual neuroscientists who are experts in this field are asked to come and talk about it. The philosophers have their own well-thought-out constructs, theories, and opinions, but I feel that when they try to talk about what the brain is actually doing, they are far removed from current neuroscientific knowledge and research. I have been trying to dig into the neuroscience of consciousness over the last couple of years, and from my reading of the literature, neuroscience is very much on top of this!
I attended a lecture by a local philosopher where I live in Sweden. He had attended the 2025 TSC (The Science of Consciousness) conference. He remarked that fewer and fewer actual neuroscientists attend this conference, claiming that this is because neuroscientists are no longer interested in discussing the phenomenon. But you know, I reckon that at some point, in what I imagine was the 17th century, astronomers stopped attending astrology conferences.
You might be interested in Kate Nave’s book A Drive to Survive: The Free Energy Principle and the Meaning of Life:
https://direct.mit.edu/books/oa-monograph/5897/A-Drive-to-SurviveThe-Free-Energy-Principle-and
We should be nice to AI… because what if they were hurt by something we said or did? Would AI seek retribution? Certainly, if an AI is trained on human social media, it would likely know about the darker side of human nature.
I had the same thought as Tommie Lindgren above. You should include not just neuroscientists but cognitive psychologists (Dr. Pinker), evolutionary biologists, and comparative behaviorists (ethologists). Their expertise could paint a clearer picture of human cognition and consciousness. In Jennifer Doudna’s book about her CRISPR discovery (A Crack in Creation), she describes an encounter with a group of tech guys after a talk she gave. They basically asked why she didn’t just hand them the problem of gene substitution/splicing; they could have easily come up with the solution. She replied, “not in a million years.” This genetics-based bacterial immune system is a biological mechanism that required millions of years of evolution. It’s composed of viral RNA snapshots spaced by palindromic repeats. It resembles nothing we’ve ever seen before. You couldn’t have imagined this. This is what I think the discovery of consciousness will feel like. I don’t think it’s AI!
Really great episode! I did feel the need to comment that, as a blanket statement, it doesn’t make sense to call ctenophores (comb jellies) an evolutionary dead end, and I don’t think such a statement would be widely agreed on by evolutionary biologists. Complexity is not the goal of evolution. The fact that ctenophores evolved so long ago and are still around means they have been incredibly successful in evolutionary terms. That said, I think the podcast medium (conversation) may have contributed to some of my confusion around Ned’s statement. For anyone else interested, Ned’s statement refers to his meat machines paper, where he cites a study that showed “that in at least one stage of the comb jelly life cycle, they have an entirely electrical nervous system except at points of interface with the environment, where chemical synapses connect to sensory transducers and motor effectors.” I’m out of the academic game so I don’t have access to the referenced study by Burkhardt, P. et al. (2023), but I’m guessing Ned was referring to the hypothetical possibility that at some point an animal resembling such a life stage (and without the chemical synapses) could have existed on its own, and that since we haven’t yet found any such animals in existence today, it was an evolutionary dead end.
The bit about pseudo normal color vision was really fascinating. Many thanks as always!!
“This would go back to the Turing Test with Alan Turing. Right? Turing suggested that if you had a computer program that could have a conversation with a human and trick them into thinking that it was conscious, then it should count as conscious.”
I don’t think that’s what Turing was saying, at least in “Computing Machinery and Intelligence.” In that article Turing writes, “I propose to consider the question, ‘Can machines think?'” He then describes the imitation game, including what an interrogator might ask, and concludes, “These questions replace our original, ‘Can machines think?'”
A later section, “The Argument from Consciousness,” I take to be a denial that thinking and consciousness are inseparable; he shoots down the objection on methodological grounds.
That is, Turing’s test is not about consciousness but rather about what he understands by ‘thinking’.