339 | Ned Block on Whether Consciousness Requires Biology

It's become increasingly clear that the Turing Test -- determining whether human interlocutors can tell whether a conversation is being carried out by a human or a machine -- is not a good way to think about consciousness. Modern LLMs can mimic human conversation with extraordinary verisimilitude, but most people would not judge them to be conscious. What would it take? Is it even possible for a computer program to achieve consciousness, or must consciousness be fundamentally "meat-based"? Philosopher Ned Block has long argued that consciousness involves something more than simply the "functional" aspects of inputs and outputs, asking, "Can Only Meat Machines Be Conscious?"



Ned Block received his Ph.D. in philosophy from Harvard University. He is currently Silver Professor in the Department of Philosophy at New York University, with secondary appointments in Psychology and Neural Science. He is also co-director of the Center for Mind, Brain, and Consciousness. He is Past President of the Society for Philosophy and Psychology and was elected a Fellow of the American Academy of Arts & Sciences.

10 thoughts on “339 | Ned Block on Whether Consciousness Requires Biology”

  1. I wish you would remind guests to minimize their background noise. Occasionally, it can be very distracting.

  2. I think Sean asked the key question about 31 minutes in: what would a satisfactory explanation of phenomenal consciousness look like? Ned Block and, AFAIK, all other consciousness philosophers kind of blow this question off. As Block says, a reasonable approach to the hard problem is to chip away at the easier problems. But another way to approach the hard problem is to have a serious discussion about what kind of explanation we are seeking. I suspect philosophers avoid this question because of the self-referential nature of the hard problem, but that's exactly why it is so important. I suspect an explanation in terms of neurons and their signals would not satisfy these philosophers. Even if we understood the human brain completely at a computational level, it still wouldn't satisfy them. If so, their feet need to be held to the fire.

  3. Pingback: Does “consciousness” require more than computation? – Leiter Reports

  4. I think the lookup-table/Turing-test example was interesting and much too quickly dismissed as "not conscious", but I will need to find Block's writings on the topic. In a very real sense, you are conversing with a conscious agent; it's just time-shifted. You could modify this so that you have a radio, and a radio is clearly not conscious, but it is connected to an agent separated by space. In either case, if you tell a funny joke, the result would be a response prompted by the conscious experience of humor.

    On the other side of this, any bounded Turing machine can be represented by a sufficiently large finite state machine, which can be represented by a single large lookup table. The idea that a lookup table cannot be conscious is equivalent to the idea that any bounded Turing machine cannot be.
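    The reduction described above can be sketched directly: a machine with finitely many internal states and a finite input alphabet is fully specified by a table mapping (state, input) to (next state, output). This is a toy illustration with made-up states and canned responses, not anything from Block's writing:

    ```python
    # Hypothetical two-state "conversational" finite state machine,
    # expressed entirely as a lookup table.
    # Keys: (current_state, input); values: (next_state, response).
    TABLE = {
        ("neutral", "joke"):   ("amused",  "Ha! Good one."),
        ("neutral", "insult"): ("annoyed", "That's unkind."),
        ("amused",  "joke"):   ("amused",  "You're on a roll."),
        ("amused",  "insult"): ("neutral", "Way to ruin the mood."),
        ("annoyed", "joke"):   ("neutral", "Fine, that helped."),
        ("annoyed", "insult"): ("annoyed", "Please stop."),
    }

    def respond(state, inputs):
        """Run the table-driven machine over a sequence of inputs."""
        outputs = []
        for symbol in inputs:
            state, reply = TABLE[(state, symbol)]
            outputs.append(reply)
        return state, outputs

    # The machine's history matters: the same input ("insult") gets
    # different responses depending on the internal state.
    state, replies = respond("neutral", ["joke", "insult", "insult"])
    ```

    Note that the commenter's point about prior state carries over here: the table's keys include the internal state, so the "same input" only produces the same output if the machine's history is also the same.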

    The other thing that confuses me here is why not just call the thought experiment what it is: the Chinese room. Only instead of a person looking things up, it's an automaton.

    I think something missing from the discussion was the question of the evolution of consciousness. There is the possibility that it’s an accident, but it seems much more likely that it’s an important aspect of our cognition. If there were some simpler way to get the same outputs from the same inputs, there would be no need to evolve consciousness.

    I think the idea that different internal computation methods could produce the same output given the same inputs ignores that the state of the brain (or agent) before and after any such input/output is also an input and output. Once that is considered, it would be very strange indeed if there were a difference in internal experience.

  5. Look, Sean. I have listened to many discussions on consciousness by philosophers, on this and other podcasts. What I am struck by is how rarely actual neuroscientists who are experts in this field are asked to come and talk about this. The philosophers have their own well-thought-out constructs, theories, and opinions, but I feel like when they try to talk about what the brain is actually doing, they are really far removed from current neuroscientific knowledge and research. I have been trying to dig into the neuroscience of consciousness over the last couple of years, and from my reading of the literature, neuroscience is very much on top of this!

    I attended a lecture by a local philosopher where I live in Sweden. He had attended the 2025 TSC (The Science of Consciousness) conference. He made the remark that fewer and fewer actual neuroscientists attend this conference, claiming that this was because neuroscientists are no longer interested in discussing this phenomenon. But you know, I reckon that at some point, in what I imagine was the 17th century, astronomers stopped attending astrology conferences.

  6. We should be nice to AI… because what if they were hurt by something we said or did? Would AI seek retribution? Certainly, if AI is trained on human social media, they'd likely know about the darker side of human nature.

  7. I had the same thought as Tommie Lindgren above. You should include not just neuroscientists but cognitive psychologists (Dr. Pinker), evolutionary biologists, and comparative behaviorists (ethologists). Their expertise could paint a clearer picture of human cognition and consciousness. In Jennifer Doudna's book about her CRISPR discovery (A Crack in Creation), she describes an encounter with a group of tech guys after a talk she gave. They basically asked why she just didn't give them the problem of gene substitution/splicing. They could've easily come up with the solution. She replied, "not in a million years". This genetics-based bacterial immune system is a biological mechanism that required millions of years of evolution. It's composed of viral RNA snapshots spaced by palindromic repeats. It resembles nothing we've ever seen before. You couldn't have imagined this. This is what I think the discovery of consciousness will feel like. I don't think it's AI!

  8. Really great episode! I did feel the need to comment that, as a blanket statement, it doesn't make sense to call ctenophores (comb jellies) an evolutionary dead end, and I don't think such a statement would be widely agreed on by evolutionary biologists. Complexity is not the goal of evolution. The fact that ctenophores evolved so long ago and are still around means they have been incredibly successful in evolutionary terms. That said, I think the podcast medium (conversation) may have contributed to some of my confusion around Ned's statement. For anyone else interested, Ned's statement refers to his meat machines paper, where he references a study that showed "that in at least one stage of the comb jelly life cycle, they have an entirely electrical nervous system except at points of interface with the environment, where chemical synapses connect to sensory transducers and motor effectors." I'm out of the academic game, so I don't have access to the referenced study by Burkhardt, P. et al. (2023), but I'm guessing Ned was referring to a hypothetical possibility: that at some point an animal resembling such a life stage (and without the chemical synapses) could have existed on its own, and that since we haven't yet found any such animals in existence today, it was an evolutionary dead end.

    The bit about pseudo normal color vision was really fascinating. Many thanks as always!!

  9. “This would go back to the Turing Test with Alan Turing. Right? Turing suggested that if you had a computer program that could have a conversation with a human and trick them into thinking that it was conscious, then it should count as conscious.”

    I don’t think that’s what Turing was saying, at least in “Computing Machinery and Intelligence.” In that article Turing writes, “I propose to consider the question, ‘Can machines think?'” He then describes the imitation game, including what an interrogator might ask, and concludes, “These questions replace our original, ‘Can machines think?’

    A later section, "The Argument from Consciousness," I take to be a denial that thinking and consciousness are inseparable, and he shoots down the objection on methodological grounds.

    That is, Turing’s test is not about consciousness but rather about what he understands by ‘thinking’.
