230 | Raphaël Millière on How Artificial Intelligence Thinks

Welcome to another episode of Sean Carroll's Mindscape. Today, we're joined by Raphaël Millière, a philosopher and cognitive scientist at Columbia University. We'll be exploring the fascinating topic of how artificial intelligence thinks and processes information. As AI becomes increasingly prevalent in our daily lives, it's important to understand the mechanisms behind its decision-making processes. What are the algorithms and models that underpin AI, and how do they differ from human thought processes? How do machines learn from data, and what are the limitations of this learning? These are just some of the questions we'll be exploring in this episode. Raphaël will be sharing insights from his work in cognitive science, and discussing the latest developments in this rapidly evolving field. So join us as we dive into the mind of artificial intelligence and explore how it thinks.

[The above introduction was artificially generated by ChatGPT.]


Support Mindscape on Patreon.

Raphaël Millière received a DPhil in philosophy from the University of Oxford. He is currently a Presidential Scholar in Society and Neuroscience at the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. He also writes and organizes events aimed at a broader audience, including a recent workshop on The Challenge of Compositionality for Artificial Intelligence.

8 thoughts on “230 | Raphaël Millière on How Artificial Intelligence Thinks”

  1. I am always surprised at the ease with which humans (mostly male in these fields) are willing to attribute consciousness to algorithms (AI) but deny it to non-human animals!

  2. Pingback: Sean Carroll's Mindscape Podcast: Raphaël Millière on How Artificial Intelligence Thinks - 3 Quarks Daily

  3. The question of whether an AI machine will ever be able to think is, to me, the most important one to be addressed. It is the hard problem of consciousness. The inner life of humans is a reality that defies explanation. Self-aware consciousness is what leads to understanding the meaning of experience. Computers do not understand the meaning of anything; it is the human minds interpreting the algorithms' findings that give those findings meaning. Computers do not have AHA moments. They are very valuable tools that can vastly expand the capabilities and achievements of human beings, but understanding is the purview of self-aware consciousness.

  4. Maria Fátima Pereira

    A topic that has been much talked about and debated lately.
    Raphaël Millière developed it as someone with deep knowledge of the subject.
    I am now better informed about so many questions to which I had been trying to find answers.
    His prudence and caution in his use of language are also interesting.
    As I consider myself a physicalist, I will focus on the more philosophical questions emerging from this theme.
    Sean Carroll has long accustomed us to his profound, timely, intelligent questions.
    Thank you.

  5. A similar but somewhat deeper question than “can a computer think?” is “can a computer be conscious?” Can it be aware of its own existence, and can it really experience emotions such as grief or fear? The article linked below, ‘Computer Consciousness’ (Donald D. Hoffman, University of California, Irvine), examines these complex questions. The two main differing viewpoints can be categorized under the headings “biological naturalist” and “functionalist.” In a nutshell, biological naturalists claim that special properties of brain biology are critical, and that any complex system lacking biology must also lack consciousness. Functionalists, on the other hand, claim that the critical properties required for consciousness are not fundamentally biological but functional, and that a nonbiological computer could be conscious if it were properly programmed.

    According to Hoffman, it is likely that technology will evolve to the point where computers behave substantially like intelligent, conscious agents. The question of computer consciousness is whether such sophisticated computers really are conscious, or just going through the motions. The answer will be illuminating not just for the nature of computers but also for human nature.

    https://sites.socsci.uci.edu/~ddhoff/HoffmanComputerConsciousness.pdf

  6. One of the best thought experiments about knowledge, learning, and consciousness is the so-called “Knowledge Argument” (aka Mary’s Room). In 1982 the Australian philosopher Frank Jackson came up with a provocative story about Mary, a brilliant neurophysiologist who is an expert in color vision and knows everything ever discovered about its physics and biology, but has never actually seen color. If, one day, she sees color for the first time, does she learn anything new? The answer to that question has profound implications, and not only for Artificial Intelligence. Could it be that there are fundamental limits to what we can know about something we can’t experience first-hand? Would this mean there are certain aspects of the Universe that lie permanently beyond our comprehension? Or will science and philosophy allow us to overcome our minds’ limitations?

    https://www.youtube.com/watch?v=mGYmiQkah4o

  7. The analogy (at about 27:40) between human evolution and the evolution of large language models, intended to show that it’s at least possible for LLMs to have or acquire amazing capabilities that they weren’t designed to have, sounds like hand-waving to me (although M. Millière suggests that there’s more to it). The idea is that both kinds of evolution are simply optimizing a fitness function; but we can see that humans are able to do all sorts of things that couldn’t have been selected for, so why not LLMs too? Unfortunately, it’s clear the human brain is a Swiss Army knife, with universal applicability, whereas it’s equally clear LLMs are not. Nor is it clear that LLMs have bootstraps that might boost them to something higher, as happened in the evolutionary history of the human race. You might as well argue that the optimization function producing better and better running shoes may well result in unexpected bootstrapping to higher capabilities.

    The discussion is interesting, but after 1:20 or so the issues sound like science fiction to me.

    So far I’m afraid of the social consequences of LLMs, but that’s all. They’re a statistical party trick with tremendous dangers for us, none of which are discussed here.

  8. Nicholas Reitter

    I found this a somewhat valuable and in-depth discussion, as I’ve come to expect from Sean’s podcast, but would still unfortunately characterize it as “semi-hype” – i.e., at least partially infected by the amazing amount of hype around contemporary AI. I was inspired, after listening, to check out the image-generation software “DALL-E 2” (it seems one has to pay now; I didn’t), and to have my most substantive interaction to date with the chatbot “GPT-4.”
    My personal sense runs closer to Ted Chiang’s view (mistakenly cited in the podcast as appearing in the _NYTimes_ – it was actually published on 2/9/23 by _The New Yorker_, and is easily searchable online) that AI chatbots are just lossy snapshots of the internet. Millière counters this view simply by insisting that novel features emerge from the chatbots’ behavior – but novelty is not enough. Should a randomly selected sample of internet chatter – however ingeniously influenced to seem to respond to a given query – count as intelligence? If intelligence is the ability to learn, the chatbots’ intelligence should be judged against their ability to learn from their on-going “experience,” and not from just a static set of training exercises used to initialize them (the toy sketch at the end of this comment makes that distinction concrete).
    There seems to me to be a pretty basic category mistake going on here, even before we get to nuances about learning, about which reasonable people might well disagree. Though knowledgeable and informative (e.g., about next-token prediction and multi-layered modeling strategies), Millière doesn’t address the underlying point Chiang is making – which is that the real “creativity” that seems to emerge from AI lies in the *training data* (i.e., the vast human-centric corpus embodied in the contemporary internet), and not so much in the clever algorithms used to simulate human dialog by leveraging that data.
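    To make that concrete, here is a minimal, hypothetical Python sketch (the bigram table and names are invented for illustration; a real LLM’s parameters are vastly larger, but equally frozen at generation time). It predicts each next token greedily from fixed weights, and nothing in the loop ever writes back to them:

        # Toy stand-in for trained parameters: fixed before generation
        # begins, never modified by it (hypothetical example, not any
        # real system's weights).
        WEIGHTS = {
            "the": {"cat": 3, "dog": 1},
            "cat": {"sat": 2, "ran": 1},
            "sat": {"down": 4},
        }

        def next_token(prev):
            """Return the highest-scoring continuation of the last token."""
            options = WEIGHTS.get(prev, {})
            return max(options, key=options.get) if options else None

        # Greedy generation loop: reads WEIGHTS, never updates them.
        tokens = ["the"]
        while (tok := next_token(tokens[-1])) is not None:
            tokens.append(tok)
        print(" ".join(tokens))  # -> "the cat sat down"

    The generation loop only ever reads WEIGHTS; nothing from the exchange is written back into them. That read-only relationship between conversation and parameters is what I mean by a static initialization rather than on-going learning.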

Comments are closed.
