68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense

Artificial intelligence is better than humans at playing chess or Go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of "common sense" -- there are features of the world, from the fact that tables are solid to the prediction that a tree won't walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional "folk physics" understanding of the world, and how we might move forward.

Support Mindscape on Patreon.

Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute's Complexity Explorer project, an online learning resource for complex systems.

9 thoughts on “68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense”

  1. Both Stuart Russell’s “Human Compatible” and Melanie Mitchell’s book to read in the same week, supplemented with a Mindscape treat… I guess there’s a glitch in the simulation… 😉

  2. Aside from maybe anything by Jeff Hawkins, this is my favorite podcast episode on AI. If anything, I’d have used less anthropomorphic language when referencing computers (e.g. “tune” instead of “learn”), but that’s just my pet peeve 😉 Extra points for releasing this one day before Hubert Dreyfus’ birthday 🙂

  3. Really insightful podcast! I like your left-field thinking too, Sean – especially on not considering it remarkable that computers win chess/Go competitions.

    The whole conversation seemed like it deserved to mention the Chinese Room thought experiment! Neural nets and machine learning will never be “thinking”.

  4. Melanie Mitchell mischaracterized convolutional neural networks. What she described, if I understood her correctly, is simply the feedforward calculation in which the inputs are multiplied by the weights. In the early layers of a convolutional neural network, each node receives input from only a small part of the previous layer (which may be the input layer), analogous to a receptive field, and these receptive fields tile the input space. The essential feature of a convolutional network is that the same weights are used for all of these nodes. In learning, the weight changes that would reduce the error are calculated for each of these nodes, then those changes are averaged across all of the nodes, and the average indicated change is made in the single, shared set of weights that all the nodes use.
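    The weight sharing the comment describes can be made concrete with a minimal NumPy sketch (function names and the 1-D setup are my own, for illustration only). Each output position reads a small receptive field of the input, but every position applies the same kernel; correspondingly, the gradient contributions from all positions are accumulated into that single shared weight vector (frameworks typically sum them, which differs from averaging only by a constant factor absorbed into the learning rate):

    ```python
    import numpy as np

    def conv1d_forward(x, w):
        """Slide the SAME kernel w across input x: each output position
        sees one small receptive field, but all positions share w."""
        k = len(w)
        n_out = len(x) - k + 1
        return np.array([x[i:i + k] @ w for i in range(n_out)])

    def conv1d_weight_grad(x, grad_out):
        """Gradient w.r.t. the shared kernel: every receptive field
        contributes, and the contributions accumulate into one vector."""
        k = len(x) - len(grad_out) + 1
        g = np.zeros(k)
        for i, go in enumerate(grad_out):
            g += go * x[i:i + k]  # contribution from position i
        return g

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    w = np.array([1.0, 0.0, -1.0])   # a simple edge-detector-like kernel
    y = conv1d_forward(x, w)         # -> [-2., -2., -2.]
    g = conv1d_weight_grad(x, np.ones_like(y))  # -> [6., 9., 12.]
    ```

    This is the structural point of the comment: a plain feedforward layer would have an independent weight vector per output node, whereas the convolutional layer ties them together, so one update changes the filter everywhere at once.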

  5. It is interesting that even though we don’t understand the subjective experience of human beings, we somehow believe that it may be possible to create machines that have subjective experiences. Scientists who are working hard to advance machine computation are doing important work, and are to be congratulated. But, until far more is learned about human intelligence, it seems unlikely that such machines will be created.
    The experience of Helen Keller is instructive. She lost both her sight and her hearing at the age of 19 months. However, as a child, due to the dedicated efforts of a teacher, she learned that the water she felt on the palm of her hand was represented by the word then scribbled on her palm. That single AHA moment led to a lifetime of subjective experience and intellectual achievement. Would it be possible to create a machine that could do that?

  6. Really insightful discussion, thanks Sean. Pearl’s point about cause and effect is fundamental. The key difference between AI and living brains is that living brains are embodied, and they learn cause and effect via induction from their own interactions with the environment (DO something!). Melanie’s suggestion that some key concepts like object permanence might be innate strikes me as at best half true and at worst false. More likely object permanence is learned inductively via hands-on interactions with objects (though it is likely that the propensity to learn specific concepts is an emergent property of neural wiring). The conclusion is that any human-level AI would need to be embodied, and have so many feedback loops of interactions with its environment that it would to all intents and purposes be a living thing.

  7. Very good podcast! I’ve read Melanie Mitchell’s books and she is very clear and provides very good insight. Her Complexity book allowed me to really understand complexity and some of its applications. She provides a good definition of deep learning and deep neural networks. This is one of my favorite AI podcasts.

  8. Being a trained physicist and working in the field of machine learning, I really enjoyed this episode especially, besides all the other great episodes. Your podcast, Sean, is truly one of the best out there. Melanie and you briefly picked up on one of the upcoming fields of AI research – learning causality. I think it’s really a great topic and I just wanted to note that there is actually some active research going on in this field [1]. One of the driving figures in this area is Bernhard Schölkopf [2], of the Max Planck Society. From my view, it would be really great to hear an episode with the two of you. Maybe you think the same. Thanks so much!
    1: https://mitpress.mit.edu/books/elements-causal-inference
    2: https://www.is.mpg.de/~bs

  9. “It is interesting that even though we don’t understand the subjective experience of human beings, we somehow believe that it may be possible to create machines that have subjective experiences. Scientists who are working hard to advance machine computation are doing important work, and are to be congratulated.”

    It’s interesting to me that people think there is something magical about meat that can’t be achieved in silicon. As soon as Musk’s new company Neuralink rolls out its new chip, we will be able to record neural data at thousands of bits per second from each person; then train a neural network with enough data and you may have a conscious machine already.

Comments are closed.