336 | Anil Ananthaswamy on the Mathematics of Neural Nets and AI

Machine learning using neural networks has led to a remarkable leap forward in artificial intelligence, and the technological and social ramifications have been discussed at great length. To understand the origin and nature of this progress, it is useful to dig at least a little bit into the mathematical and algorithmic structures underlying these techniques. Anil Ananthaswamy takes up this challenge in his book Why Machines Learn: The Elegant Math Behind Modern AI. In this conversation we give a brief overview of some of the basic ideas, including the curse of dimensionality, backpropagation, transformer architectures, and more.
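As a small taste of the first of those ideas, here is an illustrative sketch (not from the book or the episode; the function name and parameters are my own) of the curse of dimensionality: as the number of dimensions grows, distances between random points concentrate, so the "nearest" and "farthest" neighbors of a point become nearly indistinguishable, which makes many geometric intuitions from low dimensions unreliable.

```python
# Illustrative sketch of the "curse of dimensionality": the relative spread
# of pairwise distances among random points shrinks as dimension grows.
import math
import random

def distance_spread(dim, n_points=100, seed=0):
    """Return (max - min) / min over all pairwise distances between
    n_points random points in the dim-dimensional unit cube."""
    rng = random.Random(seed)
    points = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [
        math.dist(points[i], points[j])
        for i in range(n_points)
        for j in range(i + 1, n_points)
    ]
    return (max(dists) - min(dists)) / min(dists)

# The relative spread drops steadily as the dimension increases.
for dim in (2, 10, 100, 1000):
    print(dim, round(distance_spread(dim), 3))
```

In two dimensions, some pairs of points land almost on top of each other while others sit far apart; in a thousand dimensions, nearly every pair is roughly the same distance apart.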

Support Mindscape on Patreon.

Anil Ananthaswamy received a master's degree in electrical engineering from the University of Washington, Seattle. He is currently a freelance science writer and feature editor for PNAS Front Matter. He was formerly the deputy news editor for New Scientist, a Knight Science Journalism Fellow at MIT, and journalist-in-residence at the Simons Institute for the Theory of Computing, University of California, Berkeley. He organizes an annual science journalism workshop at the National Centre for Biological Sciences in Bengaluru, India.

7 thoughts on “336 | Anil Ananthaswamy on the Mathematics of Neural Nets and AI”

  1. Pingback: Sean Carroll's Mindscape Podcast: Anil Ananthaswamy on the Mathematics of Neural Nets and AI - 3 Quarks Daily

  2. Great interview! Even though I didn’t follow some of the math, I think that with a bit of study it would be useful to understand, thanks. Is there a book that I could buy that would explain it? I think this would be great to have.

  3. He nailed it. What a great combo: storyteller and guy-who-developed-a-love-of-math. I followed the narrative and finally grokked the ‘attention mechanism’ after all the blogs, podcasts, and the famous paper. He rocked it. Fantastic guest! I love your podcast, and he’s in the top 3 best guests ever. And that’s coming from someone who has caught about 2/3 of the whole catalog.

  4. I was impressed by the presented clarity about the actual limits of AI in generating true novelty at an abstract level, until I read this article about distilling Kepler’s laws from Tycho’s data set:
    “Kepler: The Pioneer of Data Science and AI”
    https://www.qeios.com/read/0EEG86.2

    It appears AI has already broken through that barrier too!

  5. Anil is an excellent communicator. This was one of my favorite podcasts thus far on your show. It brought together many pieces of AI-related history and math which I’d seen only in fragments in college, in industry, and as a hobbyist interested in AI.
