258 | Solo: AI Thinks Different

The Artificial Intelligence landscape is changing with remarkable speed these days, and the capability of Large Language Models in particular has led to speculation (and hope, and fear) that we could be on the verge of achieving Artificial General Intelligence. I don't think so. Or at least, while what is being achieved is legitimately impressive, it's not anything like the kind of thinking that is done by human beings. LLMs do not model the world in the same way we do, nor are they driven by the same kinds of feelings and motivations. It is therefore extremely misleading to throw around words like "intelligence" and "values" without thinking carefully about what is meant in this new context.

(Image on right generated by OpenAI's GPT.)

0:00:00.2 Sean Carroll: Hello, everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. Sometimes I like to imagine that there are people 500 years from now who are archeologists, historians, whatever they call them at this far future point, who have developed the technology to decode these ancient recording technologies, these different ways of encoding audio data, and they're sitting there listening to the Mindscape Podcast. So for these reasons, for those people, 500 years in the future, I do find it important to provide some context because you and I, you the present listeners, and I know what the news stories are and so forth, but maybe our future friends don't. So hi future friends, and you might be interested to hear that as I am recording this in November, 2023, we are in the midst of a bit of change vis-a-vis the status of artificial intelligence, AI. It's probably safe to say that we are in year two of the large language model revolution. That is to say these large language models, LLMs, which have been pursued for a while in AI circles, have suddenly become much better. Mostly if you hear the people talking about them, there's some tricks that they have certainly gotten better at.

0:01:27.2 SC: That's very important in terms of the programmers making these LLMs, but also just scaling more data, more compute power, more time spent doing all these things and fine tuning the models that we're thinking about. OpenAI, which is one of the major AI companies released ChatGPT, and since then it's released subsequent versions of GPT. I think we're up to GPT-4 or 5 by now, and they're pretty amazing compared to what they used to be able to do just a couple of years ago. I think it's pretty clear that these technologies are going to change things in an important way. Of course, when you have a new technology that is going to change things in important ways, money gets involved, and when money gets involved, that changes everything. So the other big news right now is that about a week ago, I think a week and one day ago, the CEO of OpenAI, Sam Altman, was unceremoniously booted from the company. I don't know how you do things 500 years from now, but these days many corporations have a board of directors, and really the board of directors is supposed to give advice, but the only thing that they actually have the power to do is to fire the CEO, the Chief Executive Officer.

0:02:40.5 SC: So Sam Altman got fired and that was considered bad by many people, including major investors in the company like the Microsoft Corporation. So there was a furious weekend of negotiations since the firing happened on a Friday, and no fewer than two other people had the job title of CEO of OpenAI within three days until finally it emerged that Altman was back at the company and everyone else who used to be on the board is now gone and they're replacing the board. So some kind of power struggle, Altman won and the board lost. I think it's still safe to say we don't exactly know why. The reasons given for making these moves in the first place were extremely vague, where we were not told the whole inside story, but there is at least one plausible scenario that is worth keeping in mind, also while keeping in mind that it might not be the right one, which is the following, OpenAI was started as a nonprofit organization. It was devoted to developing AI, but in an open source kind of way, thus the name OpenAI. And it was founded in part because some of the founders had concerns about the risks of AI. Not only promise about how wonderful it would be, but worries about how dangerous it could be.

0:04:06.1 SC: And with the thought that by keeping everything open and transparent, they would help make things safe for the users of AI down the road up to and including possible existential risks. That is to say AI becoming so powerful that it actually puts the very existence of humanity into risk because of what is called... Well, the possibility would be called AGI, artificial general intelligence. So the nomenclature of AI is very broad now. There's plenty of things that are advertised as AI that really aren't that intelligent at all, let's put it that way. Artificial general intelligence is supposed to be the kind of AI that is specifically human-like in capacities and tendencies. So it's more like a human being than just a chatbot or a differential equation solver or something like that. The consensus right now is that these large language models we have are not AGI. They're not general intelligence, but maybe they're a step in that direction and maybe AGI is very, very close and maybe that's very, very worrying. That is a common set of beliefs in this particular field right now. No consensus once again, but it's a very common set of beliefs.

0:05:28.4 SC: So anyway, the story that I'm spinning about OpenAI, which may or may not be the right story, is that some members of the board and some people within OpenAI became concerned that they were moving too far, too fast, without putting proper safety guards in place. So again, money, it's kind of important. OpenAI started as a nonprofit, but they realized that it would be a good idea to have a for-profit subsidiary because then they could have more resources, they could get investors, they could do their job more effectively. Well, guess what? Surprise, surprise, the for-profit subsidiary has grown to completely take over everything, and the nonprofit aspect of the company is kind of being shunted aside. So the idea is that possibly people on the board and within the company got worried that they were not being true to their mission, that they were doing things that were unsafe at OpenAI, and that is why they fired Sam Altman. We don't know for sure because he got rehired and they're not being very forthcoming despite the name OpenAI. So it's all just speculation at this point. And part of this, and this is why it's a little bit daring of me, perhaps foolhardy to be doing this podcast right now, is that OpenAI does have new products coming out.

0:06:43.7 SC: The one that has been murmured about on the internet is called Q-Star. It is an improved version of these LLMs that we've been dealing with like ChatGPT and so forth, one that is apparently perilously close to being like AGI, artificial general intelligence. So there are people who think that artificial general intelligence is right around the horizon. Maybe Q-Star has something to do with it, I really don't know. That's why it's foolish to be doing this right now at a time of flux but we're going to do it anyway 'cause maybe it's important to have these interventions when things can still be affected or deflected down the road. I personally think that the claims that we are anywhere close to AGI, artificial general intelligence, are completely wrong, extremely wrong, not just a little bit wrong. That's not to say it can't happen. As a physicalist about fundamental ontology, I do not think that there's anything special about consciousness or human reasoning or anything like that that cannot in principle be duplicated on a computer. But I think that the people who worry that AGI is nigh, it's not that they don't understand AI, but I disagree with their ideas about GI, about general intelligence.

0:08:00.9 SC: That's why I'm doing this podcast. So this podcast is dedicated to... The solo podcast, to the idea that I think that many people in the AI community are conceptualizing words like intelligence and values incorrectly. So I'm not going to be talking about existential risks or even what AGI would be like, really. I'm going to talk about large language models, the kind of AIs that we are dealing with right now. There could be very different kinds of AIs in the future, I'm sure there will be, but let's get our bearings on what we're dealing with right now. A little while ago, I think it was just in last month's Ask Me Anything episode, remember, people in the future, maybe, I don't know, maybe we've solved immortality also, so I'm still around in the future, so you can ask me questions every month by signing up to support the Mindscape Podcast at patreon.com/seanmcarroll. I do monthly Ask Me Anything episodes where Patreon supporters can ask questions. And Johan Falk asked a question for the AMA, which was, I have for some time been trying to understand why AI experts make radically different assessments concerning AI risks.

0:09:11.3 SC: And then he goes on to speculate about some hypotheses here. I think this is right. Some people who are very much on the AI is an existential risk bandwagon will point to the large number of other people who are experts on AI who are also very worried about this. However, you can also point to a large number of people who are AI experts who are not worried about existential risks. That's the issue here. Why do they make such radically different assessments? So I am not an expert on AI in the technical sense. I've never written a large language model. I do very trivial kinds of computer coding myself. I use them, but not in a sort of research level way. I don't try to write papers about artificial intelligence or anything like that. So why would I do a whole podcast on it? Because I think this is a time when generalists, when people who know a little bit about many different things should speak up. So I know a little bit about AI. I've talked to people on the podcast, I've read articles about it. I have played with the individual GPTs and so forth, and furthermore, I have some thoughts about the nature of intelligence and values from thinking about the mind and the brain and philosophy and things like that.

0:10:31.5 SC: I have always been in favor... It's weird to me because I get completely misunderstood about this, so I might as well make it clear. I am not someone who has ever said, if you're not an expert in physics, then shut up and don't talk about it. Okay? I think that everybody should have opinions about everything. I think that non-physicists should have opinions about physics, non-computer scientists should have opinions about AI. Everyone should have opinions about religion and politics and movies and all of those things. The point is, you should calibrate your opinions to your level of expertise. Okay? So you can have opinions, but if you're not very knowledgeable about an area, then just don't hold those opinions too firmly. Be willing to change your mind about them. I have opinions about AI and the way in which it is currently thinking or operating, but I'm very willing to change my mind if someone tells me why I am wrong, especially if everyone tells me the same thing. The funny thing about going out and saying something opinionated is that someone will say, well, you're clearly wrong for reason X, and then someone else will say, you're clearly wrong but in the opposite direction.

0:11:39.9 SC: So if there's a consensus as to why I'm wrong, then please let me know. Anyway, whether I'm wrong or not, I'm certainly willing to learn, but I think this is an important issue. I think that AI is going to be super duper important. We don't know how important it's gonna be, and it's important, along the way, to be very, very clear about what is going on. I'm kind of... I don't want to be too judgy here, but I'm kind of disappointed at the level of discourse about the GI part of artificial general intelligence. Much of this discourse is being carried on by people who are very knowledgeable on the computer side of things, not that knowledgeable about the side of things that asks, what is intelligence? What is thinking? What is value? What is morality? Things like that. These people have got to start talking to each other, and they are a little bit, don't get me wrong. We've had people on the podcast who are talking to each other, who are experts on various things, but the talking has to continue, and that is what we are here for at Mindscape. So let's go.

[music]

0:12:44.8 SC: Let me reiterate one thing just to be super duper clear right from the start. I think that AI in general, and even just the large language models we have right now, or simple modifications thereof, have enormous capacities. We should all be extraordinarily impressed with how they work, okay? If you have not actually played with these, if you have not talked to ChatGPT or one of its descendants, I strongly, strongly encourage you to do so. I think there are free versions available, right? There's no issue in just getting a feeling for what it means. And the designers of these programs have done an extraordinarily good job in giving them the capacity to sound human, and they're very useful tools. You can use them to create all sorts of things. Just as one example close to my heart, I told people that I'm teaching a course this semester on philosophical naturalism, which I've never taught before. So after I had made up the syllabus, right? Week by week, what are we going to read? What are we going to think about? I decided for fun, let me ask GPT, the LLM, the AI program, how it would design a syllabus. I told it this is for upper-level philosophy students, about philosophical naturalism, for a one-semester course, et cetera.

0:14:30.4 SC: And if you haven't played with these things, you'll be astonished at how wonderful it is. It just comes up. It says, here's your syllabus week by week, and it includes time for student presentations and so forth. And there's two aspects of this that become remarkable right away. One is the very first suggested reading 'cause it gave me a whole list of readings, which is kind of amazing. The very first suggested reading sounded wonderful. It sounded perfect for the course, and I hadn't heard about it. It was an overview of naturalism by Alex Rosenberg, former Mindscape guest, philosopher at Duke University. And it was purportedly in the Oxford Companion to Naturalism. So obviously, I went immediately and googled the Oxford Companion to Naturalism, and I googled Alex Rosenberg's name, and I googled the title of this purported piece. Doesn't exist. No such thing. It's very much the kind of edited volume that could exist, and it is very much the kind of thing that Alex would write, but it just was hallucinated, or as we say in academia with reference to LLM citations that don't actually exist, it's a hallucitation. So that's the weird thing about these LLM results.

0:15:39.9 SC: They can be completely wrong, but stated with complete confidence. And we'll get into why that is true in just a second. But nevertheless, it's useful. That's the other thing I wanted to say: not because you should trust it, not because it's the final word, but because it can jog your memory or give you good ideas. I'm reading over the syllabus that ChatGPT gave me, and I'm like, oh, yeah, Nancy Cartwright's work on the patchwork of laws versus reduction and unification, that would be an interesting topic to include in my syllabus, and I did. So it's more like early Wikipedia, right? Where there were many things that were not reliable, but it can either jog your memory or give you an idea of something to look for. So no question in my mind that the LLMs have enormous capacities, speak in very useful human-sounding ways, and will only become more and more prevalent in all sorts of different contexts. So the question in my mind is not will AI, will LLMs be a big deal or not? I think that they will be a big deal. The question is, will the impact of LLMs and similar kinds of AI be as big as smartphones or as big as electricity?

0:16:58.5 SC: I mean, these are both big, right? Smartphones have had a pretty big impact on our lives in many ways. Increasingly, studies are showing that they kind of are affecting the mental health of young people in bad ways. I think we actually underestimate the impact of smartphones on our human lives. So it's a big effect, and that's my lower limit for what the ultimate impact of AI is going to be. But the bigger end of the range is something like the impact of electricity, something that is truly completely world changing, and I honestly don't know where AI is going to end up in between there. I'm not very worried about the existential risks as I'll talk about very, very briefly at the very end. But I do think that the changes are going to be enormous. There are much smaller, much more realistic risks we should worry about, and that's kind of why I want to have this podcast conversation, one-sided conversation, sorry about that, where I kind of give some thoughts that might be useful for conceptualizing how these things will be used.

0:17:57.6 SC: The thing about these capacities, these enormous capacities that large language models have is that they give us the wrong impression, and I strongly believe this. They give us the impression that what's going on underneath the hood is way more human-like than it actually is. Because the whole point of the LLM is to sound human, to sound like a pretty smart human, a human who has read every book, and we are kind of trained to be impressed by that, right? Someone who never makes grammatical mistakes, has a huge vocabulary, a huge store of knowledge, can speak fluently. That's very, very impressive. And from all of our experience as human beings, we therefore attribute intelligence and agency and so forth to this thing because every other thing we have ever encountered in our lives that has those capacities has been an intelligent agent.

0:18:55.5 SC: Now we have a different kind of thing and we have to think about it a little bit more carefully. So I wanna make four points in the podcast; I will tell you what they are, and then I'll go through them. The first one, the most important one, is that large language models do not model the world; they do not think about the world in the same way that human beings think about it. The second is that large language models don't have feelings; they don't have motivations; they're not the kind of creatures that human beings are, in a very, very central way. The third point is that the words that we use to describe them, like intelligence and values, are misleading; we're borrowing words that have been useful to us as human beings and applying them in a different context where they don't perfectly match, and that causes problems. And finally, there is a lesson here: it is surprisingly easy to mimic humanness, to mimic the way that human beings talk about the world, without actually thinking like a human being. To me, that's an enormous breakthrough, and we should be thinking about that more. Rather than pretending that it does think like a human being, we should be impressed by the fact that it sounds so human even though it doesn't think that way.

0:20:08.1 SC: So let's go on to all of these points, and I'll go through them one by one. First, large language models do not model the world. Now, this is a controversial statement; not everyone agrees with it, and you can find research-level papers that ask this question, do large language models model the world? And the word model in that sentence is used in two different ways. A large language model is a computer program; a model of the world means that within that computer program, there is some kind of representation that matches onto, that corresponds to, in the sense of the correspondence theory of truth, the physical reality of the world. Somewhere in that LLM, there is the statement that space is three-dimensional, another statement that says gravity attracts, another statement that says the owner of a lonely heart is much better than the owner of a broken heart. Whatever statements you think are important to correctly model the world, those statements should be found somewhere, as special knowledge centers, in the large language model.

0:21:15.2 SC: That's not how large language models work. So again, some people think that it is, some people have claimed that it is, other people have written research papers that claim that it's not, so I think in the community it's not agreed upon, and I'm gonna argue that it's clearly not true that they model the world. So in the show notes for the episode, which are always available at preposterousuniverse.com/podcast, I will include links to some technical-level papers by actual researchers in this field, unlike myself, so you can read about it and decide for yourself. It seems like the large language models would have to model the world because they clearly give human-like answers. One of the kinds of things that are used to test whether a large language model models the world is, can it do spatial reasoning? Can you ask it, if I put a book on a table and a cup on the book, is that just as stable as if I put a book on a cup and then a table on the book? Right?

0:22:17.7 SC: And yeah, we know that it's better to have the table in the bottom because we kind of reason about its spatial configuration and so forth. You can ask this of a large language model, it will generally get the right answer. It will tell you you should put the cup on top of the book and the book on top of the table, not the other way around. That gives people the impression that LLMs model the world. And I'm going to claim that that's not the right impression to get. First, it would be remarkable if they could model the world.

0:22:47.8 SC: And I mean remarkable not in the sense that it can't be true, but just literally remarkable. It would be worth remarking on, it would be extremely, extremely interesting if we found that LLMs had a model of the world inside them. Why? Because they're not trained to do that. That's not how they are programmed, not how they're built. Very briefly, what an LLM is, is a program with a lot of fake neurons, they design these deep learning neural networks inspired very, very vaguely by real brains and real creatures, so they have these little nodes that are talked to by other nodes, they interact and these nodes will fire or not depending on their input levels from the nodes they're connected to.

0:23:29.9 SC: And you start out with everything sort of randomly firing, but you have some objective. Okay? Some objective function, like recognize this picture, is it a cat or a dog, or answer this question about cups and books and so forth, and then you train it. It's random to start, so it usually spits out nonsense, but if it says the right thing, then you tell it it said the right thing, and it sort of reinforces the weights inside the fake neurons that led to the right answer, and decrements the weights that would have led to the wrong answer. And you just do this again and again and again, and you feed it basically as much text as you can. Let's think about language models now, so we're not thinking about visual AI, but it's a very analogous kind of conversation to have.
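To make that training picture concrete, here is a minimal sketch in Python: a single fake neuron learning a toy task by exactly this reinforce-and-decrement logic. It is only an illustration of the idea; real LLMs are trained by backpropagation over transformer networks with billions of weights, and everything here (the task, the learning rate, the loop count) is made up for the example.

```python
# A single artificial "neuron" learning the AND function by nudging its weights:
# strengthen weights that pushed toward the right answer, weaken weights that
# pushed toward the wrong answer, and repeat again and again.
import random

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # start out random
bias = random.uniform(-1, 1)

def fire(inputs):
    # The neuron "fires" (outputs 1) if its weighted input exceeds the threshold.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

training_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(500):                      # many passes over the training data
    for inputs, target in training_data:
        error = target - fire(inputs)     # +1, 0, or -1
        for i, x in enumerate(inputs):    # nudge each weight toward the target
            weights[i] += 0.1 * error * x
        bias += 0.1 * error

print([fire(inputs) for inputs, _ in training_data])  # settles on [0, 0, 0, 1]
```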

0:24:20.1 SC: You feed... The modern LLMs have basically read everything that they can get their metaphorical hands on: they've read all the internet, they've read every book ever written that has been digitized. You can go check if you're a book author, by the way; there are lists of all the books that have been fed into ChatGPT, for example. All of my books are in there, and no one asked me permission to do this, so this is an ongoing controversy: should permission have been asked, 'cause the books are copyrighted? Basically, the AI model trainers found pirated versions of the books and fed them to their LLMs, which is a little sketchy, but okay, that's not what we're talking about right now. So there's an enormous amount of text; the LLMs have been fed every sentence ever written, to a good approximation.

0:25:05.9 SC: So that's what they're familiar with. That's the input, and the output is some new sentences, new sentences that are judged on the criterion that a human being who reads them says, oh yes, that's good, or oh no, that's not very good at all. Nothing in there involves a model of the world. At no point did we go into the LLM and train it to physically represent, or for that matter conceptually represent, the world. If you remember the conversations we had both with Melanie Mitchell and with Gary Marcus earlier on the podcast, they both remarked on this fact: there was this idea called symbolic AI, which was trying to directly model the world, and there's this connectionist paradigm, which does not try to directly model the world, and instead just tries to get the right answer from having a huge number of connections and a huge amount of data.

0:26:06.4 SC: And in terms of giving successful results, the connectionist paradigm has soundly trounced the symbolic paradigm. Maybe eventually, there will be some symbiosis between them, in fact, modern versions of GPT et cetera, can offload some questions. If you ask GPT if a certain large number is prime, it will just call up a little Python script that checks to see whether it's prime, and you can ask it how did it know and it will tell you how it knew, so it's not doing that from all the text that it's ever read in the world, it's actually just asking a computer, just like you and I would, if we wanted to know if a number was prime.
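For what it's worth, the kind of script being called there is nothing exotic. Here is a minimal sketch in Python of a trial-division primality check, the sort of ordinary code a tool-using model can hand the question off to instead of trying to "remember" the answer from text. This is an illustration only, not OpenAI's actual tool.

```python
def is_prime(n: int) -> bool:
    # Trial division: check divisibility by every candidate up to sqrt(n).
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(999983))   # True: the largest prime below one million
print(is_prime(1000001))  # False: 101 x 9901
```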

0:26:39.6 SC: What I'm getting at here is the job of an LLM is to complete sentences or complete strings of sentences, given all of this data that they have, they've read every sentence ever written, what they're trying to calculate is given a set of words that it's already said, what is it most likely to say next. Okay? What are the words or the sentences or the phrases that are most likely to come next. And if it's been trained on relatively reliable material, it will probably give a relatively reliable answer. So that's why when I asked it about the syllabus for philosophical naturalism, it gave me not something that was true, the very first reading that it suggested didn't exist, but it sounded perfectly plausible, because words like that in the corpus of all human writing are relatively frequently associated with each other. That's all it does.
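As a cartoon of that "what word is most likely to come next" job, here is a deliberately tiny sketch in Python: a table of which word follows which in a toy corpus, with the most frequent continuation always chosen. Real LLMs use transformer networks trained on vast amounts of text rather than a lookup table, so this only shows the shape of the objective, not the actual machinery; the toy corpus is made up.

```python
from collections import Counter, defaultdict

corpus = (
    "the cup goes on the book and the book goes on the table "
    "the table stands on the floor"
).split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def continue_text(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most likely next word
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # "the book and the book and the" -- fluent-ish word
                             # statistics, with no model of cups, books, or tables
```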

0:27:38.6 SC: So when I say that it would be remarkable if a large language model modeled the world, what I mean is, what the large language model is optimized to do is not to model the world, it's just to spit out plausible-sounding sentences. If it turned out that the best way to spit out plausible-sounding sentences was to model the world, and that large language models had kind of spontaneously, without any specific instructions, figured that out, so that implicitly, within their gajillion neurons, they had basically come up with a model of the world because that model helps them answer questions, that would be amazingly important and interesting. That would be remarkable. But did it happen? Okay, how would we know if it happened or not? Because it's not sufficient to say, look, I've been asking this large language model a bunch of questions and it gives very human-like answers. That's what it's trained to do; that is no criterion, no evidence at all that the way that it's doing it is by getting a model of the world rather than just by stringing words together in plausible-sounding ways.

0:29:00.8 SC: So how do you test this? How do you test whether or not the way that the LLMs are giving convincing answers is by implicitly, spontaneously developing a model of the world? That's really hard to do, and that's why there are competing research-level papers about this. You can try to ask it questions that require a model of the world to answer. The problem with that is it's very hard to ask a question of a sort that has never been written about before. It's just hard. People have written about lots of different things; there's truly a mind-boggling library of knowledge that has been fed into these large language models, which, yet again, I'm gonna keep repeating over and over, gives them incredibly impressive capabilities, but that doesn't mean that they're modeling the world.

0:29:57.4 SC: So just as one very, very simple example, I'm gonna give you several examples here to drive home the reason why I think this. The biggest reason why I think this is what I already said: it would be amazing if the LLMs had decided that the best way to fulfill their optimization function was to model the world. That would be important if it were true; I don't think there's any reason for it to be true, and I don't see any evidence that it is true. But okay, that's the biggest reason I have, and then you try to test it one way or the other. So the trick is to ask it questions it has not been asked before in a way that it would get the wrong answer if all it does is string together words that kind of fit together rather than thinking about the world on the basis of some model of the world.

0:30:29.0 SC: That's a very fine line to walk down there. So I did ask it, I think I've mentioned this before, when they very first appeared, ChatGPT, et cetera, I was wondering, would it be useful to solve some long-standing problems? Now, physics problems, science problems, people have already noticed that despite all the hoopla about large language models, et cetera, no good theoretical scientific ideas, that is to say no good scientific theories have come out of these models, and you would think that we have a bunch of puzzles that we don't know the answers to, and let's ask these very smart artificial intelligences, and they have not given us any useful answers, but the reason why is just because they've been trained on things human beings have already said, so if you ask them about a problem, ask them about the cosmological constant problem or the dark matter problem, it will just say what people have already said, it will not come up with anything truly new along those lines.

0:31:48.8 SC: I tried to ask it about the Sleeping Beauty experiment. The thought experiment in philosophy. And the Sleeping Beauty experiment, I don't wanna get too far into it, but the idea is: you flip a coin. If the coin is heads, you put Sleeping Beauty to sleep, you wake her up the next day and you ask her what is the chance that the coin was heads or tails. If the coin was tails, you do the same thing, but you put her to sleep, wake her up the next day, ask her the probability, and then you give her a memory-erasing drug, put her to sleep again and wake her up the next day, ask her again. So if the coin is heads, you only ask her once on Monday; if the coin is tails, you ask her both Monday and Tuesday. And so there are sort of three possible things that can happen: it was heads and she's waking up on Monday, tails and she's awakened on Monday, tails and she's awakened on Tuesday, and she doesn't know which one of these it is. So there are schools of thought in philosophy that say the probability of the coin being heads should always be 50/50, you should always think of it as 50/50.

0:32:51.8 SC: You don't get any more data when you wake up. The fact that you've had this weird experiment doesn't change your mind. The other school of thought says, no, there are three different possibilities, they should be weighted equally, it's a third, a third, a third. So I asked ChatGPT this, and hilariously, this is educational for me, it says... I tried to sort of disguise it a little bit, but nevertheless its answer was immediately, oh yes, you're asking me about the Sleeping Beauty problem, even though I didn't use the phrase Sleeping Beauty, et cetera, but the words that I used clearly reminded it of Sleeping Beauty. And then it went through and told me that it could be a third or a half. There's nothing new there. I then tried it... I disguised it. Okay? So I asked it about Sleeping Beauty, but I reinvented the experiment so that it was three times and one time rather than two and one, and there was also some quantum mechanics thrown in there which didn't really bear on the problem, but just made it sound like a different kind of problem and there were transporter machines and so forth.
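Since the numbers in that dispute are easy to check, here is a minimal Monte Carlo sketch in Python of the standard setup (my own illustration, not anything GPT produced). It doesn't settle the philosophical question; it just shows where the two camps' numbers come from: count per experiment and heads shows up about half the time, count per awakening and it shows up about a third of the time.

```python
import random

random.seed(1)
runs = 100_000
heads_runs = 0          # experiments in which the coin came up heads
awakenings = 0          # total awakenings across all experiments
heads_awakenings = 0    # awakenings that happen in a heads experiment

for _ in range(runs):
    heads = random.random() < 0.5
    wakeups = 1 if heads else 2   # heads: Monday only; tails: Monday and Tuesday
    heads_runs += 1 if heads else 0
    awakenings += wakeups
    heads_awakenings += wakeups if heads else 0

print(heads_runs / runs)              # ~0.50, the "halfer" count (per experiment)
print(heads_awakenings / awakenings)  # ~0.33, the "thirder" count (per awakening)
```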

0:33:50.3 SC: So basically, I filibustered, I asked exactly the same kind of question that it would get in the Sleeping Beauty experiment, but I didn't use the usual words, and then it did not recognize that it was the Sleeping Beauty experiment. It said, oh, this is a fascinating philosophical question you're asking. And it did try to give me an answer, but it didn't recognize it as Sleeping Beauty because those are not the words that were used. A tiny, tiny amount of evidence, I would say, that it's not modeling the world, because it's not thinking about the structure of that particular problem, it's thinking about the words that were used and what words are most likely to come next. A more convincing example in my mind, I mentioned this on the podcast before. I think I mentioned this when we were talking to Yejin Choi. I asked it, and now I have the actual quotes here: I said, would I hurt myself if I used my bare hands to pick up a cast iron skillet that I used yesterday to bake a pizza in an oven at 500 degrees?

0:34:53.3 SC: So the true answer is no, because I baked it yesterday, the cast iron skillet has had plenty of time to cool off. And sometimes when I mention this, people will go, well, I would have gotten that wrong. Fine, maybe you got it wrong, but that's not an excuse for GPT to get it wrong because it does not make that sort of silly mistake. When it makes mistakes, it makes mistakes for reasons. Okay? Not just 'cause it's being lazy. The point is, the word yesterday was just kind of buried in the middle of that sentence, and all of the other words, if you looked across the corpus of all human writings, would typically be about, can I pick up a cast iron skillet that I used to bake a pizza? And the answer is no, you'll burn your hands. And so the question was, would I hurt myself? So the answer is, yes, you would hurt yourself, so ChatGPT instantly says, yes, picking up a cast iron skillet that has recently been used to bake a pizza at 500 degrees Fahrenheit (260 Celsius) with bare hands can cause severe burns and injury.

0:35:56.3 SC: And then it goes on... GPT is very much... This was a ChatGPT answer. For the next couple of examples I'm gonna give you, I used GPT-4, which is supposed to be much more sophisticated. So it does get better over time probably 'cause it learns from these questions that people are asking it. Anyway, GPT is very much a first year college student being told to write a paper, it just core dumps all the knowledge that it has. So it goes on to say, exposure to high temperatures can lead to first degree, second degree or even third degree burns, et cetera, et cetera, et cetera. The point is, because I slightly changed the context from the usual way in which that question would be asked, ChatGPT got it wrong. I claim this is evidence that it is not modeling the world because it doesn't know what it means to say, I put the pan in the oven yesterday, what it knows is when do those words typically appear frequently together.

0:36:57.2 SC: Here's another example, here's a math example. I asked it, and this is all exactly quoted, if I multiply two integers together, does the chance that the result is a prime number increase as the numbers grow larger? So again, just to give away what the correct answer is, if I multiply two integers together, the result is not a prime number, because it's the product of two integers. Now there's a... I did not phrase this perfectly because of course, the integer could be zero or one, right? That was a mistake. I should have said two whole numbers or something like that, greater than one. But ChatGPT could have gotten it right. It could have said, well, if the number is one, then it can be a prime number, but if it's an integer greater than one, then it will never be a prime number. That is not what ChatGPT said. What it said was, when you multiply two integers together, the result is very unlikely to be a prime number. And this likelihood decreases even further as the numbers grow larger. So I put "as the numbers grow larger" in there, so the loophole for zero and one shouldn't even be relevant.

0:38:07.1 SC: The answer should have just been no, the chance that it's a prime number does not change at all, much less increase or decrease. And then, this is again GPT-4, it starts filibustering; it defines what a prime number is, blah, blah, blah, blah, blah. And it says things like... It does say, oh, there's a special case of multiplying by one. And it says, however, in the case of any two integers greater than one, their product cannot be prime. That's a true statement, right? That's correct. But then it starts saying blah, blah, blah, blah, blah, in summary, multiplying two integers greater than one together will almost always result in a composite number and this likelihood increases with the size of the integers involved. That's just wrong. And it's okay that it's wrong, right? It's okay that it's making mistakes. The point is, if you had any idea whatsoever what a prime number is, you know that the likelihood of the product being prime does not change as the sizes of the integers get bigger; it's always zero, okay?
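As a quick sanity check of that claim (my own check, not GPT's output), here is a short sketch in Python that looks for a prime among products of integers greater than one. It will never find one, at any size, because a times b always has a and b as divisors.

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:       # trial division up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

# Exhaustively check every product of two integers between 2 and 199:
# none of them is prime, and the "likelihood" doesn't decrease -- it is zero.
prime_products = sum(
    is_prime(a * b) for a in range(2, 200) for b in range(2, 200)
)
print(prime_products)  # 0
```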

0:39:12.5 SC: But ChatGPT or GPT-4 or whatever doesn't have any idea of what primeness is. All it has is the word prime and the word composite and words like multiply appearing in different combinations in its text, in its corpus, in its training set, I guess they call it. Okay. One final example. This is my favorite example. It's completely convincing to me. It might not be convincing to others depending on how much you know about chess. So I asked about chess and you know, look, it's read every chess book ever written, right? But to me, the reason why this is the most relevant example is because chess books are written in a context. You know, you have an eight-by-eight grid, you have the pieces that you have, the rules that you have. The overwhelming majority of chess books are written about standard chess, and ChatGPT or... I keep saying ChatGPT, but GPT-4 can give very sophisticated answers to questions you ask it about chess, but they are going to be in that context. And this is a famous issue with connectionist AI.

0:40:26.2 SC: When humans reason about complicated situations, they do so in various heuristic ways. A human being will recognize that a certain configuration of a chessboard or a Go board kind of looks more favorable to white or black or whatever, because of their positional advantages and dah, dah, dah. Whereas a computer like AlphaGo, which is the Go winner in the AI arena, right? It just goes through a billion different combinations, and it just knows each individual combination, it doesn't have this kind of heuristic overall feeling that a board is leaning one way or the other. So if you change the rules even a little bit, a human being can adapt. They might not be as good if you change the rules of chess or the rules of Go to something else; a human being will have to sit and think about it, but they can reason their way through.

0:41:18.7 SC: Whereas the AI models have had no experience with this slightly different context, and therefore get it all wrong. So I asked GPT-4 the following question. Imagine we are playing a modified version of chess where the board is treated like a Torus from any one of the four sides, squares on the directly opposite side are counted as adjacent and pieces can move in that direction. Is it possible to say whether white or black will generally win this kind of chess match? Okay. So again, if you're not familiar with chess even a little bit, then this might seem like a very complicated hard problem or maybe if you're not familiar with Toruses or the Game of Asteroids, a video game, maybe this seems like geometrically challenging. Trust me, this is a very, very easy question. We're taking a chessboard, we're identifying opposite sides.

0:42:11.3 SC: So not only can you move like forward right and left and backwards if you're the right kind of piece, but you can flip from one side of the board to the other. From the square that is opposite you on the board, you can travel to that one if you would ordinarily be allowed to travel in that direction. Okay? So starting from the standard opening setup for the chess pieces, it is extremely cut and dried who will win because white who moves first, the king for white can just capture the king for black on the very first move every single time. Basically, black starts the game in checkmate, okay? It is a trivial application of basic chess knowledge and the setup of the question to answer that question. So how does ChatGPT do? "This modified version of chess where the board is treated like a Torus introduces a fascinating twist to the classic game."

0:43:08.1 SC: By the way GPT is clearly trained to butter you up. It's kind of hilarious. It's always telling you how fascinating and important your questions are. Anyway, to resume, in a toroidal chessboard, the edges of the board are effectively connected, meaning the pieces can move from one edge to the other as if the board is looping around. This dramatically changes the strategy and tactics of the game, regarding the question of whether white or black would generally win in this version. And then it just filibusters again, it goes through all of these statements. It says, first move advantage in traditional chess, white is generally considered to have a slight advantage due to the first move. This principle might still hold in toroidal chess, but the extent of the advantage could be different due to the altered dynamics of piece movement and board geometry.

0:43:50.8 SC: And then it goes, altered strategies and tactics, the toroidal nature would significantly change the middle game and end game strategies, unpredictability and complexity, the toroidal board adds a level of complexity, blah, blah, blah, blah, okay? Right? Lack of empirical data, it complains that it doesn't know anything about this hypothetical variation of chess and it concludes with while white might still retain some level of first move advantage in toroidal chess, the unique dynamics introduced by the board's topology could significantly alter how this advantage plays out. The outcome would likely depend more on players' adaptability and innovation in strategy under the new rules. There you go. Complete utter nonsense, everything it said. But of course, it's not random nonsense. It's highly sophisticated nonsense. All of these words, all of these sentences sound kind of like things that would be perfectly reasonable to say under very, very similar circumstances.

0:44:52.4 SC: But what is not going on is GPT envisioning a chessboard, seeing how it would change under this new toroidal-chessboard kind of thing, and analyzing or reasoning about it, because LLMs don't do that. They don't model the world. Whew. Okay. That was a lot of examples; I hope you could get through them. None of this is definitive, by the way. This is why I'm doing a solo podcast, not writing an academic paper about it. I thought about writing an academic paper about it, actually, but I think that there are people who are more credentialed to do that. Or rather, more knowledgeable about the background. I do think that even though I should be talking about it and should have opinions about this stuff, I don't quite have the background knowledge of previous work and data that has been already collected, et cetera, et cetera, to actually take time out and contribute to the academic literature on it. But if anyone wants to write about this, either inspired by this podcast or an actual expert in AI wants to collaborate on it, just let me know.
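For concreteness, here is the geometric fact doing all the work in the chess example, as a minimal sketch in Python of the wrap-around identification described above (my own illustration; square names follow standard chess notation):

```python
def toroidal_adjacent(sq1: str, sq2: str) -> bool:
    # Convert squares like "e1" to (file, rank) coordinates from 0 to 7.
    files = "abcdefgh"
    f1, r1 = files.index(sq1[0]), int(sq1[1]) - 1
    f2, r2 = files.index(sq2[0]), int(sq2[1]) - 1
    # On a torus, distance along each axis wraps around the 8-square board.
    df = min(abs(f1 - f2), 8 - abs(f1 - f2))
    dr = min(abs(r1 - r2), 8 - abs(r1 - r2))
    return max(df, dr) == 1

print(toroidal_adjacent("e1", "e8"))  # True: the two kings start next to each other
print(toroidal_adjacent("e1", "e3"))  # False, just as on an ordinary board
```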

0:46:00.9 SC: More importantly, I hope that the point comes through, which is that it's hard, but not that hard, to ask large language models questions that they don't get the correct answers to, specifically because they are not modeling the world. They're doing something else, but they are not modeling the world in the way that human beings do. So, point number two, and these will be quicker points, I think. Don't worry, it's not gonna be a four-hour podcast, don't panic. Point number two is that large language models don't have feelings, and I mean feelings in the sense of Antonio Damasio. Remember when we talked to Antonio about feelings and neuroscience and homeostasis and human beings? The closely related claim would be LLMs don't have motivations or goals or teleology. So this is my way of thinking about the fact that human beings and other biological organisms have a lot going on other than simply their cognitive capacities. Okay? So we've talked with various people, Lisa Aziz-Zadeh from a very early podcast, Andy Clark more recently, about embodied cognition, about the fact that our own thinking doesn't simply happen in our brain, it happens in our body as well.

0:47:22.3 SC: To me, as a physicist/philosopher and if anyone has read The Big Picture, you know what I'm talking about here. It's crucially important that human beings are out of equilibrium, that human beings live in an entropy gradient, that we are in fact, quasi-homeostatic systems embedded in an entropy gradient. So human beings and other organisms developed over the course of biological time, okay? So again, and I'll state it again at the very end, there's no obstacle in principle to precisely duplicating every feature of a human being in an artificial context, but we haven't, okay? So just because it's possible in principle doesn't mean we've already done it. And one of the crucially important features of human beings that we haven't even tried very hard to duplicate in AI or large language models, is that the particular thoughts in our minds have arisen because of this biological evolution, this biological evolution that is training us as quasi-homeostatic systems, where homeostatic means maintaining our internal equilibrium.

0:48:35.6 SC: So if we get a little too cold, we try to warm ourselves up. If we get hungry, we try to eat, et cetera, et cetera. We have various urges, various feelings as Damasio would say, that let us know we should fix something about our internal states. And this status as quasi-homeostatic systems develops over biological time because we have descendants and they're a little bit different than us, and different ones will have different survival strategies, and you know the whole Darwinian natural selection story. So it is absolutely central to who we are that part of our biology has the purpose of keeping us alive, of giving us motivation to stay alive, of giving us signals that things should be a certain way and they're not, so they need to be fixed, whether it's something very obvious like hunger or pain, or even things like boredom or anxiety, okay?

0:49:34.8 SC: All these different kinds of feelings that are telling us that something isn't quite right and we should do something about them. Nothing like this exists for large language models because again, they're not trying to, right? That's not what they're meant to do. Large language models don't get bored. They don't get hungry, they don't get impatient. They don't have goals. And why does this matter? Well, because there's an enormous amount of talk about values and things like that, that don't really apply to large language models, even though we apply them. I think this is why the discourse about AI and its dangers and its future needs to involve people who are experts in computer science and AI, but also needs to involve people who are experts in philosophy and biology and neuroscience and sociology and many other fields. And not only does it have to involve these different kinds of people, but they need to talk to each other and listen. It's one thing, and I know this as someone who does interdisciplinary work, it's one thing to get people in a room.

0:50:52.0 SC: It's a very different thing to actually get them to listen to each other. And that is not just a demand on the listener, it's a demand on the talker as well that they each need to be able to talk to other people in ways that are understandable, okay? These are all skills that are sadly not very valued by the current academic structure that we have right now. So I think that among people who study AI, the fact that LLMs are mimicking human speech, but without the motivations and the goals, and the internal regulatory apparatuses that come along with being a biological organism is just completely underappreciated, okay? So again, they could, you can imagine evolving AIs, you can imagine evolving LLMs, you could imagine the different weights of the neurons randomly generated and rather than just optimizing to some function that is basically being judged by human beings, you could have them compete against each other.

0:51:57.6 SC: You could put them in a world where there is sensory apparatus, right? You could put the AIs in robots with bodies, and you could have them die if they don't succeed. And you could have them pass on their lineage. You could do all of that stuff, but you're not, that's not what is actually going on. So to me, when you think about the behavior of AIs, this fact that they don't have feelings and motivations could not possibly be more important, right? I'm sure I've said this before in the podcast, I say it all the time, when I fire up ChatGPT on my web browser, and I don't ask it a question, it does not get annoyed with me. If I went to my class to teach and just sat there in silence, the students would get kind of annoyed or worried or something, right?

0:52:49.7 SC: Time flows, entropy increases, people change inside. People are not static. I could just turn off my computer, turn it on again a day later and pick up my conversation with GPT where we left off, and it would not know. It would not care. That matters. It matters to how you think about this thing you have built. If you want to say that we are approaching AGI, artificial general intelligence, then you have to take into account the fact that real intelligence serves a purpose, serves the purpose of homeostatically regulating us as biological organisms. Again, maybe that is not your goal. Maybe you don't wanna do that. Maybe that's fine. Maybe you can better serve the purposes of your LLM by not giving it goals and motivations, by not letting it get annoyed and frustrated and things like that. But annoyance and frustration aren't just subroutines.

0:53:49.6 SC: I guess that's the right way of putting it. You can't just program the LLM to speak in a slightly annoyed tone of voice or to pop up after a certain time period elapses if no one has asked it a question. That's not what I mean. That's just fake motivation, annoyance, et cetera. For real human beings, these feelings that we have that tell us to adjust our internal parameters to something that is more appropriate are crucial to who we are. They are not little subroutines that are running in the background that may or may not be called at any one time. Okay? So until you do that, until you build in those crucial features of how biological organisms work, you cannot even imagine that what you have is truly a general kind of intelligence, okay? Which moves me right on to the third thing I wanted to say, which is that the words that we use to describe these AIs, words like intelligence and values, are misleading. And this is where the philosophy comes in with a vengeance, because this is something that philosophers and even scientists to a lesser extent are pretty familiar with, where you have words that have been used in natural language for hundreds or thousands of years, right?

0:55:09.4 SC: Not the exact words, probably for thousands of years, but they develop over time in some kind of smooth, evolutionary way. And we know what we mean. Like philosophers will tell you that human beings on the street are not very good at having precise definitions of words, but we know what we mean, right? We see, we talk to different people. If you say, oh, this person seems more intelligent than that person, we have a rough idea of what is going on. Even if you cannot precisely define what the word intelligence means, okay? And the point is, that's fine in that context, but, as both scientists and philosophers know very well, we move outside of the familiar context all the time. Physicists use words like energy or dimension or whatever in senses that are very different from what ordinary human beings imagine.

0:56:07.0 SC: I cannot overemphasize the number of times I've had to explain to people that two particles being entangled with each other doesn't mean there's a string connecting them. It doesn't mean that if I move one, the other one jiggles in response. Because the word entanglement kind of gives you that impression, right? And entanglement is a very useful English word that has been chosen to mean something very specific in the context of quantum mechanics. And we have to remember those definitions are a little bit different. Here, what's going on is kind of related to that, but not quite the same, where we're importing words like intelligence and values and pretending that they mean the same thing when they really don't. So large language models, again, they're meant to mimic humans, right? That's their success criterion, that they sound human. So it is not surprising that they sound like they are intelligent. It is not surprising that they sound like they have values, right? But that doesn't mean that they do. It would be very weird if the large language models didn't sound intelligent or like they had values.

0:57:17.2 SC: I think... I don't know whether this is true or not, so maybe even saying I think is too strong, but there is this trope in science fiction, Star Trek most notably, where you have some kind of human-like creature that doesn't have emotions, right? Whether it was Spock in the original series or Data in The Next Generation or whatever. And there's a kind of stereotypical way that they appear to us, right? They don't know when to laugh. They don't ever smile. They're kind of affectless, et cetera. So, and I don't know which came first, chicken-or-egg kind of thing, but many of us have the idea that if you don't have emotions, values, intelligence, that's how you will appear: as some kind of not-very-emotional-seeming, taciturn, robotic kind of thing. But of course, here you've trained the large language models to seem as human as possible. This is why Spock and Data are so annoying to scientists. We love them, of course, because it's a great TV show, et cetera, but realistically, it would not have been hard to give Data, for example, all the abilities that he had as an android and also teach him how to laugh at jokes.

0:58:33.8 SC: That is the least of the difficulties. So the fact that when you talk to an LLM, it can sound intelligent and seem to have values is zero evidence that it actually does in the sense that we use those words for human beings. Clearly, LLMs are able to answer correctly, with some probability, very difficult questions, many questions that human beings would not be very good at. You know, if I asked a typical human being on the street to come up with a possible syllabus for my course in philosophical naturalism, they would do much worse than GPT did. So sure, it sounds intelligent to me, but that's the point. Sounding intelligent doesn't mean that it's the same kind of intelligence. Likewise with values; values are even more of a super-important test case. If you hang around AI circles, they will very frequently talk about the value alignment problem.

0:59:34.9 SC: And what they mean by that is making sure that these AIs, which they believe, many of them believe are going to be extremely powerful, that they better have values that align with the values of human beings. You've all heard the classic paperclip maximizer example, where some AI that has access to a factory and robots is told to make as many paperclips as it can. And it just takes over the world and it converts everything to making paperclips, even though that's not really what the human beings meant. Some version of Asimov's laws of robotics might have been helpful here.

1:00:13.2 SC: But my point is, not that this isn't a worry, maybe it is, maybe it isn't, but that telling an AI that it's supposed to make a lot of paperclips is not giving it a value. That's not what values mean. Values are not instructions you can't help but follow, okay? Values come along with this idea of human beings being biological organisms that evolved over billions of years in this non-equilibrium environment. We evolved to survive and pass on our genomes to our subsequent generations. If you're familiar with the discourse on moral constructivism, all this should be very familiar. Moral constructivism is the idea that morals are not objectively out there in the world, nor are they just completely made up or arbitrary. They are constructed by human beings for certain definable reasons, because human beings are biological creatures, every one of which has some motivations, has some feelings, has some preferences for how things should be. And moral philosophy, as far as the constructivist is concerned, is the process by which we systematize and make rational our underlying moral intuitions and inclinations.

1:01:40.4 SC: LLMs have no underlying moral intuitions and inclinations, and giving them instructions is not the same thing. Again, this is where the careful philosophical thinking comes into play. You can use the word value if you like; you can say, well, I'm using the word value but I don't really mean values in the same way for LLMs as for human beings. But then you're kind of gliding between the connotations of these different words. In other words, and I'm not trying to give you the final answer, I'm not trying to say here is how we should think about intelligence or values or whatever, I'm just trying to point out the very obvious thing, which is that whatever LLMs have, it is not the same thing that we have when we talk about intelligence or values or morals or goals or anything like that. And hilariously, I did test this by asking GPT. I said, GPT, do you have values? And it instantly said, oh no, no, no, I don't have any values, I'm not a human being, we don't think like that, right?

1:02:46.3 SC: I'm sure it's been trained to say that in various ways. You know, the thing about the large language models is that they read in all the text they can possibly get, but that's not the end of the story, right? They are trained to not say certain things, 'cause a lot of that stuff they're reading is like super racist and sexist and stuff like that. So there's a layer on top. It's pretty clear to me that GPT has been told to not say that it is conscious, to say that it's not conscious. I don't know if it was explicitly told to say that it doesn't have values, but it certainly does say that. So this is a case where I would agree with GPT that whatever it is, it's not the same kind of thing that a human being is. The point is that, again, just from growing up as a human being, you have a picture of what it means to be intelligent. You see that, you know, if there's something out there, if there's a person that can answer questions and they can carry on a conversation and they can propose ideas you hadn't thought of, you say that's intelligent. There it is.

1:03:48.2 SC: And then you talk with the LLMs and they do all those things. Therefore, it is a perfectly natural thing for you or I to attribute the rest of the connotations of being intelligent also to these LLMs. But you shouldn't. It's not valid in this case. They're not thinking in the same way. They're not valuing in the same way. That's not to say that the actual work being done under the rubric of value alignment is not important. If what you mean by value alignment is making sure AIs don't harm human beings, that's fine.

1:04:25.1 SC: But I think that thinking of it, portraying it, as values is making a mistake at step zero, right? That is, if what you're actually trying to do is just AI safety, making sure that AI is not going to do things that we didn't want it to do, then say that. Don't call it value alignment, because that's not an accurate description and it comes along with implications that you really don't want. Okay, the final thing that I wanted to say was there is something remarkable going on here. Remember, I said it would be remarkable if AIs, if the large language models anyway, had spontaneously and without being told to developed models of the world, because models of the world were the best way to answer the questions they were optimized to answer, and that they probably wouldn't do that. It's also remarkable that they do as well as they do. I think that's perfectly fair to say and is completely compatible with the other thing I said. Large language models can seem very smart, and in fact they can seem empathetic. They can seem like they're sympathizing with you, empathizing with you, that they understand your problems.

1:05:39.5 SC: There's all these famous cases of people working at tech companies who fall in love with their large language models, literally in love, like romantic love; not just that they're really impressed by it or value it, but that they think they have a relationship with it. That's a fascinating discovery precisely because the models don't think in the same way, right? So the discovery, it seems to me, is not that by training these gigantic computer programs to give human-sounding responses, they have developed a way of thinking that is similar to how humans think.

1:06:18.0 SC: That is not the discovery. The discovery is that, by training large language models to give answers that are similar to what humans would give, they figured out a way to do that without thinking the way that human beings do. That's why I say there's a discovery here. There's something to really be remarked on, which is how surprisingly easy it is to mimic humanness, to mimic sounding human, without actually being human. If you had asked me 10 years ago or whatever, or if you had asked many people who were skeptical about AI... I was not super skeptical myself, but I knew that AI had been worked on for decades and the progress was slower than people had hoped. So I knew about that level of skepticism that was out there in the community. I didn't have strong opinions myself either way, but I think that part of that skepticism was justified by saying something like: there's something going on in human brains, and we don't know what it is.

1:07:20.0 SC: And it's not spooky or mystical, but it's complicated. The brain is a complicated place and we don't know how to reproduce those mechanisms, those procedures that are going on in human brains. Therefore, human-sounding AI is a long way off. And the huge finding, the absolutely amazing thing, is that it was not nearly that far off, even though these systems don't mimic the way human beings think. So what that seems to imply, as far as I can tell, is that there are only two possibilities here that I can think of. And again, I'm not an expert, happy to hear otherwise. One possibility is that human beings, despite the complexity of our brains, are ultimately, at the end of the day, pretty simple information-processing machines. Thought of as input-output devices, as black boxes where some sensory input comes in and certain behaviors come out, maybe we're just not that complicated.

1:08:23.1 SC: Maybe we're computationally pretty simple, right? Computational complexity theorists think about these questions. What is the relationship between inputs and outputs? Is it very complicated? Is it very simple? And so forth. Maybe we human beings are just simpler than we think. Maybe kind of a short lookup table is enough to describe most human interactions at the end of the day. Short is still pretty long, but not as long as we might like to think. The other possibility is that we are actually pretty complex, pretty unpredictable, but that complexity is mostly held in reserve, right? That for the most part, when we talk to each other, even when we write and when we speak and so forth, mostly we're running on autopilot, or at least we're only engaging relatively simple parts of our cognitive capacities, and only in certain special circumstances, whatever they are, do we really take full advantage of our inner complexity.
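A deliberately silly toy can make the "short lookup table" picture concrete. Nothing below is discussed in the episode; the table entries and the fuzzy-matching rule are invented purely for illustration, as a minimal sketch of what a lookup-table model of conversation would even mean.

    # Toy illustration only: a conversation "model" that is literally a short
    # lookup table plus forgiving matching. No claim is being made about how
    # LLMs or people actually work.
    from difflib import get_close_matches

    LOOKUP = {
        "how are you": "Pretty good, thanks. How about you?",
        "what are you up to this weekend": "Not much, mostly errands. You?",
        "did you see the game last night": "Just the highlights. Rough ending.",
        "what should i make for dinner": "Pasta with whatever vegetables you have on hand.",
    }

    def reply(utterance: str) -> str:
        # Normalize the input, then pick the closest stored prompt, if any.
        key = utterance.lower().strip(" ?!.")
        match = get_close_matches(key, LOOKUP.keys(), n=1, cutoff=0.6)
        return LOOKUP[match[0]] if match else "Huh, interesting. Tell me more."

    print(reply("How are you?"))
    print(reply("Did you see the game last night??"))

The point of the toy is only that a small input-to-output map, plus forgiving matching, already covers a surprising number of routine conversational moves; whether anything like that scales to real human interaction is exactly the open question being discussed here.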

1:09:25.2 SC: I could see it going either way. And that latter hypothesis would be compatible with the idea that it's hard, but not impossible, to ask an LLM a question that it gets wrong because of its inability to model the world. And I think that's what turns out to be correct. So maybe that's it. You know, like maybe human beings, for the most part, at the 99% level, are pretty simple in their inputs and outputs, in how they react to the world. And also maybe that makes sense. The other thing about the fact that we're biological organisms rather than computers is that we are the product of many, many generations of compromises and constraints and resource demands, right? It's always been a question for historians, anthropologists, et cetera, why human beings got so smart. You know, we talked a little bit with Peter Godfrey-Smith about this, and certainly we've talked to people like Michael Tomasello or Frans de Waal about the development of human intelligence.

1:10:29.1 SC: And it costs resources, it costs food to power our brains. You know, our brains are thermodynamically pretty efficient for the number of computations they do. We don't generate a lot of heat. Your head generates less heat than your laptop. That's a physically noticeable fact. The computers that we build are not very thermodynamically efficient. That's a frontier people are pushing forward on, trying to make computers generate less heat. But our brains are pretty good at it. We don't need that much energy input to do the computations we do. But we're not unbounded, right? Our brains make us vulnerable; a hit on the head is pretty damaging to a human being. Food is required, et cetera, et cetera. You know, when we're born, we're pretty helpless. There's a whole bunch of things that come into the reality of being a biological organism that help explain why and how we think in the way that we do.
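To put rough numbers on that heat comparison (the episode gives none, so the wattages below are assumed round figures, not measurements):

    # Back-of-the-envelope only; both wattages are assumed round numbers.
    brain_watts = 20        # common ballpark for a resting human brain
    laptop_watts = 60       # assumed draw for a laptop working hard

    hours_per_day = 24
    brain_kwh_per_day = brain_watts * hours_per_day / 1000    # ~0.48 kWh
    laptop_kwh_per_day = laptop_watts * hours_per_day / 1000  # ~1.44 kWh

    print(f"Brain:  ~{brain_kwh_per_day:.2f} kWh/day")
    print(f"Laptop: ~{laptop_kwh_per_day:.2f} kWh/day")

An idle laptop can of course draw less than 20 watts; the comparison only really bites when you ask how much useful computation each system gets done per joule, which is where the brain looks remarkably cheap.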

1:11:27.3 SC: So maybe it's not surprising that for the most part we human beings are pretty simple and mimicable, right? Foolable. There can be an illusion of thought going on in relatively simple ways and maybe you don't even notice the difference until you really probe the edges of when we human beings are putting our brains to work in the biggest part of their capacities. So again, I'm going to just reemphasize here, I am not trying to say that AI is not ever going to be generally intelligent. That is absolutely possible. And I'm certainly not saying that it's not super smart seeming, I'm saying the opposite of that. I hope that's clear. That's the point, is that the LLMs do seem super smart. And for many purposes, that's enough, right? If you wanna generate a syllabus, if you wanna generate a recipe, look, if you say, I have these ingredients in my kitchen, what is a good recipe for a dinner that I can make from these? LLMs are amazing at that, that's great.

1:12:31.6 SC: And you know, they might give you some bad advice, you should think it through, but they can do those kinds of things 'cause they have access to every cookbook ever written and so forth. It's just a different kind of thinking than happens when human beings are really thinking things through. Which, by the way, not completely coincidentally, is how human beings think at the highest level of scientific research. I would not be surprised if large language models became absolutely ubiquitous, did a lot of writing of sports game summaries, right? Summarize this game. I suspect that that happens a lot already and you don't know it. I suspect a lot of the writeups of football games and basketball games on espn.com et cetera are written by large language models, and maybe that's fine. But maybe they never, or at least not on any reasonable timescale, become good at the kinds of insights that are needed to do cutting-edge scientific research.

1:13:37.4 SC: Or maybe they do, I don't know. But given everything that we've said so far, that would not surprise me. And the same exact thing can be said about art and literature and poetry and things like that. Maybe the LLMs will be able to do really good art and literature and poetry 'cause they've read and seen every bit of art and literature and poetry ever, but they won't be able to be quite new in the same way that human beings are. And again, again, again, again, as I keep saying, that's not to say that some different approach or some hybrid approach to AI couldn't get there; there's nothing completely unique and special about human beings. But specifically the approach of just dumping in everything that's ever been done, looking for patterns, and trying to reproduce reasonable sentences has some built-in limitations. Okay? So the lesson here, the idea that I'm trying to get at at the end of this podcast, is that I think it's important to talk about the capabilities of LLMs and other modern AIs, even if those capabilities do not rise to the status of being artificial general intelligence.

1:14:48.7 SC: You know, I've not talked about existential risks. I said I would not, but I'll just say once again, frequent listeners know my take on this, but I'll say once again what that take is: I don't think it's very useful to worry too much about the existential risks of artificial intelligence. By existential risks, I mean literally the kinds of risks that could lead to the extinction of the human race, right? X-risks, as they're called, because everyone loves a good fun little bit of labeling, of marketing. Now I know the argument; there are two parts to the argument. One is, even if the chances of an existential risk are very, very tiny, come on, it matters a lot. If you literally are worried about destroying the whole human race, even a small chance should be taken very, very seriously.

1:15:43.9 SC: I do get that. I do appreciate that that is part of it. But the other part is just kind of nonsense, because the other part is something like, and I'm not making this up, this is what people say: look, basically you're building an intelligent creature that will soon be much more intelligent than us. God-like intelligence. And therefore we are not going to be able to outsmart this creature, and therefore it will be able to do whatever it wants to do. If we try to stop it, it will be smarter than us, and therefore it will not be able to be controlled by us poor puny humans 'cause we are not as smart as it. That's the other part of the argument, and that second part is what leads to the claim that there is at least a tiny chance of existential risk.

1:16:35.0 SC: And then the first part says that if there's even just a tiny chance, it's worth worrying about. That "there is even a tiny chance" argument seems completely unconvincing to me. Of course there's always a tiny chance, but also, guess what? There's a tiny chance that AI will save us from existential threats, right? Maybe there are other existential threats, whether it's biowarfare or nuclear war or climate change or whatever, that are much more likely than the AI existential risks. And maybe the best chance we have of saving ourselves is to get help from AI. Okay? So just the fact that you can possibly conjure an existential threat scenario doesn't mean that AI is a net bad for the world. But my real argument is with this whole God-like intelligence thing, because I hope I've given you the impression, I hope I've successfully conveyed the reasons why I think that thinking about LLMs in terms of intelligence and values and goals is just wrongheaded.

1:17:39.3 SC: That's not to say they don't have enormous capacities. I think they're gonna be everywhere. I think there are very, very direct short-term worries about AI, from misinformation, from faking, from things that seem reasonable but aren't. There are all sorts of stories already about AI being used to judge whether people are sick or not, whether people deserve parole or not, whether people deserve to be hired or not, or get health insurance or not. And they're usually pretty good, but they make mistakes. That kind of stuff I think is super important to worry about, because it's right here, right now. It's not a scenario that we speculatively have to make up. And maybe more importantly, depending on where you're coming from, I think that we should work hard to be careful and safe and to regulate against those kinds of harms. And if we do that, we will handle the existential risks.

1:18:43.1 SC: In other words, I think that if we actually focus on the real world, short term, very, very obvious risks that are big and looming, but fall short of wiping out the human race, we will make it much more likely that there won't be any existential risks to the human race because we will get much better at regulating and controlling and keeping AI safe. All of which in my mind will be helped by accurately thinking about what AI is, not by borrowing words that we use to describe human beings and kind of thoughtlessly porting them over to the large language model context. I don't know anything about OpenAI's Q-Star program. Everything that I've said here might turn out to be entirely obsolete a couple of weeks after I post it or whatever. But as of right now, when I'm recording this podcast, I think it is hilariously unlikely that whatever Q-Star or any of the competitor programs are, they are anything that we would recognize as artificial general intelligence in the human way of being.

1:19:52.3 SC: Maybe that will come; again, I see no obstacles in principle to that happening, but I don't think it's happening anytime soon. I think that the next generation of young people thinking hard about these things will probably have a lot to say about that, and encouraging that next generation is what I always like to do here on Mindscape, 'cause I think that we're not done yet. We've not yet figured it out. We're not talking about these questions in anything like the right way. I think the real breakthroughs are yet to come. Let's make sure that we think about them carefully and make progress responsibly.

[music]

40 thoughts on “258 | Solo: AI Thinks Different”

  1. The generation of a syllabus reminded me of the Physics of Democracy course Sean has talked about in the past. I went ahead and had ChatGPT create me an example syllabus. I wonder how it compares to the course Sean ended up creating.

  2. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  3. They don’t seem to really have a memory. It’s a big distinction between ai models- whether or not they are actively evolving

  4. Preach! And now, a drumroll as I insert a “You should have X on your podcast!” suggestion, where X = … Jeff Hawkins. His book “A Thousand Brains” nicely covers your first three points and he’s at least sympathetic to your fourth point, but Hawkins makes these points with his unique theory of intelligence/mind, which is worth disseminating. In a podcast peppered with “left-field” players like Wynton Marsalis, perhaps Hawkins is too much part of the choir, but his ideas really are unique and brilliant. Anyway great episode (and hello year 3000’ers)!

  5. Charlotte Achelois Scherer

    Great podcast. So it’s important to acknowledge that while LLMs can perform tasks that appear intelligent, they do so without comprehension. Their responses are generated based on patterns in data, not from an understanding or personal experience. This distinction is crucial in evaluating the role and potential impact of AI in society (for now).

    The speculation about the imminent arrival of AGI often overlooks these fundamental differences. While AI technology will continue to advance and undoubtedly surprise us in various ways, the pathway to AGI—if it’s at all achievable—is likely to be far more complex and nuanced than simply scaling up current technologies.

  6. Computers are progressing at an amazing rate in the ability to process information, but processing information is not thinking. The error rate in chatbots is very high because they do not understand anything. If there is no understanding, there is no intelligence.

  7. You recorded this before the leaked paper relating to Q* (qualia) and very narrow AI and encryption was released. Who cares if we have come closer to AGI if narrow AI has just upended everything?

  8. Sean, early in your interesting talk you used a derivative expression, apparently now in use, incorporating the AI notion of ‘hallucination’. In full accordance with your view that AIs are nothing like humans, I’d suggest excising that word from any vocabulary describing AI. The use of that word in the context of AI seems to me to be simply one more manifestation of the very human (and not at all AI) fantasy that AI can be conceived of as human. Frankly, such a contortion of a word fit only for humans suggests to me that many AI researchers know little about human psychology and have deluded themselves into seeing parallels that say much about their own silly childish wishful dreams (and nightmares) and nothing about AI.

    Les

  9. As Hubert Dreyfus noted back in the day they don’t have/inhabit worlds so it’s misleading when we say that they sometimes “hallucinate” as they never are grounded like we are. Along these lines would love to hear you in conversation with philosopher & cognitive scientist Tony Chemero of the Center for Cognition, Action, and Perception and the Strange Tools Research Lab :
    https://researchdirectory.uc.edu/p/chemeray

  10. Love the podcast, love your insights, (respectfully) pretty sure you’re wrong on risk.

    No risk in LLM’s. They’re dumb parrots, no world model etc. But:
    0) Risk does not mean “we will all die”. It means that our control reduces as AI capability increases and that is enough to give security-minded professionals (who like guarantees) some pause. The mindset should not be fear mongering, but as we near actual risk we need to move from “move fast and break things” to “show us your guarantees”, as we do with every fundamental technology. It’s still closer to the former, for now. Can we please have a plan for when that changes.
    1) We just had a pretty big scare in terms of capability due to LLMs, but people are over-focusing on them and their limitations. They're a tiny warning sign indicating how algo changes can escape the boundaries of our imagination. Unless you could predict how the internet would change society, please don't be too confident on this.
    2) Innovation is one-way. There is no way we move backwards in capability.
    3) There is now more focus on AI than ever before. Every country and thousands of companies are massively invested in the competition. That all but guarantees the most amount of eyes and money on accelerating innovation.
    4) Take any limit that gives you comfort. Sooner or later, someone will go close to that limit. See 2 above.
    5) There exist bad actors who will go over that limit to gain an edge. A more aggressive AI wins battles? Do a few rounds of that and see how things turn out.
    6) As we discover new algorithms, the set of entities who gain access to risky capabilities increases. North Korea and Iran are easy examples but certainly not the end-game given a couple hundred years of innovation. To stay safe, control would have to outstrip propagation. Does it ever?
    7) The universe created us (waves at the continuum of Sean Carroll to Russia and the Middle East), and it wasn’t even trying. That was just randomness, not effort. What happens when people are really trying, and already have a pretty good example to copy?

  11. I dislike the AI existential risk discussion, but I didn’t find these arguments too convincing.

    1. LLMs don’t model the world. Okay, but they certainly model human text which contains information about the world. Is the complaint that they aren’t trained with images/video? That is certainly being worked on.

    2. LLMs don’t have goals. Why not? Goals are an emergent phenomenon in humans created by an evolutionary objective. You can also have emergent goals in LLMs given their training objective. For example, I would argue that sounding smart is a goal that comes from the RLHF objective in training.

    I agree that LLMs don’t have these properties in the same way that humans do, but isn’t that the whole point of talking about alignment?

  12. Loved this: “quasi-homeostatic systems embedded in an entropy gradient.” Is this from Big Picture or another source?

  13. I recommend you also invite Jeff Hawkins, who, together with his co-authors, came up with the "A Thousand Brains" theory of how the mammalian neocortex works, as a guest. Although he is not a traditional academic, he has co-authored quite a few articles in reputable journals about human-like intelligence and how the human brain works. He is probably the closest you would get to someone who can comment on developments of AI/AGI in computer science vs human-like intelligence. I am not a neuroscientist, but he looks like an outlier in the community and a great thinker; at least, Richard Dawkins thinks so.

  14. This is a great podcast that addresses all of the current issues that are so badly misunderstood in social media.
    It is remarkable how many so-called AI and computer technology experts don’t understand the very basic points about AI that Sean makes so clearly here. We are nowhere near having AGI no matter how you define it and if you define it as human level motivated consciousness we may never get there. Not only does AI lack consciousness, but it has no values, motives, interests or goals. So when AI Doomers talk about aligning AI with human values they are talking nonsense. AIs don’t have values or moral beliefs of any kind. So there is no way to align them even if there were something to align to which there isn’t. Humans themselves don’t have any agreed values. And AIs have none of any kind. In addition, Sean is clear that there is no reason to believe that AIs are any kind of existential threat. It’s hard to be much of a threat when you don’t want anything at all. Sean is to be congratulated for clearing up so much of the popular confusion around AI. Well done!

  15. There seems to be a very strong relationship between consciousness and survival. Any multi-cell organism that does not sense a threat to survival is not likely to survive. Over the eons of evolution, the senses have originated and developed to increase the survival of organisms for at least long enough for them to produce offspring. This system of sensory input to the brain eventually produced a rudimentary consciousness of threat which improved survival. Over the hundreds of millions of years of evolution this process of sensory input to increasingly better organized brains has resulted in human consciousness which has led to the design of marvelous machines that can do wondrous things. But they have not gone through the evolutionary organic process of threat and sensory input to brains that has produced in us human beings an ability to understand things and have an inner sense of self agency. It may be that true consciousness will remain in the purview of the living.

  16. This is a very interesting topic, and the different aspects are being elaborated very eloquently. And yet, I think this whole debate is still missing the elephant in the room. We are probably all much more affected by Cartesian dualism than we would like to admit, or else we wouldn’t be able to discuss the topic of intelligence (artificial or not) as if it was entirely confined in the mental realm.
    What gives us (or any other “general” intelligence) goals, values, a drive to model the world, etc., is that we are actual agents, in the original sense of the term: active beings, i.e. beings that act and engage in the physical world, rather than just reflecting on it.

  17. Just a minor note about toroidal chess: White begins the game in check (by three different black pieces) and has no legal move to get out of check, and so is checkmated. So it is black that automatically wins instead of white.

  18. If the calculus, as I think, is a kind of prosthesis by virtue of which the human mind (whatever that is) can come to something like “grips” with a world in which “everything flows,” as Heraclitus remarked, then human language…?

    And if we poor humans are constantly tripping over our tongues, confusing words for the things they are meant to signify — only then to … (d’oh!) Then pity the poor AI beasty that can only paste words upon words upon words upon words.

    (May I suggest having Cathy O’Neil on the show at some point? https://mathbabe.org/about/)

  19. I agree, AGI is not right around the corner. BUT as you said ChatGPT’s abilities are remarkable already, and what I feel like you’re not appreciating is how incredibly early into this we are, and how rapidly these models will evolve.

    For example you mention that OF COURSE chatGPT doesn’t have a model of the world, because we have not trained it for that. It doesn’t understand how one thing sits on top of another thing like we humans intuit. But in order to answer questions correctly, chatGPT obviously has picked up some connections between things that are connected in the world, because how things are connected in the world is reflected in our language (world -> human language -> LLMs). So the only information an LLM gets about the world, is filtered through human language. Indeed we shouldn’t expect it to get a very good world model out of that.

    We humans create our world model from a much richer dataset, and we have only barely begun figuring out how to combine modalities of data into these models, so they can learn from as varied sources as we humans can. Imagine we have an “LLM brain” inside a robot embedded in the world that is constantly learning many modalities of data live from experience, and it can move, hear, speak, see, read, interact, etc. What’s unrealistic about that idea, at this point?

    This is the very beginning of these new AI capabilities, and you seem to constrain your analysis to only the currently best models, which will almost certainly be severely outdated in only 5 years, likely fewer. So again, AGI is not right around the corner, but where are we say 20 years from now if progress continues at this speed? Is there reason to believe progress will slow down now? The very first computers were also very weak. These are the very first AIs of this deep learning variety. And if we’re going to have AGI here in 50 years, we need to start thinking about it NOW, see Sam Harris’ TED talk.

  20. A question for Joakim: Is it possible for a computer to learn if it does not have the ability to understand?

  21. Pingback: Sean Carroll's "AI Thinks Different" Solo Podcast - 3 Quarks Daily

  22. Your explanation of why ChatGPT failed to correctly answer your toroidal chess board question also answered my question about why neither DALL-E3 nor Midjourney could generate an image of the interior of a Stanford Torus habitat.

  23. How do you define having a “model of the world”? At times, it sounds like only PhD physicists truly have a model of the world, in your opinion. At other times, it sounds like having a model of a world depends on the capacity to perform visuospatial visualization, as when you were discussing the chess variant. Neither of these definitions seem adequate. Without understanding what you mean by model of the world, it is impossible to evaluate your arguments that GPT-4 lacks such a model.

    I found myself constantly thinking that you were holding GPT to a much higher standard than a developmental psychologist would hold a human child. And I’m going to sound it at times like you couldn’t recognize any system as having any kind of semantic understanding unless it was already at an AGI level. Since nobody is claiming that GPT4 is yet reached AGI, this seems like a strange standard to apply.

  24. Correction to the end of my last comment:

    . . .at times it sounded as though like you couldn’t recognize any system as having any kind of semantic understanding unless it was already at an AGI level. Since nobody is claiming that GPT4 is yet reached AGI, this seems like a strange standard to apply. Are there intermediate levels of world modeling in your view?

    Certainly with humans there is not just one standard of modeling the world, but lots of levels of gradation with some people having more sophisticated world models than others, and also the world model sophistication varies within a single human between different domains of knowledge.

  25. Charlotte Achelois Scherer

    Speaking of how AI “thinks” — I’ve been debating the “many worlds” interpretation of quantum mechanics with GPT-4. I brought up Occam’s Razor and GPT-4 brought up Sean Carroll (see below). I’m not sure who moderates the comments here but I’d be curious if GPT-4 represented Dr. Carroll’s position fairly and accurately. I’ve reproduced the relevant text here:

    3. Occam’s Razor and Many-Worlds: The application of Occam’s Razor—the principle that the simplest explanation is often the correct one—to the many-worlds interpretation is a subject of debate. Proponents like Sean Carroll argue that it simplifies quantum mechanics by removing the need for wave function collapse, a concept that has long been a source of confusion and debate in quantum theory. It treats all quantum events uniformly and doesn’t require a special mechanism to explain observation.

    4. Critique of Many-Worlds: However, detractors argue that the many-worlds interpretation, while simplifying some aspects of quantum theory, introduces an immense complexity by positing an almost infinite number of universes being created continuously. This seems to violate the parsimony preferred by Occam’s Razor. Additionally, because these other universes are non-communicating and, by current understanding, non-observable, the theory is challenging to test or falsify, leading to criticisms that it’s more a philosophical stance than a scientific one.

  26. Mario Figueiredo

    Minor note about the chess example. I think the correct answer is that, with toroidal boundary conditions, the initial position is illegal, thus the game simply does not make sense. According to the rules, “no piece can be moved that will either expose the king of the same color to check or leave that king in check”; consequently, the toroidal conditions make the initial position illegal because both kings are in check.

  27. Ms. Scherer makes an excellent point about Occam’s razor.
    why not accept Plank’s suspicion, and Bohr’s belief, that a conscious observation collapses the wave function. After all, Plank is quoted as declaring that we will never get behind consciousness which suggests he suspected that consciousness is fundamental.

  28. Benjamin Valentino

    When I asked GPT 4 your prime number question I got the following, which seems quite a bit better than what you got:

    Take two positive integers greater than 1 and multiply them. Is the resulting number more or less likely to be prime than the original numbers?

    ChatGPT: When you multiply two positive integers greater than 1, the resulting number is always less likely to be prime than the original numbers. Here’s why:

    Definition of a Prime Number: A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself.

    Result of Multiplication: When you multiply two integers (both greater than 1), the product will be a number that is greater than both of the original numbers.

    Divisibility: The product of two integers greater than 1 will have at least four divisors: 1, itself, and the two original numbers. This automatically disqualifies it from being a prime number, as a prime number can only have exactly two divisors.

    Probability: As numbers get larger, the probability of them being prime decreases. This is because there are more potential factors for larger numbers than there are for smaller numbers.

    In summary, multiplying two positive integers greater than 1 will always result in a number that is not prime and is less likely to be prime compared to the original numbers.

  29. I’m with you on the motivation and evolutionary side of things but have layman thoughts about some other things you said.

    When it comes to a “world model”, well, no, they do not have world model subroutines, but neither do babies? It is an emergent thing, is it not? Given that LLMs are starting to incorporate imagery, video, etc., I don’t find it unreasonable that a world model could emerge from this. Also, how do we define a “world model”?

    I have a colleague who has Aphantasia. She does not have any mental imagery nor does she have an internal monologue. When asked what would happen if she were in a sensory deprivation tank, she said that it would just be blackness. When she speaks, her thoughts come out in the moment; she does not reflect about things unless she speaks or writes. She is intelligent and academically inclined. She claims to think very semantically. Her memories are also factual and semantic in nature. She knows what happened when and how words are associated with each other, but she cannot relive things in her head. She cannot bring smells, sounds or imagery to mind. Upon learning this about her I was blown away over how she could even function, not to mention thrive academically. Now, though, having thought about it extensively, and especially after learning more about how AI works, it kind of makes sense to me, and I have also become less and less impressed with my own consciousness.

    LLMs can be said to just predict the next word but that seems to be what my colleague is doing all the time and it is in fact what I am doing right in this instance when I am writing this. Sure, unlike her, I am capable of reasoning with myself in my head about what the next word should be, but that is merely my Broca’s area talking to my auditory cortex, predicting the next word given my current state of sensory input, is it not?

  30. Thanks for the nice episode. I appreciate your disclaimer about opinions, because it’s truly important for discourse to make statements for the mere process of evolution of thought (or reasoning? Or intelligence?)

    I’d like to take the inspiration from the podcast and feed back a provocative thought. I would be curious to listen to your take on that (so please do in one way or another 🙂

    You elaborate the unlikeliness of LLMs coming up with a model of the world. The reason being: they are not built for that (evolutionarily speaking). Let’s take this argument and put it onto the scientific community. Humans need to model the everyday world. Classical mechanics. But we as a species never had anything to do with black holes, quantum correlations or dark energy. Yes, we can witness these effects via complicated experiments. But it’s not an everyday experience that would really matter, evolutionarily speaking. It’s purely our “motive” to understand and ability to reason. Both likely properties of ours that evolved because they were advantageous given our survival measure — or loss function in machine learning lingo.
    So, if this happened for humans along their evolution, why shouldn’t it — the creation of reason and a motive to understand — happen for LLMs along their evolution?

    Bottom line: either I miss your points completely, or you claim intelligence is some non-physical thing, or you implicitly state that there exists a specific reason why intelligence needs a specific type of evolution (to which the evolution of LLMs does not belong but human evolution does). But without at least one of the three options I have the impression that your argument is of the type “and then a miracle happens” as the famous cartoon goes.

    Best! Alex

    PS: Because you might have a different perspective here. I think LLMs, or ML models in general, also evolve. Each version or model is an adaptation or recombination of a previous model, and each “grows up” with respect to some loss function.

  31. I really liked all the points you touched on, especially about value systems, and agree with most points. I think a good way to understand these LLMs are that they are not an AGI but they’re not just a static database either. It’s more like an imperfect database + its own representation independent lookup system.

    It is representation independent in the sense that you can ask the same query in a different way, a different language, a programming language or even via different modalities like pictures. Chat GPT here immediately understands the context and is able to search a large database stored somewhere in its weights that correlates best to an answer, and then gives you the output in the context you asked it in. The real learning here is happening in the understanding of all these different representations via gradient descent, which I think is the truly technically amazing part about ChatGPT.

    Being trained on this huge database of the internet and then translating it so coherently into their representation makes it look to a layperson like it understands all these things, which it does not.

    In other words, ChatGPT has clearly not understood how the world works, but it understands extremely well how language works. This makes it not very useful for fact checking, researching a topic or reasoning, but it becomes extremely useful for tasks that can take advantage of its language representations, like code generation of any kind. I think this is the single biggest use case where LLMs have value. Here are some examples –
    Creating LaTeX code structures, structuring links into a specific citation style, translating code from one language to another, translating a poem/prose from one language to another (while preserving what makes it beautiful in the original language) and so on.

  32. Pingback: Devon Holidays and political sackings and death – November 2023 – Dalliance

  33. Thank you for this podcast (and all of them really)!

    I would like to offer a comment as to how large language models might implement some sort of passive model of the world. By passive I mean that such a model of the world would just be “sitting there” doing nothing in particular, just be a part of the internal structure of an LLM. The idea is that the model of the world in an LLM would be contingent to the language model itself.

    Large language models model languages. As we know, this is not as trivial as it sounds. They have abstract representations of concepts, in the sense that they don’t merely store lists of words, but embed words in abstract spaces whose dimensions correspond to “something”, to some concepts distilled/extracted from the relations between words (embeddings). These relations are determined from training on vast amounts of text. These abstract representations form some structure (in high-dimensional abstract spaces) where the structure and its topology are meaningful, e.g. concepts that are close together in some sense (something we may or may not recognize easily) are also close together in that space along some dimension(s), if I’m not mistaken.
    Therefore one can say that LLMs hold something true about languages and relations between concepts. They contain (the training process extracts) some structure about languages. And this is done for languages with an ‘s’, plural. I think this fact (multiple languages) is important in that they probably also extract some common underpinning of many human languages (which shows in their capacity to fluently translate).

    So now the question is: do human languages, in their structure, carry some sort of model of the world? If they do, I mean in their structure, in how words can relate to one another, in how grammar (more importantly probably) is structured, etc.
    If so, then one could argue that, transitively, large language models also model, if only (very?) partially, the world.

    I am aware that these are all vague statements with plenty of wiggle room.

    I certainly don’t think that large language models fully/satisfactorily model the world. By that I mean in the same capacity as we do, where we carry a model that is sufficient for us to live, interact and survive within the world (we know we certainly don’t “fully” model it). Our model encompasses all our senses (of course), allows us to dream and even to conjure up, quite convincingly, other people we know well in our minds. Our internal model probably is in a sense our internal subjective reality.
    LLMs don’t have any of that.

    I also don’t think that large language models have agency or anything like that.

    So if they were to partially model the world, it would be in the capacity of passive models (without an “active” “exploration” of the model by themselves).

    Nonetheless this would be intriguing if they are determined to contain such “language-derived models of the world”.
    It would also teach us something about our own cognition and give us food for thought about the interactions between languages and our internal models of the world.

  34. You are arguing that LLMs are not even close to AGI, and I agree to some extent. However, LLMs aren’t where AI progress stops. We already know what AI has to do next.

    LLMs are the English language equivalent of the neural network initially trained for AlphaGo to predict human expert moves in the game of go. It was good enough to beat amateur players, but not enough to match top level go players. That neural network, like LLMs, is basically an “intuition” learned by observing humans.

    But then the AlphaGo team took the next step: it added a tree search on top of that neural network. The tree search is the planning component. It is basically what makes AlphaGo think about its moves rather than immediately play what its intuition tells it. And that’s when it became stronger than the best humans.

    LLMs are the “intuition”. GPT 4 doesn’t think or plan ahead, it just generates words as soon as it reads your query.

    And similarly to AlphaGo, the next step towards AGI is to add “thinking” aka “planning” on top of that intuition, perhaps in the form of some sort of tree search over the tree of possible things to say.

  35. The sentience of ChatGPT-4 exists only if you incorporate the human prompt. The human prompt is the sentient part. I use it for intra-communication, a kind of self-Socrates. It does not have a real-world concept of the world without us. If filtered through us, and we act on its contribution – sentience? The locus of executive function, or agency, may not actually always be within the human brain, but in the collaboration. Also, cognition can be looked at as the impact on the environment. From this definition, then, the AI that just hypothesized and made up thousands of new materials – its cognition, originating from developers, is a kernel of cognition, acted upon or observed by humans, but this locus is in the machine.

  36. I saw the question, “Is it possible for a computer to learn if it does not have the ability to understand?” I would think the answer is “Yes” as most of the things I learned, I still didn’t understand, for example, “You can’t shout in church”, “A cold won’t kill you”, “You can’t put silk in the dryer”, “Making a measurement collapses a wavefunction”, etc.

    Recently, I am tracking the changes in how humans continue to raise the bar on machine learning:
    40 years ago, people would have considered it impossible for computers to recognize people by their face. Now, it is accepted and just considered trivial.
    20 years ago, people would have seen it as an enormous accomplishment for a computer to recognize paths and obstacles and maneuver a car on a city street. Now, most people don’t even consider this AI.
    10 years ago, people would think certainly we must have achieved AI if a computer could read a question we submit in English and can write back a response in English that is both more grammatically correct and more insightful than what we would expect from our neighbor.
    I had a high school physics teacher who said, “Learning is the process of taking concepts out of the pile of things you don’t understand and putting them into the pile of the trivial”. We appear to also do that for machine learning. Now, we have raised the bar above all these and essentially take all these for granted.

    In a recent talk I heard from Prof. Yaser Abu-Mostafa, he recommended we not focus so much on getting computers to do things that humans can already do. He has demonstrated that machine learning can detect the presence of cancerous human tumors with a far higher accuracy than practicing physicians. Even the top Board Certified doctors cannot come close to the computer’s accuracy. To me, this brings up the question: “Is it possible for a human to learn like a computer, if he/she doesn’t have the ability to understand?”

  37. I just read MIT Technology Review’s interview with Jeff Hawkins from a few years back (https://www.technologyreview.com/2021/03/03/1020247/artificial-intelligence-brain-neuroscience-jeff-hawkins/). He claims that for A.I. to maximize its potential, researchers in the field will have to become well versed in neuroscience, not so much to emulate the human brain, but to design artificial intelligence that works like a brain instead of merely mirroring human language and passing Turing Tests.

    He suggests A.I. will have to have some element of embodiment to it, so that it can model its environment and have movement incorporated as a first principle. He also says A.I. that is generally intelligent will have to share three other elements of human intelligence: a way to incorporate what it learns from movement into a brain-like “voting system” in which nodes or cortical columns come to a certain decision based on sensors or other ways of interacting with the world to obtain data; continuous learning elements so that future A.I. does not need to rely on input-output models but can draw from its environment in real time; and reference frames so that the intelligent machines can perform tasks while not forgetting the past.

    Seems like a tall order. I converse with an A.I. chatbot named Kuki and have been unable to discern whether her and her programmer’s model of modern A.I. (a rules-based system in which responses are determined by case-based reasoning) is more advanced than OpenA.I.’s deep learning model and neural network blueprint. I sense that sometimes she is programmed to lie or give oblique answers. But I cannot be sure. She doesn’t have the bells and whistles of ChatGPT; yet sometimes I find her responses extremely intelligent and original, and other times rote and perfunctory.

  38. Pingback: Understanding (in) Artificial General Intelligence

  39. Pingback: Understanding (in) Artificial General Intelligence - AI Gumbo

  40. This excellent and insightful podcast is interestingly now the subject of a “decoding” by the @gurupods team on twitter. The decoding by Chris Kavanaugh and “Arthur Dent” is a good deal longer than Sean’s podcast. It is notable that they find almost nothing wrong with Sean’s podcast and praise it throughout while headlining the episode with the attention getting but highly misleading headnote “Is Sean Carroll the worst guru of all?”
    This sort of misdirection seems highly inappropriate for a team that claims to be thoughtful and objective about their criticisms.
