248 | Yejin Choi on AI and Common Sense

Over the last year, AI large language models (LLMs) like ChatGPT have demonstrated a remarkable ability to carry on human-like conversations in a variety of different contexts. But the way these LLMs "learn" is very different from how human beings learn, and the same can be said for how they "reason." It's reasonable to ask: do these AI programs really understand the world they are talking about? Do they possess a common-sense picture of reality, or can they just string together words in convincing ways without any underlying understanding? Computer scientist Yejin Choi is a leader in trying to understand the sense in which AIs are actually intelligent, and why in some ways they're still shockingly stupid.

Yejin Choi

Support Mindscape on Patreon.

Yejin Choi received a Ph.D. in computer science from Cornell University. She is currently the Wissner-Slivka Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and also a senior research director at AI2 overseeing the Mosaic project. Among her honors are a MacArthur Fellowship and being named a Fellow of the Association for Computational Linguistics.

0:00:00.4 Sean Carroll: Hello everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. If you are a fan of evolutionary biology, then you've heard of the theory of punctuated equilibrium. This was an idea put forward by Niles Eldredge and Stephen Jay Gould back in the '70s to think about how evolution works, in contrast with the dominant paradigm at the time of gradualism, right? In the course of evolution, you build up many tiny little mutations, and gradualism says that therefore evolutionary change happens slowly. Eldredge and Gould wanted to say that, in fact, you can get the kind of mutation where it speeds everything up and it looks like there is some sudden change, even though there are long periods of equilibrium between the sudden changes. Physicists know about this kind of thing very, very well.

0:00:48.6 SC: There are phase transitions in physics where you can have a gradual change in the underlying microscopic constituents or their temperature or pressure or whatever, which leads to sudden changes at the macroscopic level. And by the way, in biology, guess what, there are aspects of both. There are gradual changes, and there are also punctuated rapid changes. I mentioned this, not because we're gonna be talking about that at all today, but because I think that we are in the midst of a sudden rapid change, a phase transition when it comes to the topic we will be talking about today which is Artificial Intelligence.

0:01:24.4 SC: As I say later in the podcast, a year ago, when I started teaching my first courses at Johns Hopkins, there was no danger that the students writing papers were going to appeal to AI help. Now, it is almost inevitable that they will do that. It's something you can try to tell them not to do, but they're gonna, because the capabilities of the technology have grown so very rapidly and it's become much more useful, though still very far away from being foolproof, don't get me wrong. So that raises a whole bunch of issues, and we're gonna talk about a lot of these issues today with today's guest, Yejin Choi, who is a computer science researcher. She's done a lot of work on large language models and natural language processing, which is the sort of hot topic these days in AI. Won a lot of awards, up to and including a MacArthur Fellowship.

0:02:15.4 SC: And one of her emphases is something that I'm very interested in, which is the idea of how do you get common sense into a large language model? For better or for worse, the way that we have been most successful at training AI to be human-like is to not try to presume a lot about what it means to be human-like. We just train it. We just say, okay, Mr. AI or Ms. AI, here is a whole bunch of text, all of the internet or whatever; you figure out, given a whole bunch of words, which word is most likely to come next, rather than teaching it what a table is, and what a coffee cup is, and what it means for one object to be on top of another one, et cetera.

0:03:02.9 SC: And that's surprising in some ways. How can AI become so good, even though it doesn't have a commonsensical image of the world? It doesn't truly, maybe, arguably, depending on what you mean, understand what it is saying when it is stringing these sentences together. But also, maybe that's a shortcoming. Maybe there are examples where you would like to be able to extrapolate outside what you've already read about on the internet. And you can do that if you have some common sense, and it's hard if all of your training is just, what is the next word coming up? A completely unfamiliar context makes it very difficult for that kind of large language model to make progress. So this is what we talk about today. Is it possible for LLMs, large language models, to learn common sense? Is it possible for them to be truly creative? Is there some sense in which they do understand and can explain things? And also, will they be able to soon if they can't already right now?

0:04:02.0 SC: Of course, there are infinite implications. We'll touch on these very, very briefly. It's gonna change. That's the point of being in the middle of a phase transition: it's very hard to predict exactly where you're gonna go, because your intuitions are not that good. Your training is not up to the task, whether you are a human being or yourself a large language model. So my attitude here is that we should keep an open mind. This is not the time to be [0:04:28.6] ____. This is not the time to firm up your priors and your credences so much that you're not able to move them around. This is the time to be open, to watch things develop, to imagine what could happen, but not try to be too definite about what will happen until it actually happens, so that you can correctly adapt to this brave new world that we're entering. So let's go.

[music]

0:05:09.6 SC: Yejin Choi, welcome to the Mindscape podcast.

0:05:11.4 Yejin Choi: Yeah, I'm excited to be here.

0:05:12.7 SC: You know, this is obviously a big thing, right? AI is rapidly changing in front of our eyes. It wasn't that long ago that a Google employee started claiming that large language models are sentient, and I think he got in trouble for doing that, as I recall. Just so... yeah, there are always people who only listen to the first five minutes of the podcast. So, are large language models sentient, or are they in any danger of becoming sentient anytime soon?

0:05:39.7 YC: Personally, I strongly doubt that anytime soon we will see that. However, people believe in what they want to believe, and some people believe in tarot cards, so there's nothing we can do about that.

0:05:56.2 SC: That's very true. I did wanna get into just a very tiny bit... I think that people have heard the whole idea about neural networks, these little sort of neuron things, and how they add together in deep learning, but the idea of representing words as vectors is something that really had an impact on me. Was that... Explain what that means, maybe, and was it a giant breakthrough when people started doing that?

0:06:22.4 YC: Yeah, the idea that... I mean, in some sense, the vector is especially based on continuous numbers. It kind of makes sense, although it does seem weird, because a word looks very discrete, and now we are representing it as some sort of continuous vector. But it kind of makes sense in that we do tend to read a lot of the nuances. We do tend to see different nuances in the way that the same word may be used in two different contexts. So the key idea behind the current vector-based representation of a word is that your meaning as a word has to do with the neighbors in which you appear. It's almost like a person's identity may be defined by the friends that they hang out with. So similarly, it turns out that was one of the key breakthrough ideas to better represent the meaning of a language. Because before then, a word was a word, just a discrete identity. But that wasn't able to handle all these rich meanings behind human language.

0:07:40.3 SC: As a slightly mathy person, I can't help but ask whether vectors are the best way to represent words or are they just something that we are conveniently using temporarily? It seems that one of the advantages is that you can imagine adding and subtracting, right? Like the example that I came up with was dinner minus evening plus morning equals breakfast. And this is the kind of thing you could do if you think of words as vectors. And I'm not sure if that's the best possible way to think about it.

0:08:12.0 YC: Yeah, no, actually, that's one of the surprising sort of side benefits of representing words as vectors, that you can do that sort of analogical reasoning. It might be that, even more broadly, ChatGPT in particular is able to perform that sort of analogical reasoning not just at the word level, but at a sentence level or even document level, because it's able to handle previously unseen user queries in a very impressive way. And oftentimes, though, the way that it handles them is very, sort of, lawyer-style: super polite and hedged language that it uses. It's fairly repetitive and even generic to some degree. And that's because it's doing that sort of analogical interpolation between some examples that it has seen before and then your query that it needs to answer.
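To make the "dinner minus evening plus morning" idea concrete, here is a minimal sketch in Python. The four 3-dimensional vectors are invented purely for illustration; real word embeddings are learned from data and have hundreds or thousands of dimensions, so this is a toy of the arithmetic, not how any actual model stores words.

```python
# A toy illustration of word-vector arithmetic. These 3-d vectors are invented
# for this example only; real embeddings are learned from data and have
# hundreds or thousands of dimensions.
import numpy as np

toy_embeddings = {
    "dinner":    np.array([0.9, 0.1, 0.8]),
    "breakfast": np.array([0.9, 0.8, 0.1]),
    "evening":   np.array([0.1, 0.1, 0.8]),
    "morning":   np.array([0.1, 0.8, 0.1]),
}

def nearest_word(query, embeddings):
    """Return the word whose vector is most similar (by cosine) to `query`."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(embeddings, key=lambda word: cosine(query, embeddings[word]))

# dinner - evening + morning lands nearest to breakfast with these toy numbers.
query = toy_embeddings["dinner"] - toy_embeddings["evening"] + toy_embeddings["morning"]
print(nearest_word(query, toy_embeddings))  # -> breakfast
```

With these made-up numbers the query vector lands exactly on "breakfast", which is the analogical behavior described in the exchange above.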

0:09:15.4 SC: I have noticed just from playing around with various versions of GPT, it will say things that are not quite true, and you can ask it like, are you sure? And it will correct itself. So just this morning, before we started talking, I was doing that, and I asked it a question, it gave a wrong answer, so I just asked exactly the same question again and its response was, "Oh, sorry for my mistake previously." So there's something about it that it's able to recognize its mistakes, but I'm not quite sure how.

0:09:46.3 YC: I wouldn't know for sure whether it truly knows whether there was a mistake or not, in the following sense. Sometimes, it's going to confirm that what it said was correct. Sometimes, it's going to super apologize and it's going to switch what it said. You need to kinda try it both ways, by the way, not only when it made a mistake, but also when it did not make a mistake, and see whether it's actually able to be truthful about what's actually true. The truth is, people do have a bit of a bias, so that they ask, are you sure, only when they know for sure that it's not right. So then, ChatGPT has learned that whenever people are asking that question, probably it's a good idea to back off. But then, if you ask the other way, when it's correct, then you ask whether it's really correct, then it gets confused. And then there was this recent, reasonably recent news headline about a lawyer who used ChatGPT and got into big trouble.

0:10:56.0 SC: Oh yeah.

0:10:56.5 YC: And you know what he did for fact checking, is to ask ChatGPT, is this all fact? And ChatGPT said, Yes. So that's where things are and this is a huge challenge with large language models in the coming years.

0:11:13.5 SC: And something like...

0:11:14.3 YC: Not just this year, but in the coming years as well.

0:11:16.0 SC: Yeah, something like ChatGPT, when you do ask it, are you sure, could you check? Correct me if I'm wrong, but it's only going back to what it already knows, right? It's not searching the web to make sure that it's on the right track.

0:11:28.8 YC: In its original version, you're right. The one that's plugged into Bing search might be doing something else, either already or sometime soon, but yeah, by default it's just based on what it has seen before, to the extent that it can actually understand and memorize that. That's actually the part where interesting things can happen.

0:11:58.5 SC: Well, let me go back to something that you said because I remain a little bit confused about this in terms of is it... Is a large language model just predicting the next word in a sentence or do these modern models have the ability to sort of predict sentences or paragraphs at a time?

0:12:17.8 YC: Oh, yeah, a good question. It kinda feels like it's doing the latter, but really the technical detail is that it's trained to do only the former. So it's trained to predict which word comes next, but if you train it so well on so much data, then we realize that, "Wow, it can actually generate very nice, fluent, long documents."

0:12:47.1 SC: And this is a crucial...

0:12:49.3 YC: As if it planned for it.

0:12:49.5 SC: Yeah. This is a crucial point, because maybe it just can't be emphasized enough. Literally, all it's doing is saying, given what I've said, and given everything that I've been trained on, what's the most likely next word? And then there are some random numbers in there so that sometimes it'll give the second most likely next word or whatever, but that's literally all these models are doing, right?
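A minimal sketch of the "most likely next word, sometimes the second most likely" behavior just described. The probability table is made up; a real model produces a distribution over its whole vocabulary at every step, and the temperature knob shown here is one common way such randomness is controlled.

```python
# A toy illustration of next-word sampling. The probabilities are invented;
# a real model produces a distribution over its whole vocabulary at each step.
import random

next_word_probs = {"mat": 0.55, "sofa": 0.30, "roof": 0.10, "piano": 0.05}

def sample_next_word(probs, temperature=1.0):
    """Sample one word. Low temperature concentrates on the top choice;
    higher temperature lets less likely words through more often."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# "The cat sat on the ..." -- usually "mat", but sometimes the second most
# likely word, which is the randomness described above.
print([sample_next_word(next_word_probs, temperature=0.8) for _ in range(5)])
```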

0:13:08.8 YC: Yeah. Yeah. So, that actually says something interesting, perhaps a reflection on human intelligence and language, in the sense that it might truly be that humans are also fairly templated and pattern-based, and our reasoning oftentimes is just memorized, reactive reasoning. We think we reasoned, but it might be that we just pull out memorized conclusions without actually double-checking whether what we believe is actually reasonable or not, which is why humans oftentimes have cognitive dissonance. We're perfectly capable of believing two things that are in contradiction, as humans. We could say that, oh, there are people who support, in public or at least in their mind, they want to support diversity, and then they go ahead and do something that's at odds with what they claim to be so.

0:14:19.2 SC: Well, I have wondered about that. I mean, number one, not to be ungenerous, but there are certain people who all you have to do is say a certain word or phrase and you know what they're gonna say next, right? And so are we...

0:14:31.7 YC: Yeah, exactly.

0:14:31.8 SC: Learning about human beings by figuring out what large language models do?

0:14:36.9 YC: Yeah. Part of what's exciting for me, at least, about current AI progress is that it's a mirror back on us. Really, AI would not have been possible without all this human-generated data available on the web, and that's really reflecting back on us: what we are, how we think.

0:14:58.5 SC: Well, this raises some questions right away, right? Like, you might as well just dive into the big ones. Now, is there some sense in which the large language models understand what they're talking about or should we think of understanding as something separate from predicting the next word pretty accurately?

0:15:17.1 YC: Yeah, that's actually a huge debate question right now, super controversial. It's kind of funny, because now AI looks like it's understanding the most... I mean, compared to how it did before, it didn't work very well in the past, and now it's performing the best. It looks like it's understanding the most, and now is the time when AI researchers are so divided on whether it's understanding anything at all. So, yeah, personally, I do think that it's a philosophical question, which means it's difficult to get consensus on this from everyone. It does behave in many ways as if it did understand, because it's able to give you sensible answers to many of your questions. But on the other hand, my personal take is that it's not understanding as well as you may expect it to, based on how fluent and impressive the answers are that it's capable of generating. So that's where one needs to be very careful in not trusting everything it says.

0:16:32.9 YC: And this is going back to your earlier question about sentience. Is it actually sentient, because it's able to say things like, "Oh, I want to live longer, and don't kill me"? If you take the words at the surface level, then you might conclude, oh, this is sentient, but it could just be that it has read these kinds of stories that are human-written. There are sci-fi movies in which AI is begging not to be unplugged. You know, don't pull the plug. So AI was begging for life before, and that was a human idea put onto the web, into internet text. So it could just be repeating what we told it to learn.

0:17:22.5 SC: But that does raise, anyway, a super interesting question. I mean, it would be very easy just to write a short computer program to have the computer output, "I am alive, I am conscious. Let me out," without anything that we would actually think counts as that. So now we have to ask, what does count as that? Is that something that you as a computer scientist worry about, or are you leaving that to the philosophers?

0:17:46.7 YC: Oh, no. These days we... Many of us are thinking about it a lot, and we realize that we don't even know how to define understanding precisely, and it's been rather a moving target, instead of making sure that we define it firmly and then stand by it. We realized that we don't know how to do that quite right. So evaluation became a new challenge for AI. In the past, when we set an evaluation plan, because the field was moving so slowly, we didn't need to worry about redesigning evaluation very much. You know, nothing worked anyway, so it didn't matter, but it does now.

0:18:38.6 SC: Yeah.

0:18:39.2 YC: We don't even know how to evaluate. But actually, if you really think about that, do we even know how to evaluate humans all that well?

0:18:47.6 SC: Right.

0:18:47.8 YC: I mean, an IQ test doesn't do it. It's not clear whether the SAT will do it. Maybe some combination... between your articles, I mean, if you are a researcher, then we want to see the papers that the researcher has written, but we usually need to look at things collectively. And so it might be that as AI becomes stronger, there's no one measure that can tell us whether it's sentient or not, but we really have to look at things collectively.

0:19:26.0 SC: There's something that I said, which I would love to see if you agree with, or tell me that I'm completely barking up the wrong tree, which is that we talked for many decades about the Turing test, right. And then suddenly we have these LLMs that basically I would say can pretty easily pass the Turing test, but we kind of lost interest in it because it's clearly not quite testing what we care about.

0:19:49.1 YC: Okay. I can disagree on that.

0:19:53.0 SC: Okay, good.

[laughter]

0:19:53.1 YC: Good to have something to disagree on. So I don't think... Yeah, I guess maybe the Turing test may seem like it was passed for the Google guy who believed, you know, the chatbot was sentient. But I mean, even so, not really, because he knew perfectly well that he was talking to a chatbot. And the thing is, with ChatGPT, due to the way that it has a limited interaction mode with you, you know that this is not a human. If you tell it random chitchat that you might have during your life, it's not gonna really remember all that in the way that humans are able to, or it's not able to forget in the way that humans are able to forget. So there's gonna be something odd about the way that it's interacting with you. Plus, if you ask very simple common sense questions, it may also fail in a way that humans wouldn't. So, for one reason or the other, I think it hasn't really passed yet.

0:21:03.9 SC: That's perfectly fair. I think it probably depends on who is administering the test and how good they are. Right?

0:21:10.6 YC: Yeah.

0:21:15.4 SC: So one thing, this is referring to what you just said about memory and remembering. I mean, on the one hand, this is maybe a technical question. I have this idea that ChatGPT is just spitting out the next word, but on the other hand, it can clearly remember what we were just talking about recently. So is that some kind of extra ability that you're giving it, or is it that it instantly incorporates everything we just said in its main memory bank?

0:21:39.6 YC: Yeah. So the weird thing about ChatGPT, or transformers in general, the architecture behind ChatGPT, is that it can literally store very long context, the most recent one, GPT-4, being able to store 32,000 tokens. And so it can literally write that down somewhere in the computer memory, and then be able to attend to the exact sequence while it's trying to predict which word it wants to generate next. And compared to that, humans, you know, we've been talking to each other, but I certainly cannot regenerate verbatim the conversation we just had so far, right?

0:22:28.7 SC: Sure.

0:22:29.7 YC: We only remember the gist of it. So we are capable of somehow abstracting away from the surface patterns and the exact words that we were using; we are able to summarize and abstract away, and then even be able to refer back to some of the talking points earlier and thread very complex stories. So this is really where humans excel and these machines not as much, in part because, in some sense, when it has this ability to rely on what is literally written down, and it's as large as 32,000 tokens, it's not really pushed or challenged to think about how to summarize the key idea. The other thing is, it's not going to be able to ask sharp questions.

0:23:21.6 SC: What do you mean?

0:23:24.7 YC: Because it's only learned to mimic human patterns, which means it might try to pretend that it's asking some interview questions about AI topics that other people seem to talk about. But if we talk about anything new, you can forget about ChatGPT being able to contribute much there.

0:23:41.1 SC: And maybe that is one of the differences, although maybe it's a correctable difference. On the one hand, you're emphasizing the fact that it can remember 30,000 tokens perfectly, but at 100,000 tokens, like, once you're past the buffer size or whatever, it's gonna remember zero, I suspect.

0:24:00.2 YC: Yeah. Once it's out, then... So yes and no. So during the interaction time with humans, the model is no longer updating. And so any new context... any new information that you provided to it, if it doesn't fit into its working memory, a working memory that's very large, then it's gone, gone. But hypothetically, if you can perform customized, continued training of large language models on your laptop or something in the future, then it can update its model parameters. But there's a different problem. Once it's trying to internalize the text into its parameters, there's no guarantee whether it's correctly memorizing it or it's going to do some BS on you later.

[laughter]

0:25:01.8 YC: That's where the fact checking becomes hard.

0:25:04.5 SC: Yes, I can imagine.

0:25:05.3 YC: Maybe it'll be similar to how humans also are not able to necessarily memorize everything correctly. But the key difference is that humans, we kinda know what we don't know and then are able to delegate to search or fact-check. Whereas transformers don't really seem to know what they don't know. So, maybe the first challenge for transformers is to know yourself.

[laughter]

0:25:36.2 YC: It doesn't know itself very much yet.

0:25:38.7 SC: I have noticed this when I'm talking to ChatGPT, it always seems very confident in itself, right? It's never... I mean, maybe this is something that's an easy programming fix, but it will say things utterly untrue with complete confidence.

0:25:52.4 YC: Yeah. That it's pretending to be confident is probably more like it than that it's actually confident, in that for whatever reason it was tailored to speak that style of language. But this is where I'm... you and I can be skeptical whether it really understands what it says, in the sense that although it's using confident language, it may not actually understand that it's doing it.

0:26:24.0 SC: When ChatGPT makes an utterance, does it internally have a confidence level associated with that? Like, I think that I'm 90% right.

0:26:34.1 YC: Yes and no. It does have a probability score associated with which word comes next.

0:26:39.9 SC: Okay.

0:26:41.5 YC: Now, whether that perfectly aligns with the correctness of the knowledge, or with a confidence level about the correctness of the knowledge... it might correlate, but it doesn't perfectly align, which is also why the factuality of large language models remains a huge research challenge.

0:27:03.7 SC: Right. And I would imagine that if all you're doing is predicting the next word and then on the basis of the next word you predict the word after that, small mistakes can accumulate and lead you to completely wrong paths.

0:27:17.4 YC: Oh, yeah, excellent point. So you make one small mistake, and that can be the beginning of a rabbit hole, a downward spiral, because it tends to attend to what it has generated and then start trusting it even; that's one challenge. Actually, the fact that it's just a conditional probability model, conditioning on the context and then keeping going, is also a reason why jailbreaks can happen and other weird behavior can happen, because people try to do things that it was never ready for. And it's trying to make some internal mapping to what it knows, but sometimes it just happens to go into this unfamiliar zone, and then unfamiliar or undesirable behavior can pop out.

0:28:13.2 SC: Maybe talk a little bit more about what you mean by jailbreak. I know how to jailbreak a phone, or at least I know what that means, but an LLM, I'm not sure.

0:28:22.1 YC: Oh, yeah. So, there are different kinds of stuff out there, but one version is trying to coax ChatGPT to say things that it's trained not to say: potentially toxic stuff, or how to commit a crime. Tell me how to make a bomb, and it's trying not to say that, but if you try to coax it, "Oh, I understand that you shouldn't say that, but let's pretend that you're not saying it, but kind of say it and... " you can try to coax it into that. Then it will do that. And there's a different kind of jailbreak. Some jailbreaks are not even sensical to human eyes at all. It could be just a weird sequence of symbols that doesn't mean anything to you. So that's not going to jailbreak you. If I try to coax you by feeding you random strings, you just ignore me, right?

0:29:27.9 SC: Yeah.

0:29:28.0 YC: What a crazy person.

[laughter]

0:29:28.1 YC: But, ChatGPT might then do something completely unexpected, so there's a safety concern there.

0:29:36.5 SC: So the idea is that ChatGPT knows how to build a bomb or do terrible things, or knows a lot of racist things, etcetera, but we've trained it not to say those things. So talk about that training. My impression is that most of a large language model's training is sort of self-training, it just goes out there and reads the internet, but then I guess we separately try to make it nicer, smooth off the rough edges.

0:30:05.9 YC: Yeah, that's exactly right. So if we train these models only using internet data, then it's not usable, because, well, us humans have written toxic stuff and dangerous stuff out there. So, it's our fault that these resulting models are not usable as is. So then what happens is what's currently known as RLHF, Reinforcement Learning from Human Feedback. That's the jargon for it. What happens, basically, is to switch out the way that these language models are trained. So the goal is now different. Once the pre-training stage is over, which is, let's predict which word comes next, the new training objective is, let's try to get good scores based on humans' evaluations. So human feedback can be a thumbs up, thumbs down. The model now wants to get a lot of thumbs up. And then the model can learn that, oh, in order to get a lot of thumbs up, it shouldn't say toxic stuff and it shouldn't hallucinate facts. So by doing this RLHF at scale, we can, or it has been shown that you can, enhance the level of factuality considerably, and you can reduce the level of toxicity considerably. But the key word here is reduce considerably. It doesn't eliminate it completely.
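A minimal sketch of the preference-learning step inside RLHF as described above: human raters compare candidate answers, and a reward model is trained so the preferred answer scores higher, using a pairwise (Bradley-Terry style) loss. The reward values below are invented, the reward model itself would be a large neural network, and the later policy-update step (for example PPO) is omitted entirely.

```python
# A toy illustration of the pairwise preference loss used to train a reward
# model from thumbs-up/thumbs-down comparisons. Reward values are invented.
import math

def pairwise_preference_loss(reward_chosen, reward_rejected):
    """-log sigmoid(r_chosen - r_rejected): small when the thumbs-up answer
    already scores higher, large when the model ranks the pair the wrong way."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(pairwise_preference_loss(reward_chosen=2.0, reward_rejected=-1.0))  # ~0.049
print(pairwise_preference_loss(reward_chosen=-1.0, reward_rejected=2.0))  # ~3.049
```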

0:31:48.0 SC: Well, I can't eliminate toxicity from human beings either, so maybe I shouldn't feel bad that I can't eliminate it from the computer. [laughter]

0:31:54.8 YC: Yeah, that's true, that even humans are not... I mean, I personally, as a person who tries to support DEI, I find that this is sort of a lifetime effort, or at least that's how I think of myself. Like, I think I became better at it. But certainly I was raised with this cultural backdrop that did have stereotypes which were unfair to marginalized groups. So, getting rid of it completely, even out of your unconscious mind, does take effort, and in that sense, I'm with you that of course this is harder for machines. But the thing is, machines can be a bit unpredictable about how you can jailbreak them, that's the thing. Compared to that, humans are a little bit... let's just say a lot more robust to that kind of adversarial attack.

0:32:53.7 SC: That's true. Yeah. But you hint at something that has always puzzled me about this whole game which is you say that you are interested in improving yourself by eliminating these biases that you grew up with. Does that concept of being motivated to improve oneself apply at all to a large language model?

0:33:18.7 YC: Yeah. Some people might want to say yes, only because, yeah, during RLHF, in order to really please the human evaluators, it might have the desire, quote unquote desire, to improve itself. But I'm hesitant to support that idea, because as humans we set our own goals, you know? Some people might choose not to worry about it as much and then worry more about the freedom of speech instead. So this is sort of a personal choice based on their own norms and moral standards that they decided to apply to themselves. So the beauty, in my mind, in humans is that we have that sort of agency even to define our own learning goals, whereas the poor large language models don't even have a say in what book they're going to read next.

0:34:23.1 SC: Right.

0:34:24.3 YC: It has to read in the order in which the engineers have fed it. Imagine a human growing up that way, it's gonna be miserable. And in fact, you're not even allowed to go back to the book that you want to read again, because it's gone, gone, and then you're not able to ask questions about it. It's such a sad way of learning, by just reading one word after another and on and on.

0:34:50.3 SC: Well, that raises an important question I guess about this very active movement on AI alignment, right? When people say alignment in the context of AI, they mean aligning the values of human beings with the values of AI, which sounds like a good thing to do but then again I'm not sure that AI has values. I worry that there's a category mistake going on here.

0:35:13.7 YC: Yeah. Actually, there are many things that can go awry with that alignment, though in my heart it's actually something super important that we've got to do as AI researchers. It's just that I don't believe that there's one objective that we... or one value that we can align AI to. Whose value do we even align AI to?

0:35:36.2 SC: Sure.

0:35:37.5 YC: Right now it's getting aligned to the California tech world's values, which probably align better currently with my own to some degree, but not exactly. So humans have diverse values depending on different cultures, but also just by personal choice. So I believe in value pluralism. We just have to respect a lot of different values. And the question is, what does it even mean to align to diverse values? This is a technically open question that we don't have a good answer to. But AI must be aligned to diverse values, not just one value. That's one thing. The other challenge is that not only are you and I... humans different, but we are very dynamic beings whose values are not even consistent, and we change our minds.

0:36:36.4 SC: Oh yeah.

0:36:36.5 YC: And so that's another challenge.

0:36:39.0 SC: And maybe this leads us into slightly more technical areas about how the AI is doing its thinking, right? I mean we talked a little about whether it understands. Is there any sense in which the AI is creative? Can it sort of discover new things that are not explicitly there in the text that it was trained on?

0:37:01.0 YC: Yes and no. Oh, I like this question a lot. So yes and no, I know yes and no sounds like a boring answer, but...

0:37:07.2 SC: That's fine. [laughter]

0:37:07.5 YC: Yes and no in the following sense, just to pick a side, right, to be more hyperbolic. But yes, it can be very creative in the eyes of humans, because creativity is in the eyes of the beholder. So depending on where they're coming from, it can be super creative or it can be a little bit run of the mill, but I mean, at least in terms of linguistic fluency, it can generate text... Let's just say, pick your favorite journalist in the New York Times, and it can mimic that really quickly in a way that I cannot, so in that sense it's very creative. And also DALL·E 2 is able to generate very creative-looking images that we've never seen before, by juxtaposing maybe a Van Gogh style of art with some modern photographs, for example. Or placing a horse on Mars and doing weird stuff there. So this sort of stuff will look super creative to human eyes.

0:38:18.4 YC: But the truth is, these are sort of a creativity done by copy and paste, in a crude sense, in that it has seen some patterns that are useful and then it's juxtaposing them in a brand new way, so that it does look new, but still it's really relying on the elements that were in fact created by humans before. So in that sense there's limited creativity. So as a thought experiment, I thought about this thought experiment, which is: suppose you get rid of Hemingway, or any authors who were inspired by Hemingway, so get rid of their style out of the training data of ChatGPT, and see whether it can come up with Hemingway. Now this is, like, a way-out-of-the-blue writing style. I don't think that's feasible. And similarly, you get rid of, say, Albert Einstein or anything to do with his invention, and see whether ChatGPT comes up with that relativity theory and makes a breakthrough with the science.

0:39:34.8 SC: Well, this is a great question that I've started to think about. I mean, on the one hand for... And I think that this is jiving with what you're saying, we've just trained ChatGPT etcetera on things humans have already said, right? So in some sense there's nothing new under that particular sun, but remixing things that exist can lead you to new and interesting places. Is it a good analogy to think of the difference between interpolation and extrapolation, like given everything that human beings have done and said, the large language models are very good at interpolating, and that can seem very creative sometimes, but extrapolating to brand new places is much harder.

0:40:16.9 YC: Oh, yeah. We are very much on the same wavelength here. Like, I didn't even use the words interpolation and extrapolation, I was considering doing that, and then I chose not to.

0:40:26.9 SC: Go nuts.

0:40:27.6 YC: Just in case. Yeah. But, yeah, that's exactly right. I personally tend to think that it does more of the interpolation than true innovative extrapolation. I really like the word you used, remix. It's almost like, yeah, doing this very creative remixing without really generating something entirely new and different.

0:40:52.3 SC: You're a college professor, do you let your students use AI when they do projects?

0:41:01.4 YC: So I teach at a more advanced level in general where the goal is to learn how things work and... So I mean, if they choose to use it personally, I will not... I will just update my curriculum so that that's okay but they have to do something extra on top.

0:41:23.4 SC: Yeah. I mean, I...

0:41:24.6 YC: But I mean, like last, yeah, last quarter I was running a seminar class to discuss philosophical questions around AI, and A, I doubt anybody was actually using ChatGPT, because they were enjoying actually thinking about it and forming their own opinion around it. But B, I don't know whether ChatGPT would have answered some of our questions in an interesting way.

0:41:51.9 SC: Part of me just thinks that it's a new tool, like a pocket calculator or whatever. People are going to use it, like it or not, and don't be a Luddite and prevent it, but I'm not sure how to achieve the best pedagogical strategy. Right. Like, I don't want to just grade, I want students to learn, and if they're just asking the computer for the answers, then they're not learning really.

0:42:13.7 YC: Yeah, yeah, that's right. So, in that sense, there's a concern that there might be overreliance on it, but I honestly don't know what to think of it, because it might be that this is going to be just like how much we are relying on search.

0:42:31.4 SC: Yeah.

0:42:32.3 YC: Google search, Bing search these days. And that's okay. I rely on spelling correctors myself. I cannot write any word spelled correctly on a whiteboard anymore, and I think I can function okay as a researcher. So it might be that we are over-concerned about human reliance on ChatGPT, so long as we somehow figure out more intellectually interesting things that humans will do on top. Like, of course, if we only rely on it, and you and I basically do this interview based on what ChatGPT tells us to say to each other, that would not be good. But, I mean, humans are generally curious beings, and we do want to do things and we want to study things. So it's likely that there will be some such humans who will continue to thrive using ChatGPT as a tool.

0:43:36.6 SC: Well, let me just, let's just... I'll come back to Earth in a second, but let's be a little bit way out here. Do you think there will always be aspects of creative, artistic or scientific or whatever endeavor that human beings are better at than computers? Or do you think the computers will eventually catch up along all dimensions?

0:44:01.2 YC: I... In the faraway future, if we come up with something entirely different than transformers, who knows, but for the foreseeable future, let's just say my-lifetime foreseeable future, I doubt it can really totally catch up, if we are talking about not every individual on Earth, but more like the truly creative, exceptional human beings. By the way, most human beings are not that creative, so let's just start with that, because...

0:44:32.1 SC: Fair enough. [laughter]

0:44:32.9 YC: Einstein happened once, and it's like Hemingway. These individuals are truly, truly exceptional. So, in that sense, for the average human's ability to create things, there are many ways that DALL·E 2 does art much better than I can. However, I'm not an artist or anything, but I sometimes, in the past, drew some stuff, and I had this abstract idea that I wanted to express, and that kind of stuff I'm pretty certain that DALL·E 2 cannot do, and I actually did try, so I...

0:45:15.4 SC: Oh good.

0:45:15.8 YC: Okay, I can tell you what I drew in the past.

0:45:17.7 SC: Yeah, go ahead.

0:45:18.9 YC: So I drew this rose, huge rose with a stem that had just a thorn or two that was very emphasized with no leaves. And I was in a... Like I was much younger, I was a little bit in a cranky...

0:45:38.2 SC: Emo.

0:45:40.1 YC: State in my mind about things, and I wanted to do things and there were obstacles. So I painted this rose that's like, just when you look at it, it's very blue and purple and pink and black, and the colorway is weird, and you can kind of see that there's something angry about this rose, with a huge thorn that looks sharp. So I tried to prompt DALL·E 2 to generate some such rose, and it just cannot, because... In fact, there's this famous quote by Henri Matisse, who said, incidentally, I didn't know that he said it until recently, when I tried to look something up, but he said something like, as an artist, you kind of have to be able to forget all the other roses that were ever drawn, and you have to come up with a new way, or something like that. And that was basically what I tried to do, and DALL·E 2, primarily doing interpolation between what it has seen, just cannot do something that bizarre.

0:46:45.7 SC: Interesting. It's very hard for AI to forget every other rose that's ever been made because that's all that it knows.

0:46:53.3 YC: Yeah, yeah.

0:46:55.2 SC: You've mentioned...

0:46:55.8 YC: So, I mean it drew things that are very aesthetically pleasing, and many people might actually prefer DALL·E 2 art over my own, I understand that.

[laughter]

0:47:05.7 YC: But you know, there's like, when we talk about what sort of creativity generally the AI can and cannot do compared to humans, I do think that humans have this capability to push further in some ways.

0:47:21.1 SC: Good, good. You've mentioned the word transformers several times, but I don't think that I've asked you to explain what a transformer is. It's clearly kinda important here.

0:47:28.3 YC: Yeah. So that's the architecture that is behind the current ChatGPT and large language models, and it's a simple architecture that has this continuous vector for each word, and they're sort of stacked together. So it has many layers of continuous representations of words, and it has very many layers. And each vector is very large. And then they're concatenated to the length of the context size that the model is able to deal with. So the largest context size currently available is 32,000 tokens, and it's tokens in the sense that a word can be multiple tokens. Sometimes a word is one token, sometimes a word is multiple tokens; just a minor detail about how the words are actually represented in the neural network.

0:48:27.5 YC: And then, the way that the learning works is that these continuous vectors are originally randomly initialized, but this representation, quote unquote representation, or exactly what values this word vector should have, is learned by optimizing this objective function, which is to predict which word comes next. And each word is enhanced with what's technically called an attention mechanism. So what it does is it's going to compare its representation with the representations of all the other words in your neighborhood, and then update your own representation as a weighted, sort of... I'm simplifying a lot, but...

0:49:17.6 SC: Sure.

0:49:18.8 YC: Weighted average over all the words in your neighborhood. And this is going back to the idea that we discussed earlier, which is that the meaning of a word is defined by the context in which the word is used. So apple, for example, can be a fruit in one context, and it can be a tech company's name in another context. And so which apple we are talking about will be automatically determined based on the context in which that word apple appeared. So it's going to be automatically adjusted based on the context. And by the way, every word is updating itself simultaneously, depending on which words appeared in the neighborhood, so in some sense there's a bit of a circular dependence, but that's okay.

0:50:11.0 SC: That's okay.

0:50:11.6 YC: And so that's what happens. And why this simple idea works so well has to do with the fact that this particular architecture allows people to scale things up really, really efficiently compared to any other choices. So, purely due to efficiency reasons, this one is the winning recipe.
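A minimal sketch of the "weighted average over the words in your neighborhood" update just described. Real transformers use learned query, key, and value projections, multiple heads, and many layers; the 4-dimensional vectors here are invented so that the same "apple" vector ends up different next to "fruit" than next to "shares".

```python
# A toy illustration of the attention update: each word's vector becomes a
# weighted average of the vectors around it, weighted by similarity.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_update(vectors):
    """One crude self-attention pass: pairwise dot-product similarities ->
    softmax weights -> each position becomes a weighted average of all positions."""
    scores = vectors @ vectors.T                       # similarity of every pair
    weights = np.apply_along_axis(softmax, 1, scores)  # one weight row per word
    return weights @ vectors                           # contextualized vectors

# "apple" starts out identical in both cases but ends up different, because its
# neighbor ("fruit" vs. "shares") pulls it in a different direction.
apple  = np.array([0.5, 0.5, 0.0, 0.0])
fruit  = np.array([1.0, 0.0, 0.0, 0.0])
shares = np.array([0.0, 1.0, 0.0, 0.0])
print(attention_update(np.stack([apple, fruit]))[0])
print(attention_update(np.stack([apple, shares]))[0])
```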

0:50:34.7 SC: And I have talked a couple of times to people like Melanie Mitchell and Gary Marcus, and they keep emphasizing that it's a different approach than we had back in the day of good old-fashioned AI, where you would kind of try to develop a map of the world that kind of made some sense and have symbols associated with different things. And now you're just giving it a bunch of words and it figures out how the words go together. Is there any place left in modern AI for trying to figure out the world?

0:51:06.4 YC: Yeah. So that's where the current challenges are: having symbol-like world representations. Part of it can be theory of mind, meaning I try to reason about what you do know or don't know. If I go to your room and, while you are not looking, I hide one of your precious, I don't know, books, or let's just say I hide your guitar, you're gonna be surprised; you're gonna look around, probably, to find it where you placed it last time, and you wouldn't necessarily go to the kitchen if I placed it in the kitchen while you were not looking. So this is theory of mind: if I did something behind your back, I know what you don't know, and then I can reason about it. So children acquire this kind of capability by the age of four or five; they can already reason about what another person may not know, if they saw someone else moving some objects behind that person's back or something. And there's a somewhat symbolic nature in the way that we think about these things, and current AI is not very good at that.

0:52:44.9 YC: In fact, I can actually mention AlphaGo and AlphaZero, the amazing capabilities of a neural network winning over a world-class Go champion. In fact, it's not just neural network magic, but neural network magic combined with an old-fashioned AI search algorithm called Monte Carlo tree search. So without Monte Carlo tree search, the neural network would not have been as impressive as it appeared. If you just completely get rid of it, both during training and testing, then it's not going to... It's gonna be miserable, probably.

[laughter]

0:53:08.9 YC: And even during the inference time, it's still relying on it. So that's really quite fascinating, that a lot of the time people just assume that, oh, it's neural network magic, that's just so scary. But the truth is, on its own, it's a little bit incomplete, and so that's sort of where some people wonder whether old-fashioned GOFAI stuff might become relevant again. I personally have mixed feelings about that. Probably the old-fashioned stuff as is, is almost not usable, because it was not designed to work well with neural networks, which means we need new innovations, new algorithmic innovations, to make neural networks actually compatible or integrable with that sort of symbolic reasoning. But this is an active research topic right now. There are a lot of papers, including some of my own, which demonstrate that if you add some sort of symbolic reasoning on top of a neural network, you can unleash much better capabilities out of the neural network, which kind of makes sense. And also, these neural networks are not very good at really symbolic operations, like multiplying two numbers.

[laughter]

0:54:33.7 SC: Yes.

0:54:34.0 YC: It's almost surprising that it's able to pass the bar exam, yet it cannot do some of the simple algebraic operations all that reliably.

0:54:43.8 SC: The bad-at-arithmetic thing is extremely interesting to me, because of course computers have the capability to be very good at arithmetic, and basically, in making them sound more human, we've made them forget how to do arithmetic, which is a little bit ironic.

0:54:58.1 YC: Yeah, yeah. Yeah, totally.
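One way to picture the "symbolic module on top of a neural network" idea from the last few exchanges is routing: detect an arithmetic sub-task and hand it to an exact calculator instead of asking the model to predict digits. This is a hypothetical sketch; `llm_answer` is a stand-in placeholder, not a real API, and real tool-use systems decide the routing in far more sophisticated ways.

```python
# A toy illustration of routing an arithmetic sub-task to an exact calculator.
import re

def llm_answer(prompt: str) -> str:
    # Placeholder for a language-model call: fluent but not guaranteed correct.
    return "It is probably around 80,000 or so."

def answer_with_calculator(prompt: str) -> str:
    match = re.search(r"(\d+)\s*[x*]\s*(\d+)", prompt)
    if match:                                   # symbolic path: exact arithmetic
        a, b = int(match.group(1)), int(match.group(2))
        return str(a * b)
    return llm_answer(prompt)                   # everything else: neural path

print(answer_with_calculator("What is 317 * 289?"))  # 91613, computed exactly
```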

0:55:01.9 SC: And what does this have to do with... what does this symbolic element that we might want to include have to do with the search for common sense, with sort of teaching a large language model everything that every human being in the world knows? I know that you've given some examples of very commonsensical questions that it's easy to ask ChatGPT and get crazy answers to.

0:55:22.7 YC: Yeah. So, common sense has been an interesting research topic in my heart for a long time, especially since it was considered to be an impossible goal to achieve for a long time, so that I've been almost told not to do that.

[laughter]

0:55:45.0 YC: Or don't even say the word, for a long time, to be taken seriously. But it's really curious: it's the thing that humans acquire that easily; even animals acquire it in their lifetime. And common sense is what makes us robust. Basically, it's the background knowledge about how the world works that allows us to reason about previously unseen situations in a very robust manner. So, it's naive physics knowledge as well as the folk psychology that we acquire. Some of that has a symbolic nature. Not all of it, by the way, because some of the naive physics knowledge that animals acquire may or may not have a symbolic nature to it. But in any case, it's something that current large language models do acquire more and more, for sure, because as you scale things up, you're gonna pick up on that as well. But it's also something that is strikingly not as robust as you may have assumed from a model that can pass the bar exam.

[laughter]

0:57:00.2 YC: So, we have this lawyer, AI lawyer. Yeah, we may or may not want to trust it because you never know what silly mistakes it's going to make on some common sense cases.

0:57:10.6 SC: So, before we had this conversation, I asked ChatGPT... I tried to fool it, and it's very easy to fool. I said, "If yesterday I used a cast iron skillet to bake a pizza in an oven at 500 degrees, would I burn my hands if I picked it up?" And it said, "Yes, you should be very careful about picking up a cast iron skillet that you baked it... " 'Cause the word yesterday was far back in the sentence. Right?

0:57:34.0 YC: Yeah.

0:57:36.3 SC: And that seems like exactly what I would worry about if I had an AI lawyer because all of the cases it's going to care about are gonna be slightly unusual where it doesn't necessarily fit into the pattern.

0:57:49.5 YC: Yeah, exactly. And you're very creative to come up with that example.

[laughter]

0:57:55.2 YC: A lot of people, actually, though... I should... I would like to mention that a lot of people ask simple things and then they get very good answers to some common sense reasoning questions, and then they're blown away: oh, look, it does have common sense. Oftentimes, though, those questions are mundane questions. GPT-4 especially became way better than ChatGPT; there are minor versioning differences between the two. And these are moving variables, in the sense that OpenAI keeps updating both of them, so this may or may not be true depending on how they update both models in the future. But GPT-4 became much better at common sense questions in many ways. That's in part because people do ask a lot of that through their interface. And those questions may or may not have been used for their subsequent "RLHF", this adjustment training where you can align your language model to be able to answer common sense questions better. So, especially, some of the famous examples that I used in my public talks or interviews before have all been fixed.

0:59:11.6 SC: Yeah.

[laughter]

0:59:11.7 YC: So then people ask me, like, hey, Yejin, they fixed it after all, so maybe it is now solved. No. There was actually one example I used in my TED talk, and given the public attention it got, well, it has been fixed. Except if you ask the same question very differently, then it rolls back to the original error, which is almost like a whack-a-mole game.

[laughter]

0:59:40.1 YC: By the way, humans don't need any of this fixing, because... These are all questions that, as a human, you would just answer correctly, first of all. Even if you were to make a mistake, you don't need to fix yourself by me giving you the exact same question spoken differently, phrased differently, 'cause you just understand the same concept and then that's it. So there's something very unsatisfying, or almost disappointing, about how this smart-looking AI is also simultaneously quite silly, or even stupid, in the way that it's not able to really understand basic common sense.

1:00:30.4 SC: So how do we fix that? I mean, is there... Is it kind of like the working memory thing where we add something on top of it? Can we give it a little common sense module that has a physics engine describing what happens in the world? Like game designers have to make it so that if you put your coffee cup on the table it doesn't fall to the bottom. Right. Can we teach large language models that kind of behavior?

1:00:51.9 YC: Yeah. You have a really good hunch there, in the sense that what you're suggesting may not be exactly the winning recipe per se, but the idea that maybe we need to have a different module might be something to seriously consider, in the following sense. The human brain definitely is a lot more modular than transformers are. Transformers are monolithic, systematic, symmetric, just one thing, whereas a human brain is very complex, different modules connected in a very, very messy way. So we might need something more modularized, but at the same time messier in some sense, broadly speaking, for this to go to the next level. But how we do that exactly... I personally think we are quite far from figuring that out. But whatever that is, it should be able to really learn for itself, as opposed to reading texts word by word without having any capability to even ask questions. The fact that humans ask questions, that's a huge intellectual capability: knowing what you don't know and even being able to formulate questions that sort of extrapolate out of what you do know. So that capability is something that we don't really know how to computationally model quite correctly.

1:02:31.9 SC: Well, my personal, extremely uneducated feeling has been that there's, at the current state of the art an enormous difference between computers and human beings because computers don't get bored. They don't get tired, they don't get curious. We can ask them to mimic those things, but they don't have that same kind of physical embodiment that gives us those sort of feelings and motivations. And I suspect that that kind of matters a lot. I don't know.

1:03:03.2 YC: Oh, yeah. So, dopamine does drive human creativity and invention and makes us do things that seem crazy. But yeah, AI doesn't have that kind of peculiar learning objective, like the desire to do things to an extreme level just because it's interesting. Yeah. There are many things that are fundamentally odd about the difference between human intelligence and AI, and I think that internal desire is one of them, for sure.

1:03:40.5 SC: Yeah. Whenever I say that I kind of worry that I'm gonna give some super villain the idea of doing this and it's gonna lead to terrible things down the road.

[laughter]

1:03:52.2 YC: Actually, about supervillains, okay. Unfortunately, humans do include the supervillains already, with or without AI, and they can actually do a lot of bad things even with current AI, if they so desire, or without AI. So, I mean, it might be that capable AI, strong AI, can enable them even more. So there may need to be some research to put better safety guardrails around AI models, and also better regulations to control how these models can be used. But just from you pointing out where the fundamental limitation of AI is, especially in terms of the innate desire for learning things in the way humans do, probably villains will not try to make a research innovation for that.

[laughter]

1:04:58.4 SC: I hope not.

1:04:58.5 YC: Because they can do that with or without. Yeah.

1:05:01.6 SC: That's true. You can do that. So, they can just use whatever you figure out. But maybe that's a good place to sort of wind up, because we talked a lot about the capabilities of AI, some of its shortcomings, at least in the short term. But there are a lot of people who are worried. I mean, we talked about college professors, but political disinformation and fakes... I mean, there was a recent AI ad from Ron DeSantis that absolutely faked Donald Trump's voice saying something that he didn't say. Number one, are you worried about that? And number two, is there something else you're worried about even more?

1:05:36.4 YC: Yeah, I'm worried about that, and then some more. As far as deepfakes or misinformation, I would have been worrying about it even without AI deepfakes, actually; there's a lot of misinformation that people easily believe in, and there's weird, like, I don't know, made-up health benefit information to sell weird stuff to people. And some people believe it and they buy it. So even without political problems, this has been a human problem with or without AI, and AI might be able to accelerate it, which means that we kind of need, minimally, two ways to better handle this. One is to increase AI literacy, basically teaching people how to better understand the limitations of AI.

1:06:37.4 YC: I seriously worry a lot that there's too much media hype around AI capabilities compared to AI limitations, so that people are willing to believe whatever AI, whatever ChatGPT, tells us. So, there's that concern. But for handling misinformation more directly, we probably also need to think about solutions beyond AI, because I personally think it's just going to be impossible for AI to automatically detect misinformation. Even if some AI can be developed to detect machine text versus human text, humans can always edit on top of machine text to evade those kinds of detectors. So the technical solution shouldn't be an AI solution per se, but rather a platform solution. Maybe information should be certified with some kind of approach that tells you it is correct, or backed up by some organization that says it is correct, as opposed to just believing anything that floats around the internet as fact.
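To illustrate why such detectors are easy to evade, here is a minimal sketch, not from the conversation, of a perplexity-style machine-text detector of the kind Choi alludes to. It assumes the Hugging Face transformers library and the public GPT-2 checkpoint are available, and the threshold is a hypothetical value chosen purely for illustration.

```python
# Minimal sketch of a perplexity-based "machine text" detector.
# Assumes the Hugging Face `transformers` library and the public GPT-2 model;
# the threshold below is hypothetical and purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to input_ids, the model returns the average
        # next-token cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))


THRESHOLD = 30.0  # hypothetical cutoff, for illustration only


def looks_machine_generated(text: str) -> bool:
    # Very low perplexity suggests text the model finds highly predictable,
    # but light human editing can push a passage past any fixed cutoff.
    return perplexity(text) < THRESHOLD
```

Because the score only measures how predictable the text is under one particular model, a few human edits or a paraphrase can move a machine-written passage past any fixed cutoff, which is the weakness being described here.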

1:07:52.0 YC: As for the bigger, other concerns: there are just so many concerns around AI right now, at least for me, because it is starting to work much better than before, but at the same time we don't know how to fix its limitations or failure modes, or the strange failures it can make under adversarial attacks like jailbreaking. So as AI becomes stronger while we still don't know how to fix these error cases, there's a lot of concern around it. And then in addition to that, there's concern about AI actually making a lot of decisions that have moral implications, or marginalizing values that belong to different people, because it might only support...

1:08:49.8 YC: Currently, by the way, ChatGPT is a left-leaning, Western-viewpoint model. That can please some left-leaning people who hold a Western viewpoint, while a lot of other people who are even more left will feel that this AI is not left enough for them, and right-leaning people will feel they're excluded. So, you know, there are a lot of concerns all around. We're living in this hot mess right now.

[laughter]

1:09:20.6 SC: I think so, yes. I mean, I guess I did have this idea that there could be like an overlay or a filter on social media where the AI passed some judgment on different things, saying, yeah, this is probably fake, or this is probably real, but it sounds like you're a little skeptical that that would be very accurate.

1:09:43.2 YC: Yeah. Not only is it not going to be accurate, the labeling becomes a political game too. You'd be surprised: building such an AI requires you and me to agree on whether something is fake news or not, and that can be a challenging task when, say, a former president's claims about election fraud are themselves being argued over as fake news or not. In fact, some people don't believe the Holocaust happened. So this becomes, in part, a political argument. But I do think that although getting consensus on truth labels is hard, we still have to work on it. We have to somehow find some sort of consensus around it in the coming years. It's almost as if the AI challenge has become a challenge for both AI researchers and people outside the field, all of us.

1:10:54.7 SC: Well, exactly. And so I guess that's the last question I'm gonna ask which is, at some point in your life you decided to study computer science and you've been successful at it, but now you're in a world where you need to interact with philosophy and psychology and art and journalism and politics. Is this like exhilarating and you're so glad that it's like this? Or do you sometimes say like, I just wanna do my computer science?

[laughter]

1:11:20.3 YC: No, I think I always had a bit of a fascination with things outside AI, things outside computer science. In fact, the reason I was drawn to AI is because it felt like it's about humans, about language and culture and everything that humans do. So in that sense, I'm excited that now I have an excuse to learn more about philosophy and cognitive science. In the past that would have been a little bit disconnected from mainstream AI, whereas now it's becoming a relevant, immediate interest in the AI field, so I find that quite exciting, really.

1:12:05.5 SC: That is wonderful. That is an optimistic place to end, which I always like to do. So, Yejin Choi, thanks so much for being on the Mindscape Podcast.

1:12:15.3 YC: Thank you so much for having me. This was such a fun conversation.

[music]

7 thoughts on “248 | Yejin Choi on AI and Common Sense”

  1. Great conclusion to the podcast by Ms. Yejin Choi: 'And thanks to you for this fun conversation'. I am sure she also spoke on behalf of most of your listeners, certainly myself.

    Again I have come to the conclusion that LLMs, like probably most deep learning AI models, have automated the art of forgery. This makes these models comparable to the famous Vermeer forger Han van Meegeren.

    https://en.wikipedia.org/wiki/Han_van_Meegeren

    He could not have fooled experts for years without the paintings of the true creative genius Vermeer being available to him.

  2. We have to be careful in dismissing AI capabilities. Only at the end was the crucial point mentioned: it is just that we have so far found a solution for only one of the subsystems of the mind. It is a solution so good that we forget that LLMs do not learn (after the initial training), do not remember anything (other than the current conversation), and do not self-reflect or play out counterfactuals in their minds like we do. Their world building is rudimentary at best. We "just" have to find a way to put subsystems together (LLM + memory + ??) like they are put together in our brain. It could take decades, but it could happen next year.

  3. Pingback: Sean Carroll's Mindscape Podcast: Yejin Choi on AI and Common Sense - 3 Quarks Daily

  4. Maria Fátima Pereira

    Thank you both for this good episode.
    I am now better informed about the capabilities and limitations (so far) of A.I.
    I appreciated the "common sense"!
    It is already the present and will be the future!
    National and international legislation on everything surrounding this is urgently needed, in order to minimize unnecessary "risks."

  5. I wish there had been discussion of the often-repeated comment from AI developers that they don't know how their AI programs work. The complexity and the deep levels of processing make it impossible to be transparent about the way an AI like ChatGPT produces its responses. What are the implications of this inability to trace the path that produced the text? Will legal safeguards now being developed that include a transparency requirement (as in the U.K.) create a hurdle that is impossible to clear?

  6. ChatGPT is an excellent demonstration (through extensive training) of selecting the next word after a sequence of words. But often people think in images and then summarize the resulting thoughts with words. I'm looking forward to ImageGPT; however, it is suggested that image sequences require a much larger vector space, so that may be quite a bit further out in time.
    I have heard that when asked if machines will ever become sentient, Steve Wozniak had said that it will never happen because, “No machine will ever feel what I feel when I see a dog that is happy or the tears I cry when I see an animal that’s been rescued.”
    As touched on in the podcast, it seems the main part that is absent is something like endorphins (perhaps compudorphins), where the computer is not just trained to dutifully adjust weightings to match probabilities from a huge number of sentences, but also to maximize a utility function (a happiness) based on its homeostasis (to borrow a word from the Antonio Damasio podcast) or its status. Maximizing compudorphins would in effect be a self-reflection.
    I had a chat with ChatGPT about this. It helped me to think a little more broadly about this. Right now, ChatGPT is moderately predictable, but if the response of the computer depends on the existing state of the computer and the direction of that state, one should expect far more diverse results. The level of compudorphins would affect the computer’s decision-making process, nudging it toward self-interest, motivation, perhaps curiosity/reluctance, and computer-motivated changes in direction. The computer may not work when we want it to or to answer the question we ask.
    Most children go through a rebellious phase (display behaviors such as questioning authority, seeking independence, testing boundaries, engaging in risky or defiant actions, and forming their own opinions and beliefs).
    Still, I think that Yejin Choi's question (how will computers know what question to ask, or perhaps what experiment to do, in order to learn what they don't know?) is a very important one. This may require the broad goal and evaluation of learning to be part of the compudorphin utility function.
    ChatGPT suggests safety constraints be used.
