Episode 46: Kate Darling on Our Connections with Robots

Most of us have no trouble telling the difference between a robot and a living, feeling organism. Nevertheless, our brains often treat robots as if they were alive. We give them names, imagine that they have emotions and inner mental states, get mad at them when they do the wrong thing or feel bad for them when they seem to be in distress. Kate Darling is a researcher at the MIT Media Lab who specializes in social robotics, the interactions between humans and machines. We talk about why we cannot help but anthropomorphize even very non-human-appearing robots, and what that means for legal and social issues now and in the future, including robot companions and helpers in various forms.

Support Mindscape on Patreon or Paypal.

Kate Darling has a degree in law as well as a doctorate of sciences from ETH Zurich. She currently works at the Media Lab at MIT, where she conducts research in social robotics and serves as an advisor on intellectual property policy. She is an affiliate at the Harvard Berkman Klein Center for Internet & Society and at the Institute for Ethics and Emerging Technologies. Among her awards is the Mark T. Banner Award in Intellectual Property from the American Bar Association. She is a contributing writer to Robohub and IEEE Spectrum.

0:00:00 Sean Carroll: Hello, everyone and welcome to The Mindscape podcast. I'm your host, Sean Carroll. And today we're gonna be talking about robots. We've talked about robots before on Mindscape, it's a natural topic to imagine, but usually we're talking about what kinds of robots there are, what they might be doing -- today we're going to be focusing on the human side. What do human beings do, what should they do, when they interact with robots? How should we think about robots when we're dealing with them?

0:00:26 SC: Today's guest, Kate Darling, is a researcher at MIT's Media Lab. She's not an engineer who builds robots; she actually comes from a social-science background. She's interested in what we should do with the fact that we tend to treat robots as if they are human beings. We anthropomorphize them. We assign, attribute, feelings to them, and emotions, and ideas to robots, even when we know they don't have them. You don't need to be looking like a human being, as a robot, to get another human being to treat you like a person. We treat our Roombas like people, our laptops like people. So what's up with that? What does it mean for the future? This is an area where, as very often when technology is advancing, the legal system, and the philosophy, and how we think about these things, lags behind. So anything we can do to get into our brains what's going on ahead of time will be very useful.

00:01:20 SC: There's one other thing, a big announcement that I have and wanted to share with you, which is that it looks like Mindscape will be getting ads, getting advertisements at some point. We're not absolutely sure, and I don't exactly know when that's going to happen, but it seems to be the direction in which we're moving. I don't know when it's going to happen, but I'll let you know ahead of time. I did go back and forth on this, I will be honest. Because obviously, ads are good because they get money, and I like money. I will never, you know, try to pretend otherwise. On the other hand, you have to admit, they kind of get in the way of the actual substance of the podcast, I won't try to pretend that either. What pushed me over the edge into agreeing with the ad thing was the idea that it could get a lot more listeners. It's not that I'm just getting ads, it's that I'll be joining a network that will publicize the podcast elsewhere, bring it to new audiences. And that, I really think is important. Maybe I'm flattering myself, but I enjoy what we're doing here. I think it's interesting, it's important. I want to share it with as many people as we can. A big part of that is just getting the word out, and being part of a network would be a huge benefit to that. It won't change the substance, it won't change who I get, how long the podcast is, what we talk about -- any of those things. It will just be a new platform on which the podcast will appear. So I'll keep you updated on that.

00:02:39 SC: So who knows? Robots! Maybe the robots will be reading the ads. Maybe someday all podcasts will just be invented by robots and then consumed by robots. Maybe the human beings will just rake in the bucks at the end of the day. I don't know, that's what we're here to find out. So let's go.

[music]

0:03:23 SC: Kate Darling, welcome to the Mindscape podcast.

0:03:25 Kate Darling: Thanks for having me, Sean.

0:03:27 SC: So you're at the MIT Media Lab and you work on robots. So I think that for many of the people in the audience, they will instantly assume that you spend your time building robots or programming robots or something like that, but that's not what you do. In fact, you come from a legal background. So tell us a little bit about how you got here, and sort of what your job is there in the robotic space.

0:03:48 KD: Yeah, I sadly don't build robots. I'm very bad at that, although I have tried. I made a little solar powered robot that...

0:03:55 SC: Wow! That's more than me.

0:03:57 KD: It moves... It charges itself and then it moves a little bit. And that was my great accomplishment in robotics.

[chuckle]

0:04:07 KD: But yes, it's correct. I have a legal and social sciences background and the way I got here, the short version is I love robots, and found a way to do something with robots. But the longer version is probably that I have always been interested in how systems shape human behavior. And so that's what originally drew me to law and later to economics and then to technology. Because all three of those are systems that shape human behavior.

0:04:40 SC: Sure, but you have a wonderful story that I'm sure you've told hundreds of times about the first time that you started thinking about how human beings react to robots.

0:04:51 KD: Yes, yes. I was still a law student at that point, and I had bought a Pleo, which is this baby dinosaur robot that came out in 2007. It's this Japanese toy. I bought it because like I mentioned, I've always loved robots. This was a really cool one. It had all these motors, and touch sensors. It responded to your touch. It was supposed to develop its own personality depending on how you treated it. It had this infrared camera in the snout. And then one of the things that it had was a tilt sensor so it knew what direction it was facing. It knew when it was upside down and it would respond to that by mimicking distress and crying.

[chuckle]

0:05:36 KD: And it was a... It was a pretty realistic depiction of being in pain and I thought that was super cool and I would show it off to my friends and be like, "Hold it up by the tail, see what it does." And one of my friends held it up for a very long time, and it started to bother me. So I asked him to put the robot back down and then I started to pet it to make it stop crying, and that just kind of blew my mind because I wasn't a very maternal person, but also I knew exactly how the robot worked and I was compelled to be kind to it anyway. And so, that really sparked my curiosity about human-robot interaction and our psychology around interacting with robots, and I soon discovered that it's not just me.

0:06:27 SC: Yeah, and it's interesting because it's teaching us, at the surface level at least, it's not... That insight doesn't teach us anything about robots, it teaches us something about human beings, right, and how we interface with the world.

0:06:41 KD: Absolutely, I think the most fascinating thing to me about robotics is what robots can teach us about human psychology, human behavior and interacting with each other.

0:06:52 SC: So what do we know... I mean, can we at this point identify the features that your little dinosaur had that made you connect with it? There's a whole gamut of different kinds of robots from Roombas to self-driving cars. Maybe we should even count the little Tamagotchis, the little pets that you know they sold to kids, and at different levels we anthropomorphize these for some reason or another.

0:07:18 KD: Oh yeah, and it's not just robots, either. We have this inherent tendency to anthropomorphize anything. You know, one of the first things a baby learns to recognize is a face, whether that's a real face or just an image. And we have this deeply ingrained tendency to project ourselves and our own human-like qualities onto other entities, whether that's our pets that we... We see the dog looks guilty, whether the dog...

0:07:47 SC: Oh yeah.

0:07:47 KD: Actually looks guilty, we don't really know. Or I recently read the example that people will see a monkey yawning in the zoo and will be like, "Oh, the monkey's bored." When really, it's just showing off the teeth that can rip your face off.

0:08:00 SC: Right.

0:08:00 KD: So, we will often just make all these assumptions about others. Kids develop relationships to stuffed animals. We will... We respond to a lot of different things. One of the things that we respond pretty strongly to is movement. We've been conditioned through evolution to be able to recognize whether something is an agent and moving or whether something is an object. Because we needed to detect natural predators, at least that's the reason that the evolutionary psychologists give us, but it's definitely true that studies show that we're very quick to detect animal movement and autonomous-seeming movement, much more quickly than other types of movement. And the thing about robots is that they move in exactly that type of autonomous way. So, robots are physical, they exist in the physical world, which we kind of respond to as physical creatures. They move in this autonomous way and then there are different design elements that you can layer on top of that to make people really, really respond to robots as though they were a living thing. So the baby dinosaur with the cute big eyes, being...

0:09:14 SC: Right.

0:09:15 KD: One example. But you know people will even have a response to the Roomba like you mentioned. Do you have a Roomba?

0:09:21 SC: I do not have a Roomba, no. Do you have one, are you attached to it? [chuckle]

0:09:26 KD: I used to have one. We did name it, which a lot of people do. And I was just talking to the people from iRobot that made the first, the Roomba, the first real successful robot vacuum cleaner. And they, you know, they say most people name their Roombas. They'll send them in to get repaired, and they'll want the same one back, they'll be like, "Oh, Meryl Sweep is broken. We don't want a new one, we want you to fix Meryl Sweep." So people really... And people feel bad for their Roomba when it gets stuck somewhere. So even just a very simple robot that just moves around your floor will cause people to have this emotional response.

0:10:05 SC: So, it's probably multi-faceted, right? Because I think, I don't know anything about Tamagotchis, but I do remember hearing about the fad when they came out, and people would care for their little virtual pets, and they were not embodied, right? They were just on a screen. Have studies been done about the different roles of being anthropomorphic, having a face, moving, all these different aspects, and how they play into how we project organic nature onto these robots?

0:10:36 KD: Well, yeah, there are entire fields that research this. So, the precursor to human-robot interaction is probably Human-Computer Interaction, which looked at, among other things, the ways that people treat computers like social entities, and like...

0:10:56 SC: We certainly blame them when things go wrong. Let's put it that way.

0:11:00 KD: [chuckle] Well, we blame them. Yeah, but people are also polite to them. There's this really great study that Cliff Nass and some other people did back in the day, where they showed that if you do a task on a computer, and then you're asked to rate the computer's performance and you're asked to rate it using a different computer, you'll be more honest and more negative about how the computer performed than if you're asked to do it on the same computer, because you have this instinct that you don't wanna hurt the computer's feelings.

0:11:29 SC: Wow.

0:11:29 KD: So people are... We are such suckers for anthropomorphizing anything. You know, you mentioned the Tamagotchi. There's also that video game Portal that was really in, like, 12, 15 years ago. I don't remember. In the game, you have this companion cube that comes with you, that's just this cube that's just with you in every level. And then at the very end of the game, I think it's safe to spoil it, given how old the game is, you're supposed to incinerate the companion cube to complete the last level. And the game designers were surprised to see that a lot of people would sacrifice themselves instead of burning up the cube, just because we do... We do become attached even to virtual things, but physicality is an additional layer to that, because there have been studies that show that people will treat something in their physical space with even more empathy and even more like a social actor than something on a screen.

0:12:24 SC: Interesting, and so I will prod you to give the other example that I've heard you give, not quite a study that you did, but where you asked a bunch of people to sort of bond with some robots and then you asked them to do horrible things.

0:12:38 KD: [chuckle] Yes, this is with my friend Hannes Gassert. We took the baby dinosaur robot that I now have four of at home. We took five of them, we did this workshop at a conference, I think we had 30 people and we split them into five teams, and gave them each a robot, and told them to name the robot and play with it and personify it a little. They built hats out of pipe cleaners, and did a little fashion contest, and then after 45 minutes of them bonding with this robot, we unveiled a hammer and a hatchet, and we told them to torture and kill them.

[chuckle]

0:13:15 KD: And it was so dramatic that we actually had to start improvising, because we... We had kind of expected there would be a split in the room, like some people would be like, "Oh sure, I'll hit it. It's just a machine." And some people would be like, "No." And we wanted to escalate the violence, and see if that split of people changed. That was our original plan, but with this particular group, this was all adults between probably 25 and 40 years of age, and they all absolutely refused to hit the robots. So, we had to be like, "Okay, what are we gonna do? Okay, you can save your group's robot if you hit this other group's robot with a hammer." And so they tried to do that, and even that they couldn't, they just couldn't do it. So we were finally like, "Okay everyone, we're gonna destroy all the robots unless someone takes a hatchet to one of them." And this guy stood up and took the hatchet, and the whole room stood around, and we watched him bring the hatchet down on the robot's neck, and it was very dramatic. People winced, they turned their faces away. We could see this on the photos that we took on our phones afterwards. It was really interesting.

0:14:27 SC: Wow.

0:14:28 KD: And then there was this half-serious, half-joking moment of silence for the fallen robot. So, super dramatic workshop, very interesting. Not science, like you mentioned, this was just a workshop. There was so much going on there, but it did make me very curious what would happen if we looked at some of the factors in a more scientific setting. So, it led to some research that I did later on at the Media Lab with Palash Nandy and Cynthia Breazeal, where we were looking at the correlation between people's empathy and their willingness to hit a robot.

0:15:01 SC: Oh, okay, good. Well I definitely want to get to that, but first, this was done at a conference. Did the... The people there were robot people, I guess or what kind of people were they, were they just people off the street?

0:15:12 KD: I don't know, it was... One of it... It was called Lift. It was one of those innovation conferences...

0:15:17 SC: I get it, okay.

0:15:17 KD: That brought together designers and technologists and all sorts of people.

0:15:21 SC: Okay, and part of me wants to say that our immediate reaction, when we're not in the room killing the robots, we're hearing about other people who are reluctant to do it and our immediate reaction is, "Oh, that's silly. It's just a robot." 'Cause we're outside the milieu. But in some sense, it's a feature, not a bug, right? It's part of who we are, that our brains attribute meaningfulness, and moral agency, even to objects that we know are completely inanimate.

0:15:53 KD: I am glad you feel that way. I do as well.

[laughter]

0:15:57 KD: There are some people who say, "This is a bad thing, and we need to discourage it." We need to educate people that they're dealing with robots, and to some extent I think that there are some problems that can arise; for example, people's behavior might be manipulated through technology, and we might design technology in order to sell people products or even worse. And so those are some effects that I think we need to be cautious about. But generally I do not think it is a bad thing when a child is kind to a Roomba. I love that people's first instinct is to be kind to another entity, and I think that's part of humanity, and I think that's something that we should encourage and not discourage.

0:16:47 SC: Yeah, and empathy... I had a podcast interview with Paul Bloom recently, and it's clearly been a very good podcast 'cause I keep referring to it in subsequent podcasts. But he wrote... He's a Yale psychologist and he wrote a book called Against Empathy, and his point is that empathy is usually very unevenly applied. We have a lot of... It's much easier for us to be empathetic with people like ourselves, and that distorts our view of the world. We're nicer to people like ourselves, and we're kinder, more just, and so forth. And so, instead of being empathetic he wants us to just be rational about what it means to be moral or immoral. I pushed back on that a little bit because I actually think that empathy is very important to being a rational and moral person, in the sense that it becomes too easy to ignore people unlike ourselves when we don't try to have empathy with them.

0:17:42 SC: Sure it's easier to be empathetic with people like ourselves, but it's necessary and important to try harder to be empathetic with a wider range of people. I was not at the time, thinking about robots at all though, you're saying that there's some kind of relationship between how people interact with robots, and how they're more widely empathetic, is that true?

0:18:03 KD: Well, that's certainly what our very preliminary research indicated. We did look at the correlation between people's tendencies for empathy and how willing they were to hit a robot, and there seemed to be a connection there: people who have very low empathic concern for others are much more likely to hit a very lifelike robot. And people who have high empathic concern for others are more likely to hesitate or even refuse. I find Paul interesting because I both agree and disagree with him, kind of like you. I think that, yes, it's true that it can be problematic that we only relate to people who are like us, or things that are like us. It can be problematic that we relate to robots because we see ourselves in them, and not to certain other people that we don't see ourselves in. We've dehumanized entire swathes of other people.

0:19:10 KD: And of course we don't wanna do that in favor of things like robots that don't inherently deserve our empathy, but there does seem to be something to the emotional part of empathy in that it kind of lights a fire in you and makes you care at all. Like you said, I think that it's too easy, if you're a completely rational human, to just not even care. [chuckle] And so, I've noticed myself since becoming a mother that I've become so much more empathic generally, not just towards other mothers because I've had this experience, but for example in Boston, in the winter I was walking around with a stroller, and people sometimes don't shovel the sidewalks in front of their houses, and I was like, "What if someone is in a wheelchair?" [chuckle] I suddenly realized how awful it must be to be a disabled person in Boston. And so I feel like my... This emotion that I have extends and makes me try and think of other people as well, in situations where I might not have cared before.

0:20:20 SC: Yeah, the flip-side of both of those things, in your example of having become a mother and the fact that we feel empathetic towards robots, is that it points... It reminds us of how involuntary a lot of who we are actually is, right? How much a lot of who we are is a bunch of impulses that come up from beneath the surface, not rational, careful cogitation about right and wrong at a highly philosophically sophisticated level.

0:20:50 KD: Oh, absolutely, and I think that makes it somewhat uncomfortable. It's somewhat an uncomfortable realization, but I think it's very important to understand how our relationships work, why we empathize with another entity, how communication works, and that relationships can be very one-sided. We're learning so much about ourselves by looking at how people interact with robots or even computers. Because we're starting to realize that it's all about ourselves and [chuckle] all about us projecting ourselves onto others, and we might learn more about our human relationships... Human-to-human relationships in this as well.

0:21:29 SC: Right. I mostly wanna talk about that side of things, but you did bring up, you tossed off the comment that we know that robots don't actually deserve our empathy. So obviously, I'm gonna ask, do we really know that? As robots become more and more lifelike do we reach a point where we start saying they do deserve our empathy?

0:21:52 KD: Sean, when I first got to MIT, I was like, "Oh yeah, I'm gonna be in the place where they're developing the cutting-edge robotic technology, I'm so excited," and then I got here and all the robots are broken, or they're falling over, and we're nowhere close to developing a robot or an AI that can feel anything.

0:22:16 SC: Yeah.

0:22:19 KD: I do think that it's worth considering whether we need to treat robots a certain way because it might have an impact on our own behavior, but I don't... I feel like the whole "robots inherently deserve rights because they have consciousness" thing, that's a fun conversation to have in a bar over beer, but it's not really... I don't think we're gonna be there any time soon, it's not practical.

0:22:47 SC: Yeah, and I agree, and I think that most of what you are thinking about isn't even trying to get there, right? You're not trying to build the most human-conscious robot, or even talking to people who do, you're interested in how we deal with robots that are by complete agreement very mechanical in their insides, right? That there's no sense in which they have feelings?

0:23:10 KD: Absolutely, and we... They don't feel anything, but we feel for them, and that's the interesting thing.

0:23:17 SC: And so, what are the applications here? What good is a robot? What good is it knowing that we care about robots?

0:23:26 KD: There are a couple of interesting applications. There have been some very hopeful results in health and education with using robots. For example, for quite some time now researchers have been looking at how to engage autistic children with robots because they've noticed that kids on the spectrum will sometimes respond to robots in a way that we haven't really seen before. And the leading researchers in the field say that this is probably because a robot is... It's a very social thing that they treat as a social actor, but it doesn't come with all of the baggage that an adult or another child would have. And so...

0:24:07 SC: Very sympathetic.

0:24:09 KD: Yeah, yeah very sympathetic robots. Yeah, I agree with that. Maybe not just kids on the spectrum. [chuckle] But the interesting thing is not only will they engage with the robot, they will engage more with other people in the room as well, if you bring in a robot to play with them. And until now, it's been kind of these one-off studies in the lab, that they've done, but they just recently, last year, did a longer-term study in people's homes, where they put robots in kids' homes, that they interacted with for half an hour every day for a month, and their social skills went up. In a way that just...

0:24:50 SC: Wow.

0:24:50 KD: On every measure that we care about for these kids, normally, this would take thousands and thousands of dollars worth of therapy to get those results, and they got it by having them interact with a robot. Now, the catch was that the skills decreased again when they removed the robot at the end of the study. But that shows that there's so much potential here. Another example I like to make is...

0:25:12 SC: Sorry, can I just ask, before that.

0:25:13 KD: Oh yeah.

0:25:14 SC: So, what kind of robot was it that they were interacting with?

0:25:19 KD: I think they were using the Jibo platform. Jibo was a consumer product for a few years but recently shut their doors, but it's still used as a research platform. So, I think they were using a Jibo.

0:25:31 SC: Was it something that looked like a human being, or looked like a pet, or looked like a fantasy creature?

0:25:34 KD: Oh yeah, the Jibo looks kind of... I think the closest thing would be like the Pixar lamp, so it's...

0:25:40 SC: Oh, okay, okay.

0:25:42 KD: It has a head that swivels and looks at you. And it has this animated dot in the middle of the head that... The head kinda looks like a fried egg, but it's very cute. It will move around, it really gives you the sense of interacting with a social being. It's actually really cleverly designed, because when you try to make a robot too human-like or too close to something that people are intimately familiar with, it really disappoints people's expectations 'cause they're expecting it to behave a certain way, whereas if you have this Pixar-animated thing, people are much more likely to suspend disbelief.

0:26:23 SC: But also it's interesting because one might have guessed that they would want something that was sort of tactile, like furry, like a cat or a dog or something like that. But this is manifestly technological.

0:26:35 KD: It is. There are furry ones, which is the second example I was gonna talk about. I suspect they used Jibo for this study because, like I mentioned, robots are always broken around here and so it's very, very difficult to do a study in people's homes where you can't go and fix the robot and have that consistency that you need for a long-term study. And so they needed to use a product that was stable enough to do that with and I think Jibo was... That's why they chose Jibo.

0:27:05 SC: Sure. And so what was this other study?

0:27:08 KD: So the other, not necessarily a study, but the other robot that I think has a really promising application is the PARO, a baby seal robot. Have you seen that one?

0:27:19 SC: No.

0:27:19 KD: So this one has been around for quite some time. They use it in nursing homes and with dementia patients. It's really cute.

0:27:27 SC: I might have seen it actually. Yeah, now that you've mentioned it.

0:27:29 KD: Yeah, and it's been on TV shows. It was on the show Master of None. It's gotten some fame. But it's super cute. It makes these little movements. It gives you the sense of nurturing this baby seal. This one's furry and soft to the touch. And it turns out to be really important for people who are in a situation where they're just being taken care of by others, that's their whole life now, to be given the sense of nurturing something. That's psychologically really valuable for them. And I know it sounds super creepy and people are like, "Oh, that's so weird and gross, that we're giving old people robots instead of human care." But what it really is, is a replacement for animal therapy, and it's been very promising. They've been able to use it instead of medication to calm distressed patients, and so it's really hard to argue that that's a bad thing, especially if you stop considering it a human replacement and start looking at it as animal therapy in a context where we can't use real animals for a lot of reasons.

0:28:31 SC: Yeah, I mean, I can't imagine people objecting to this. I don't get it. Why would you object to this? It's a therapy. Is that sort of the dark side of this fact that we're reacting on the basis of our instincts, not our rationality, and people have a reaction that there is something intrinsically off-putting or inhuman about a robot compared to some other strategy?

0:28:55 KD: Well, I think people are just struggling with this. Like in kind of Western Judeo-Christian society, we have this really stark divide between things that are alive and things that aren't alive, and so our rational brains are like, "Oh, they're robots, they're machines. They're not alive." And yet our subconscious behavior towards robots is very much treating them like living things, and I think that's just very confusing to people and we haven't really sorted that out in our culture yet.

0:29:23 SC: Yeah. For the autistic kids, is there any more detailed explanation of what it is that... Where the benefit is coming from? Why is it that the autistic kids are becoming more social in the presence of this robot?

0:29:37 KD: I don't know. I'm not intimately familiar with that. I'm not sure that the researchers actually know, for example, why the robots worked so well with these kids, but it's certainly an opportunity to learn those things by studying this in more depth.

0:29:55 SC: Right. I mean, for the dementia patients or for the elderly, you offered an explanation which makes perfect sense to me: that there's something that you get value from, not just being taken care of, but taking care of something, right? You are nurturing something that is valued. I suspect, having no expertise here whatsoever, that a lot of... That we do a disservice to a lot of elderly people by removing all of their responsibilities and authority, right? Like just by trying to take care of them and put them in a very soft space where they can't do anything or be hurt, and giving them a little bit more autonomy would be better along many dimensions.

0:30:31 KD: Yeah, and I guess people don't like it because they say it's fake autonomy, like they're not actually nurturing something and so they don't think it's authentic, but I feel like the way that we treat people now... We stick them in front of a TV or we medicate them or like you said, we give them no autonomy. I think that this is so much better than what we're currently doing.

0:30:55 SC: And on the other side, there's the fact that like it or not, we're being surrounded by robots, right? Like forget about using them for therapy or whatever, whether we're working in factories or driving cars or in our kitchen, do we human beings have to adapt to a new sort of way of dealing with the world when so much of the world is robotic and able to move around?

0:31:17 KD: Well, yeah, of course. I mean that's also why I'm confused. There are some people who are like, "Oh, it's bad that we treat robots like living things. We need to educate this out of people." But I think that's just gonna be the new normal, really, because robots are moving into shared spaces right now. They've been behind factory walls and now they're coming into all of these new areas, like you said, into people's houses, into people's workplaces, into transportation systems, hospitals, the military. And I think we have to roll with it and try to think about whether there are any harms that could come from people's interactions with robots, whether we need to think about consumer protection issues, for example. But I really think we need to be leaning into the positive effects as well, with the knowledge that this... I know this sounds like technological determinism, but it's true, these robots are coming.

0:32:16 SC: It's coming. No, I'm with you there. Yeah.

0:32:19 KD: You can't stop that.

0:32:19 SC: I mean, do you... You know much more about robots than I do. Do you know... Do you have a picture of what it's gonna be like 50 years from now in terms of how robotized our daily lives are gonna be?

0:32:28 KD: Fifty years from now? Oh, that's a tough one.

0:32:31 SC: You can change the number, but yeah.

0:32:32 KD: Like if you go 50 years back to think of all the changes, oh my goodness. Well, I do think that we are gonna see more robots in households and workplaces than we do right now. Like right now, we're at the very beginning of people like having a Roomba, having an Alexa. Recently some companies that were trying to create household robots that do a little bit more than those two things have closed their doors. They tried and failed, but I think that they're just slightly before their time. I think we will have more and more robots in the household. They may not be Rosey from The Jetsons, they may be more single-task robots, but that's definitely gonna be a huge shift, I think. We are entering an era of human-robot interaction.

0:33:30 SC: Right. And it's interesting to me that we treat them... We keep saying anthropomorphic, but that's not exactly right, right? Because we don't necessarily treat them like people, but it's like animals, right? Whether or not they are explicitly made to look animal-like... So once we get not, like you say, Rosey from The Jetsons, but just super Roombas, it's gonna be inevitable that we make them more and more human in our minds.

0:33:56 KD: I think so. Well, I'm so glad you mentioned animals though, because I think one of the fallacies that we still have is that we constantly compare robots to humans and artificial intelligence to human intelligence. Whether that's in stock photo images, if you do a Google search for AI, you get all these human brains, or whether that's talking about robots and job replacement, I think we're still very much thinking of robots as recreating human abilities and human tasks, whereas it's so much better to think of them as an animal equivalent. I'm doing research for a book on this right now that looks at all the analogies between our history of animal domestication and how we're integrating robotic technology now and in the future, because it's a much better analogy, given that AI has such a different skill set than we do, and we could be partnering with it instead of trying to replace ourselves.

0:34:54 SC: Right, right, and that reminds me of... So I'll say something controversial here and I'll get comments I'm sure. But there's the issue of how we treat animals just as there's the issue of how we would treat robots, right? And I have a lot of vegetarian followers. I'm not a vegetarian myself but I get that this is an important issue and we should think about it and I try to be open-minded and listen to both sides. But one argument is that if you had... If you raised a pig, let's say from a little piglet and it became your friend and your pet, then you wouldn't want to kill it and eat it. And actually, I agree with that, and I don't think that it's a contradiction or I don't think it's a moral failing. I think that it's just the fact that we grow attached to things, animals, robots, people that we get to know. And the problem with killing it is not that there is some intrinsically bad thing about killing it, but it's that our feelings are hurt when that happens, and I think that's a perfectly sensible moral way to live.

0:35:57 KD: Wow, yeah, that is controversial Sean.

[laughter]

0:36:03 KD: I agree and disagree with you.

0:36:07 SC: Alright.

0:36:07 KD: So I think it's true that if you look at the history of animal rights and how we've treated animals throughout history and today, we are such hypocrites, and we love to tell ourselves that we care, that animals have consciousness or that they suffer, but if you look throughout history at which animals we've protected and which animals we don't really care about, we don't really decide according to biological criteria. I grew up in Switzerland. People eat horse meat in Switzerland. In the States that would... People would never eat horses.

0:36:47 SC: Yeah.

0:36:48 KD: We have too much of an emotional connection to horses, but if you tell that to a European, the European is like, "Well, what's the difference between cows and horses? They're both delicious. Why not eat them both?" So culturally...

0:37:00 SC: Right. So I'm actually... I'm trying to argue for both sides. I wanna say the Europeans are completely right. What is the difference between horses and cows? But I wanna say, "Look, as an American, if the culture you grew up in says that you are turned off by the concept of eating horses, then don't eat them, right?" I completely agree that we are hypocritical in practice. I'm not in favor of pain and suffering, and I think that we're terrible to animals, especially farm animals and so forth, and I'm all in favor of attempts to be nicer to the animals that we do raise for livestock, but that's a different moral dilemma than whether or not it's okay to eat them if they are raised humanely.

0:37:46 KD: I mean... So you think that just because we are hypocrites, it's okay to eat them?

0:37:56 SC: No, I think it's okay to eat them. I think that the thing that is not okay is mistreatment, right?

0:38:02 KD: But you're okay killing them.

0:38:04 SC: Yeah. Yeah.

0:38:06 KD: So do you make sure that the meat you eat is humanely raised?

0:38:11 SC: Well, I prefer it, but I'm also a realist about whether the actions I can take will change the world. I would like laws that make it not okay to mistreat animals. It's the same thing with energy conservation, right? Or water conservation: I'm in favor of social collective action to fix these problems, not me trying to be individually virtuous.

0:38:31 KD: That's a good answer. It's a good answer. I have to say, though, I have too much of Paul Bloom's emotional empathy when it comes to animals. I've been a vegetarian since January.

0:38:42 SC: Oh, okay. Good for you.

0:38:43 KD: So... Yeah, yeah, yeah. I can't do it, you know? I was breastfeeding and realizing that many of the animals we eat also nurse their kids, and I was just like, "I can't, can't eat them anymore." Just...

0:38:58 SC: I will... Yeah, look, it's a very emotional issue, and I think it's okay and it's in flux. I did see someone I was following on Twitter, I won't say who, tweeted something about how it's better for the planet if you eat more plants and less meat. And there were so many responses in the comments along the lines of, "Don't tell me how to live my life, I'm gonna have a big old steak." And just comments that were pictures of steak, and they were like, "I eat steak, this is not a thing." But people who eat meat are just as much sensitive, defensive snowflakes as vegans or vegetarians are, so I don't think there's any moral high ground anywhere around here. [chuckle]

0:39:37 KD: Absolutely. I think they're defensive because they know there's no justification for eating meat.

0:39:41 SC: Alright, well that's possible.

0:39:43 KD: Deep down inside, the way that I know there's no justification for my carbon footprint and flying all over the place like I do.

0:39:50 SC: But you do, right.

0:39:51 KD: I do.

0:39:52 SC: Yeah, so again, and I do it too, and I know it's bad. I know that me stopping doesn't change anything, really. So again, I'm in favor of collective action to fix this problem. But anyway...

0:40:01 KD: I agree.

0:40:02 SC: Okay. I think that it's important because it relates back to the robots because you said, and I think it's correct, we're nowhere near making artificially intelligent conscious robots that we would have to afford the same kinds of protections to that we would people, but in the living kingdom, in animals or plants, there's a continuum of consciousness and feelings. And we have to draw that line somewhere and eventually it's gonna become a much harder question in the world of robots, I would think.

0:40:32 KD: Sean, I have a question.

0:40:34 SC: Yeah?

0:40:34 KD: Why do physicists always wanna talk about robot consciousness? Is that a physics thing? It's always the physicists.

0:40:43 SC: I don't think so, I actually don't wanna talk about it. It's one extreme, right? One extreme is the Roomba. Another extreme is artificial intelligence, and I'm just making the point that there is a continuum between them and we are changing. So it's actually, I think the philosophers like to talk about it more than physicists 'cause they will instantly leap to the thought experiment, whether or not it's anywhere close to technologically feasible.

0:41:05 KD: Oh yes, it's true. Philosophers also like this.

0:41:08 SC: Yeah.

0:41:10 KD: It's true. Well, as a philosophical discussion, I think it's absolutely warranted and relevant. I just, like I said, I don't think it's very practical to discuss if we're looking at the near and medium-term future, or as far as my eye can see because we're not gonna have machines that approach the consciousness of anything that we believe would warrant rights.

0:41:46 SC: Right. Yeah, of a mouse. We're not gonna come anywhere close to a mouse, right?

0:41:50 KD: Yeah, and we don't give the mice rights, either.

0:41:52 SC: Yeah. No, exactly, that's right. So, perfectly fair, and I do think it's important 'cause of what the implications of your work and what you're studying is much more about how humans deal with robots that we all agree are not conscious, so there's plenty to be learned there. We've given the examples of autistic children, elderly dementia patients, but then we have to start talking about the sex robots, people using robots for... Perfectly healthy, grown-up people using robots for other kinds of purposes, let us say. So do you study that, is that something you learn about?

0:42:30 KD: I don't study sex robots. I personally would have no problem with it, but it's very hard to do sex research in America. I don't know if you've tried. [chuckle]

0:42:43 SC: I don't have to do either sex or drugs research, but I've been told by friends who do, it's very, very difficult in both cases. We're very puritanical.

0:42:49 KD: Yeah, you can't raise money for it, you get ostracized by your colleagues.

0:42:54 SC: Yeah.

0:42:56 KD: Also, the sex robots, I think they also get a disproportionate amount of attention. Like you said, "We have to talk about the sex robots. Everyone wants to talk about the sex robots." I think there's some interesting issues when it comes to sex robots, and it does tie into some of my questions around, does the use of technology influence human behavior? But there's also a lot of kind of sensationalist headlines around the sex robots. We don't really have any yet.

0:43:26 SC: Well, that's right, we have sex dolls, I guess. The reason why I thought of it was because of this issue that we don't need to make the robots look human in order to anthropomorphize them. I presume that for sex robots, since they don't really exist yet, it would be beneficial to make them look human, or at least some exaggerated version of whatever someone wants.

0:43:52 KD: Oh yeah, that's the sex robot that you want? Because if you go into a sex shop and you look at the sex toys made for women, a lot of them don't really look like... There's dildos that look like dolphins. They don't look like penises.

0:44:06 SC: I see. So women don't want it to look like men.

[laughter]

0:44:10 KD: I don't know, I don't know. I'm just saying there's a lot of different form factors out there, but I think, jokes aside, you're right that the sex dolls or the sex "robots" that we are seeing on the market do resemble female bodies and specific types of female bodies, which also raises some issues around objectification of women, and whether... I don't believe that sex robots are any sort of replacement for human sexual relationships, but they might be something that people who don't have human sexual relationships use because they're better than nothing, or they might be something that supplement people's sexual relationships. So I'm not necessarily concerned about humans being replaced here, but I am a little bit concerned about the form factors and we just don't know whether people's behavior towards sex robots is something that is a healthy outlet, for example, for sexual behavior that we don't want performed among humans, for example, pedophilia, or whether it's something that might perpetuate and normalize that behavior and make them want more of it. We don't know and, like I said, it's pretty impossible to research, so.

0:45:47 SC: Well yeah, I guess that's what I was gonna say. So, you're right, we don't know and you could easily imagine it going either way, right, that having this kind of robot either is an outlet and therefore, keeps real people safe from harm, or it encourages it. And it's an empirical question, you should study it, but you're saying that it's just hard, as a practical matter, to actually study things like that.

0:46:06 KD: It is. I mean, in other countries, like Germany and Canada, they are doing some research on pedophilia, which you absolutely cannot even do in the United States for a lot of reasons, including legal reasons. But as an empirical question, it's also a very difficult one to get at. It's like the violence in video game debate, it's like the pornography debate. We have some research that tries to get at those questions but it's not very conclusive. It's very difficult to do these studies, but I do think that robots warrant reconsideration of these issues because of their physicality. I think that that's, it's so much more immersive and visceral to engage with a robot than with something on a screen.

0:46:52 SC: And violence is another example, which is probably analogous to sex in some sense. We could imagine that certain people have violent tendencies, and maybe that could be ameliorated if they could take it out on robots, right? Or it would just encourage them. And that's very analogous to the video game debate also, but is that something where we're able to do research a little bit easier than the sex question?

0:47:19 KD: I mean, the research is... Well, anything is easier to do research on than sex in America, but it is a very difficult question. So you know, some of the research that's been done on human-robot interaction is starting to look at connections between people's tendencies for empathy and how they treat robots. My research, but also other people's work, is looking at this, but the question of whether interacting with robots changes people's empathy is a more difficult one that we would like to get at, and I'd like to think of ways to get at it. But it's going to take a while, and that's unfortunate because we kind of need to know, sooner rather than later, as robotic design gets more and more lifelike, whether we need to be policing people's behavior, or having age restrictions on certain robots, or even having legal protections for certain robots that restrict what people can and can't do with them, in order to prevent people from becoming desensitized to certain behavior.

0:48:25 SC: Yeah, so that... Do you collaborate with psychologists, sociologists? Are they studying these questions of how people's behavior can be changed by dealing with robots in different ways?

0:48:37 KD: You know, some people have reached out to me. I've stopped doing experimental research. I haven't done any for a few years, because I'm writing a book right now, but when I get back into it, I'm planning on getting in touch with people because, yeah, of course, this is very interdisciplinary work. You need people who understand the technology, people who understand the psychology, and that's another difficulty in academia. Even though, nowadays, we claim that interdisciplinarity is important, the institutional structures don't always support it that well. And I'm fortunate to be in a...

0:49:13 SC: Yeah, we know that, yeah.

0:49:14 KD: Well yeah. The Media Lab is pretty good at it, so I'm hoping that I can do some interdisciplinary work on this and collaborate with a few other people later on.

0:49:22 SC: Yeah, I mean, you mentioned, like, it's happening, like it or not. And as slow as academia is, the legal system is a whole other thing, right? I did have a podcast with Alta Charo, who is a bioethicist and law professor, and biology is an area where things are changing very rapidly, with gene editing and designer babies and so forth, and the law is always slow to catch up. Do you think that... How good a shape is the law in when it comes to our future robot interactions?

0:49:56 KD: Oh, terrible, it's terrible. And sometimes it's good that the law is so far behind. Like we would not have the internet, as we know it, if legislators had gotten their hands on it early. At the same time, now we're dealing with, you know, some consumer protection issues and privacy issues and other issues because the law is still behind, even though we've had the internet for a while. So it's a constant struggle, especially as the pace of technology really picks up, and biology is a great example. I mean, wow, [laughter] yeah, we need people who understand policy and law, and understand technology and biology, and the cutting edge of innovation, to be working together on this. That doesn't happen nearly enough. I'm part of this community of people who are trying to bring policymakers and roboticists and people who work in those fields together with, you know, some limited but some success. We have this conference that's happening every year, there's about to be a policy workshop in DC with a bunch of policymakers. But yeah, it's a constant struggle. The law is usually based on superstition or lobbying and...

0:51:16 SC: Yeah.

0:51:17 KD: And not based on evidence.

0:51:18 SC: It's also reactive, right? Like, some terrible thing happens, and then we pass a law against it, if it's a new kind of terrible thing.

0:51:25 KD: Oh yeah, and we need more evidence-based policy but, for that, we also need to have the data to have the evidence that we can point to. And in human-robot interaction, there's still some open questions that haven't been explored.

0:51:37 SC: I mean, what... So both, what are those open questions and what are the biggest legal issues in your mind? Like, what do you wish legislators or, at least, Congressional staffers were worrying about?

0:51:48 KD: Oh well, okay, so do you mean, like, in my personal focus and work or do you mean generally in robotics?

0:51:56 SC: Whichever you wanna do first. [chuckle]

0:51:57 KD: Well, so my personal focus is really on the ways that people treat robots like they're alive even though they know that they're just machines. So the thing that I really want policymakers, or in particular consumer protection agencies, to be aware of is the fact that, if you take the persuasive design that we've developed on the internet, you know, getting people to click on buttons because they're a specific color, using all these tricks from, like, casinos to get people to interact with their devices more, if you take that and apply it to a social robot, it's just putting it on steroids. You know, a social robot is gonna be so persuasive and so engaging to people, and that plus capitalism is a recipe for some consumer protection issues that, I think, we will need to figure out sooner rather than later. So that's what I think but, you know, beyond that, there are many, many issues in robotics that really warrant legal consideration, from autonomous weapon systems to responsibility for harm to automation and the workforce. I mean, there are so many... Oh my gosh, algorithmic bias is a huge one, that fortunately has gotten a lot of attention recently, and there are a lot of great people working on that, but yeah, there's a lot going on that needs to be addressed.

0:53:21 SC: Actually, why don't you... Maybe not every listener is very familiar with that. What is the issue with algorithmic bias?

0:53:28 KD: So, we like to think of artificial intelligence as being more neutral in its decision-making than a person. And so, if you're a company and you wanna remove bias from your hiring process, for example, you might think it's a good idea to take an algorithm, train it on which people have been successful in your company in the past, and have it pick applicants, instead of having a human pick the applicants, who might be swayed by certain names or who knows what. So, companies have actually done this. Amazon did this recently and got in trouble for it, because it turns out, if you train an algorithm on historical data, which is what we do with all the algorithms, it will incorporate a lot of biases. Like for example, the fact that you didn't use to hire a lot of women in your company. [chuckle] And so now, the algorithm is gonna be like, "Oh well, women haven't been successful in this company, so we're gonna weed all those out."

0:54:33 KD: So, that's a very simple example, but we are seeing these systems being deployed in areas like criminal justice, making decisions over whether people get out on bail or not, making decisions over whether people will get hired or fired; entire rating systems for people's professions have been automated in this way. And it's heartbreaking because, not only is the data usually biased, but we usually don't have any insight into how these decisions were made. A lot of these systems are developed by companies, it gets contracted out, it's in a black box, it's protected by intellectual property, or even the algorithm is so complex that humans can't understand how it came up with the decision. We just trust that it was a good one, but there's a lot of problems with this, obviously. And data scientists, fortunately, are turning some of their attention towards this. And...
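[To make the hiring example above concrete, here is a minimal sketch, not from the conversation, using entirely made-up data and the scikit-learn library, of how a model trained on historically biased hiring decisions can end up penalizing a protected attribute such as gender.]

```python
# Hypothetical illustration only (not any company's actual system): train a model
# on fabricated "historical" hiring data in which women were systematically passed
# over, then inspect what the model learned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_woman = rng.integers(0, 2, n)        # 1 = woman, 0 = man
skill = rng.normal(0.0, 1.0, n)         # skill distributed identically across groups

# Biased historical outcomes: past recruiters rewarded skill but also penalized women.
hired = (skill - 1.5 * is_woman + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

# The learned weight on the "is_woman" feature comes out strongly negative:
# the model has absorbed the historical bias and will weed out women applicants.
print(dict(zip(["skill", "is_woman"], model.coef_[0])))
```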

0:55:35 SC: So, basically, I mean, it's a garbage-in-garbage-out problem, right? If you train a computer on your biases, it will end up reflecting your biases.

0:55:44 KD: Oh yeah, of course, that's the problem with technology in general. People... [chuckle] Oh my gosh, we have someone here at the lab, Joy Buolamwini, she does fantastic work. She's a woman of color and she was working on this, she was programming something that required facial recognition, and she realized that the system couldn't recognize her face because her skin is black, and it would only... So she had to wear a white mask so that the system would see her. And we've seen this over and over again, like photography, automatic faucets in the bathroom, facial recognition. Technology is built by, largely, young affluent white men who don't think to incorporate different types of skin color, as just one example. And it's not their fault, everyone has blind spots based on their life experience, but we need more diversity in the teams that are building technology. We need to be aware of the biases in the data that we're training these systems with. And currently, I think we're a little bit too... We trust technology a little bit too much to be neutral.

0:56:55 SC: Yeah, and that goes back to something you mentioned in passing that I wanna come back to, about responsibility and where we put it. Things are being done in our world, increasingly, by automated systems, by robots, by computer programs. And sometimes things will go wrong, sometimes it's inevitable, sometimes it could have been avoided, but we have a system, when human beings do something and it goes wrong, for blaming them and holding them accountable and holding them responsible. And what do we do when the thing that went wrong is an automated system rather than a biological one?

0:57:32 KD: Well, it seems like an unsolvable problem. We do have some things that we can draw on throughout history to look at this. Now, of course, this isn't gonna help in cases where we have a black box problem, which is why the EU, for example, is looking at trying to have transparency in algorithmic decision-making so that you can, at least, see what happened in the decision-making process. But when it comes to just these autonomous entities making a decision and causing harm, we can look at animals. We do have a history of assigning responsibility when animals have caused harm in the past.

0:58:12 KD: I think of this, in particular, when we think about robots and automated weapons systems and things where robots are causing physical harm. We've dealt with this issue throughout history, ever since the first laws known to humankind. What happens if my ox wanders off and gores someone in the street? Who's responsible for that? And we've had different solutions to that; it's not like we found the one golden solution. It depends a little bit on culture and other things, but we can look at that history and try and draw from it a little bit. I was just recently talking to someone at a conference, a roboticist who works on autonomous vehicles, and he was talking to some lawmakers in England about how to regulate autonomous vehicles, and they were like, "Oh well, you know, we have this law of the horse from the 17th century that perfectly applies here." And so they started drawing on that. So, it's not as new a problem as people think.

0:59:08 SC: Right, okay, yeah, actually that does make sense. And again, it fits in with the idea that we can learn something about how to deal with robots by studying how we deal with animals more broadly, right?

0:59:17 KD: I mean, I personally love that analogy. It doesn't hold for everything, of course, but it's pretty cool.

0:59:23 SC: Well, before I forget, you mentioned the usefulness of these robots to autistic children or elderly people with dementia. I wanna think about the usefulness of robots in a similar way, specifically intentionally anthropomorphized robots, to grown-up regular people, the majority of people. Will we be having more robot nannies or robot school teachers any time soon? Robot nurses?

0:59:55 KD: Well, whether we will or not is a different question from whether or not we should. I, personally, think it's so boring to just try and create robots that do something that a human can do. I understand that, sometimes, there's a need for that. For example, for a dangerous job that we don't want people putting themselves in danger for, yeah, we might want a robot to automate that. But robot nannies, or robots taking on a function that relies a lot on human social factors, I think that would be better done by humans, honestly. If you look at...

1:00:38 SC: It's not where robots have the comparative advantage, right?

1:00:41 KD: No, absolutely not. And people argue, "Oh well, there are certain countries where we have aging populations and we need robots to take over, because we don't have enough people to do caretaking jobs," for example. But sometimes the problem there is very strict immigration policies and general issues in society that maybe aren't best solved by throwing robots at the problem. Like, maybe we need to have a broader conversation about what it means, for example, for blue-collar workers who've lost their job to not want to become a nurse because it's too feminine, in their minds. We have a lot of jobs in caretaking in the US where people are needed and, at the same time, we have people complaining that all of the jobs are getting lost. There's a lot of culture and identity and other conversations tied up in that that we might wanna be having, rather than turning to technological solutionism.

1:01:40 SC: I guess, yeah, okay, it's a good point. I think we've touched on this already: one of the first things people think of when they imagine the robotic future is robots that are exactly like humans, that are conscious and look like human beings. And even if we put that aside and say, "Well, the robots don't need to be like humans. They can be different and have different things," you're saying that we still look at the functions filled by human beings and say, "What if we filled them by robots?" And we might wanna say, "Well, what if there are completely different functions that robots are much better at, and we should be looking at those?"

1:02:14 KD: Yeah, I think there's so much more potential in thinking outside of the box here, and thinking about, "Okay, what are robots good at, and how can we use that to make us more productive or have us do our jobs better?" I think that there's way more potential in thinking like that than in thinking about how to recreate human ability.

1:02:37 SC: Yeah, right, in Pasadena, right near Caltech where I work, there is, I don't know if it's the world's first, but there is a burger joint that has a robotic burger flipper. [chuckle] It moves the spatula over the burger and flips it; you can watch it, and they sell it as a tourist attraction. And I was never quite sure why that was an improvement over anything.

1:03:00 KD: I mean there's some novelty effect there, I guess.

1:03:03 SC: There's some novelty effect. But, so okay, to close then, I'll just give you an opportunity to prognosticate a little bit. Maybe, if you're not directly predicting the future, give us a little bit of a hint, for those of us who are not following the latest robot news, of what we should keep in mind or keep an eye out for in the robotic future. Not just building robots, but the legal and moral and personal issues that you care about.

1:03:31 KD: Well, I think one of the things that people really have trouble wrapping their minds around, but, as people have more robots in their lives, are getting more comfortable with, is this idea that we do treat robots like living things, even though we know that they're just machines. And so having that awareness, I think, helps you not be as surprised by it, and helps lessen the confusion that we're seeing in our society, at least. But another thing that I really wanna point out is that the way artificial intelligence is currently being built and trained is by collecting massive amounts of data and feeding it to the machines. We now have the processing power to be able to do something with that data, and there are some really cool things, but a lot of that data collection, I think, could be problematic for people's privacy interests.

1:04:26 KD: And that's something that people don't think about a lot, and even I see it in myself. Like, we have an Alexa at home. It's so practical to be able to order diapers hands-free when you have a toddler, and so we don't mind having a microphone in our home that's listening to everything, because we've traded our privacy for that functionality. And so we don't have an incentive to curb that, the companies don't have an incentive to curb that, so we need legislation to have our backs. Because I think privacy is really important, maybe not for me, but for the poor families who get targeted with advertisements trying to sell them scammy loans or education programs. There are so many examples of how people get exploited by all of this data collection, and I think we need to be aware of that.

1:05:22 SC: Yeah, I kind of have a fatalistic attitude towards this. I suspect that privacy is just gonna go away. I don't think it should. I think we should try to protect it but people don't seem that interested in trying to protect it versus the pressure to give it up and the possible benefits they can get from giving it up.

1:05:41 KD: But like we have to fight, Sean.

1:05:44 SC: We do, we do.

1:05:45 KD: It doesn't impact you, maybe, as a very privileged white man, but it impacts so many people who are less fortunate. I've just, lately, been having to fight this idea, and I don't believe that privacy is dead. I believe that privacy is an entire spectrum, and that if we just let this unbridled capitalism roll over us, they're gonna find newer and newer ways to exploit people, and so we need to, at least, try to slow it down a little.

1:06:18 SC: All right, I will take that as an optimistic message. Even if it's pessimistic in the sense that there's a huge problem facing us, it's optimistic in that we can fight and do something about it.

1:06:28 KD: I think so.

1:06:29 SC: All right, Kate Darling, thanks so much for being on the podcast.

1:06:31 KD: Thanks, Sean.

[music]

7 thoughts on “Episode 46: Kate Darling on Our Connections with Robots”

  1. Jeffrey Clarke

    If you think courting advertisers will not change the content, you’re using a tiny fraction of your genius.

    “Love what you’re doing… Maybe go a little softer on the whole cosmology thing. Don’t want to offend the Ark Museum crowd. There’s an increase in listener numbers and, of course, income, if we can court the 6000-year-old-universe bunch…”

    It will happen, and you know it.

  2. Um, Sean, the problem with killing a pig that you have raised is that you will recognize the pig as a conscious creature, with a personality. It’s not a thing, and you would realize that it was not a thing. (BTW, I’m a big fan of your podcast.)

  3. Gianpaolo De Biase

    Please Sean, don’t go into ads! There are other much better ways to make your beloved money, and reach a larger user base.
    In regards to this podcast, I was once again strikingly reminded of how the mentality in the USA has diverged from the European one (in good, bad, and mostly neutral ways).
    Also, I don’t think we are that far: silicon consciousness will emerge.

  4. Fátima Pereira

    Our brains assign meanings; we endow these objects with emotions, feelings!
    For purposes like therapy, routine jobs, or work that involves risk, yes, it's an added value!
    Thank you, Sean, and thank you, Kate Darling!

  5. Really enjoyed this podcast as I do all of them. It’s nice to hear topics discussed from opposing viewpoints, top marks for diplomatically getting out of the vegetarian section of the topic, lol.

    As for the advertising, I’m cool with it, I mean you are entitled to do as you please and nothing is free. I listen to many podcasts that are ad supported and don’t mind it at all.

    Thanks for creating this brilliant podcast, always look forward to the variety of topics discussed.

  6. Darwins Stepchildren

    This episode demonstrates how careful one needs to be when drawing conclusions from any animal behaviour, including that of Homo sapiens. Near the beginning of the interview, Kate Darling discusses an experiment that was performed with humans and robots. She goes on to state that the humans' reluctance to take a hatchet and machete to a robot is due to the humans "anthropomorphizing" the robot and having feelings towards it.

    Scientists stress the importance of repeatable experiments. I agree: repeat the robot experiment that Kate Darling outlined, and the result, humans not wanting to go psycho on the machine, will be repeated. However, and this is VERY important, from the experimental results this is all ANYONE can conclude: the humans involved were reluctant to destroy the machine. Attributing a reason or reasons why they were reluctant, without a ton (years' worth) of added research, experimentation, and results, is egotistical, bad science, and just plain WRONG. Many reasons exist why someone will not behave a certain way, or will resist acting in a manner that is new or foreign to them.

    One could perform an experiment with humans and machines that are not robotic in any way, and the same result would be shown. Take a group of humans, even the exact same group involved in Kate Darling's robot experiment, and ask them to take an axe and machete and go ape-shi* on an old air conditioner or a rusted train wheel. They would more than likely show the exact same reluctance as in the robot experiment. That doesn't mean they have feelings for the old air conditioner or the rusted train wheel.

    Many humans, especially in the West, where both Sean Carroll and Kate Darling reside, as well as myself and the majority of listeners to Sean's podcast, live in stable democratic societies. Individuals and large groups within these societies, especially as adults, rarely if ever act wild and unruly or intentionally try to destroy anything out of mere hatred or malice. Being asked to take an axe and machete and recklessly destroy something on malice alone therefore feels foreign, and is met with reluctance; the object receiving the malice is essentially irrelevant, so the reluctance to destroy a stuffed animal, an air conditioner, or a robot would be the same. In other words, the reluctance the experimenters observed has far more to do with the subjects' own experiences and their unwillingness to act with reckless abandon, because it is totally foreign to them, than with their feelings towards the object, in this case, the robot.

    I agree that I do not have enough evidence to conclude that it has more to do with Western individuals' aversion to reckless abandon than with their feelings towards robots, and a lot more research would need to be performed to make an informed, scientific conclusion about their behaviour. But saying their behaviour is due to their feelings towards the robot(s), based on current experimental knowledge, is completely unfounded and baseless.

  7. Darwins Stepchildren

    Your (Sean Carroll's) answer to advertising is simple and can be summed up using just one of your episodes: Episode 43, Matthew Luczy on the Pleasures of Wine. In that episode, Matthew Luczy was able to speak his mind and tell the truth (at least as he sees it) about wine and the pleasures of wine. He made many statements about the differences between North American and European wine drinkers, makers, and savourers. He mentioned the uselessness of the Champagne flute. He mentioned how ridiculous it is for someone in North America to try and drink Champagne and/or white wine directly from the refrigerator at 42 degrees Fahrenheit. These are but a few comments from that episode that would have needed to be removed had you had sponsors; whether directly or indirectly, they would have bothered them. That episode was almost two hours. I said the answer was simple, and I wasn't lying: is having to be conscious of every single word you or a guest says, and having to edit an episode like Episode 43 down to 30 or 40 minutes to keep your sponsors happy, outweighed by the money you would make from them? If the answer is yes, then choose to have sponsors on your podcast. If the answer is no, then choose to continue your podcast sponsor-free.

