187 | Andrew Leigh on the Politics of Looming Disasters

photo by Hilary Wardhaugh

We're pretty well-calibrated when it comes to dealing with common, everyday-level setbacks. But our brains aren't naturally equipped for dealing with unlikely but world-catastrophic disasters. Yet such threats are real, both natural and human-induced. We need to collectively get better at anticipating and preparing for them, at the level of political action. Andrew Leigh is an academic and author who now serves in the Parliament of Australia. We discuss how to move the conversation about existential risks from the ivory tower to implementation in real policies.

Support Mindscape on Patreon.

Andrew Leigh received his Ph.D. in Public Policy from the Kennedy School of Government at Harvard University. He is a member of the Australian House of Representatives representing Fenner. He was previously a professor of economics at Australian National University, and has served as Shadow Assistant Minister for Treasury and Charities. His recent book is What's the Worst That Could Happen? Existential Risk and Extreme Politics.

0:00:00.0 Sean Carroll: Hello everyone, welcome to The Mindscape Podcast. I'm your host, Sean Carroll. You know, when I think about guests for the podcast, I have rules, I have hidden rules in my brain about what kind of person I want for the podcast and what kind I don't. And I don't always make these explicit because I retain the right to change the rules, they're my own rules, I made them up. One of the rules is no politicians. No working politicians or people who are currently running for office. And it's not that I have anything against politics, in fact, I'm extremely in favour of politics done right. Of course, politics can be done wrong and it can be very tiresome and people's IQs can go down when they start talking about politics, but I'm a huge believer that politics is important, that we need to invest both our intellectual and organizational energies in making politics work and getting the right people into the right places and thinking carefully about what that means, who the right people are, what the right places are.

0:01:01.5 SC: But I also recognize that there's a difference between the activity of politics and the activity of understanding the world, there's an activism-versus-intellectualism divide. Both are important, but for Mindscape, I'm much more interested in the intellectual side of things. Even if we have musicians or whatever on the podcast, it's about the craft, it's about how you think about these things, it's not about people's personal histories and gossip. It's also not about individual policy proposals that people are trying to get through, or individual candidates who are trying to be elected into office. So rough rule: No politicians on the podcast. And I'm violating that rule today, and again, it's my rule, I made it up so I'm allowed to violate it, but today's guest is Andrew Leigh, who is a Member of Parliament in Australia.

0:01:52.7 SC: And I'll explain why I violated the rules. For one thing, Andrew comes from a kind of academic-y background. He has a PhD in public policy, he was a professor of economics at Australian National University, and he has a new book out just a few months ago. And the new book is called, What's the Worst That Could Happen? Existential Risk and Extreme Politics. It's a book about existential risks, which we've talked about before in the podcast with people like Martin Rees and Nick Bostrom, but in the down-to-Earth political context, what do we really do about these existential risks? The book is published by MIT Press. Most politicians' books are not published by MIT Press. So given the topic of the book and Andrew's background and the fact that it's a serious intellectual book, I thought it would make a good choice for the podcast, and that's what we're talking about.

0:02:45.8 SC: We're talking about existential risk. So we go through the usual list of options, there's climate change, bioterrorism, an asteroid hitting the Earth, and we try to think about both what these risks are and also what we should do about them at a fairly down-to-Earth level, what governments do, what can people do, etcetera. And for better or for worse, we end up, or at least in my mind, the interesting part of the conversation was really not any policy proposal so much as a perspective proposal, that people need to individually think about the world and where the world is going in a slightly different way if we are to take these kinds of existential risks seriously. I think it's a pretty easy case to make, that we should take them seriously. We can argue about the margins, about the details about exactly how much resources or time or effort we should put into mitigating existential risks, but we should be able to agree that something that has a non-trivial chance of ending human life or even causing enormous disastrous consequences for most human beings, that's something we should worry about even if it's gonna happen after our individual lifetimes, even if it's gonna happen 10 or 200 years in the future.

0:04:06.3 SC: So that's what we do. We take these seriously. Andrew explains how he didn't come into office in parliament being the existential risk guy, he has been thinking about it since arriving, and he cares now about things like super intelligent AI in a way that he didn't before. So as with anyone, you can agree or disagree about his particular diagnoses, but it's food for thought. It's making us think about these big things that are... Look, I'll be honest, some of them are kind of scary, some of them are very real. One of Andrew's points is this is not a one in 100,000 chance, there is a much bigger chance than we might hope that there really is gonna be something disastrous that happens in the foreseeable future. So let's think about it, that's the first step to preventing it, that's where we are for this podcast, so let's go.

[music]

0:05:13.1 SC: Andrew Leigh, welcome to Mindscape podcast.

0:05:15.1 Andrew Leigh: Thanks, Sean. Glad to be with you.

0:05:16.9 SC: We're talking about existential risks, so that's something... A phrase I'm sure people have heard before. Just so we're grounded, just so we're starting on the same page, why don't you give us a list of some of your favourite existential risks? What are your top three to sort of worry about when you're worried about these things?

0:05:34.1 AL: Well, Sean, existential risks, I think, are those things which would either end humanity or fundamentally alter the trajectory of the human project. I guess those I'm most concerned about would be nuclear war, bioterrorism and artificial intelligence gone awry. But you can think of other possibilities, unchecked climate change, even an asteroid hitting the Earth, which is featured in the new Netflix film, Don't Look Up. So there's a plethora of ways things could go wrong, none of them probable, but the possibilities are big enough that I think it's worth us investing a bit more time and energy in making sure that they don't happen.

0:06:17.1 SC: I guess one question is, there is a distinction between existential risk in the sense of truly ending all of humanity or even all of life on Earth, versus causing incredible disruption that will be terrible without actually making us extinct. I don't really think of climate change as something that could literally end humanity on Earth, although it could cause tremendous disruption and poverty and hardship and so forth. Do you think that's an important distinction to make?

0:06:46.4 AL: Absolutely, because the distinction is one between future generations not existing and future generations having a diminished quality of life or the human project taking a couple of centuries to get back on track. And I think climate change is a great example, Sean, because it is the one where I was most hesitant initially to include it in a list of catastrophic risks, and eventually it was the work of the Harvard scholar, Martin Weitzman, who passed away a number of years ago, that persuaded me it was worth including. And he makes the case that the odds of a six-degree climate rise might be one in 10, the odds of a 10-degree Celsius climate rise might be one in 100. And if you're talking about a 10-degree rise, you really are, you're talking about something which takes us into the realms of catastrophe.

0:07:38.3 SC: Could you actually... I know this is not the primary topic here, but can you explain exactly how catastrophic it would be? I mean, presumably all the ice sheets melt and the water level rises, but also there's other things going on. You must have thought about this.

0:07:53.3 AL: Absolutely. You see a big rise in violence, which is strongly correlated with heat. You see huge loss of crops and potential global famines taking place. Naturally, a range of coastal cities are immediately wiped out by the sea-level rise, but the degree of catastrophic weather events would be utterly unprecedented. There's a number of scientists who point to Venus, which billions of years back had an atmosphere not that different from the Earth, but due to climate change is now completely uninhabitable, it's the hottest planet in the solar system, 460 degrees Celsius on the surface. And as a result of climate change, Venus became utterly uninhabitable. So it's that possibility of the second Venus scenario, if you like, that caused me to put climate change in the catastrophic risk basket.

0:08:50.5 SC: Well, let's see, there's a big difference between 10 degrees and a few hundred degrees, I just... I wasn't actually planning on delving into the details here, but are there atmospheric science mechanisms that will let us go totally haywire and become sort of an almost Venus-like planet?

0:09:08.8 AL: Well, if you go to the particular Venus example, Sean, they had a runaway greenhouse effect, evaporating water led to a steam blanket, which warmed the planet even further. Then the water vapor broke into hydrogen and oxygen, and the hydrogen's literally swept away by solar winds. The process took place three and a half billion years ago, and we're talking about 10 million years to get rid of the water on Venus. Again, we're talking about very low probabilities, but given how catastrophic it is, it seems to strengthen the case for climate... For action on sensible climate policies that avert temperature rises.

0:09:51.8 SC: No. Okay, I think that's very good. Very helpful. So that's like a millions of years kind of thing... And this is the one place you are allowed to talk about a tiny percentage but over millions of years, and we'll speculate about the future in that way, and no, we won't hold you accountable for it, we're not gonna say that you predicted we'll become Venus, don't worry. But the worry about a 10-degree temperature increase over 100 years, I guess there's a feeling that some people might have that climate change is somebody else's problem. Even if it's a problem, if someone is already relatively well off, if they don't live near the coast, they probably have a feeling that they'll get through it okay. And I think what you're pointing out is that it's not gonna be as easy as maybe people think.

0:10:40.0 AL: Yes, that's right. And climate change is the classic collective action problem, one in which it is very easy to take the global litterbug approach, you know, if I just throw my trash on the sidewalk, then, really, that's not gonna make a big difference to how messy the streets are, but if everyone does it, it really does make a big difference. So just as we should pick up our trash, we should also have an interest in common pool solutions. And you're seeing more of that movement with the Paris Climate Talks and Glasgow talks, but again, we're a long way off where we need to be.

0:11:15.3 SC: Right. So with that on board, let's back up a little bit, let me ask you, has your way of thinking about issues like this changed since you've become a working politician? You started out as an academic, now you can actually have a vote in some ways that professors don't. Did you learn more? Did you realize it's more important than you thought, or is it more or less what you expected?

0:11:39.5 AL: Yeah, it's a great question because when I went into politics in 2010, I was not greatly concerned with catastrophic risk. Climate change was on my radar, but certainly unchecked artificial intelligence wasn't. And it was thinking in a probabilistic sense and envisaging what it could be to end the human project that really brought catastrophic risk to my attention. Toby Ord's estimate is that there is a one in six chance that the human project ends in the next century. And if that's true and that continues to be true over the next millennium, then you can think of it as playing repeated rounds of Russian Roulette for a millennium. A one in six chance that you die in a century, but a five in six chance that you die in a millennium. So that then made me feel that this was an issue which was getting much too little attention, and particularly as a politician, I guess the lens I bring to the discussion around catastrophic risk is thinking about how populism makes the problem worse by causing a focus on the short-term rather than the long-term, firing up the temperature, destroying institutions, and undermining international cooperation. So that's my particular lens on catastrophic risk.
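[A quick back-of-the-envelope check of the repeated-rounds arithmetic described above, sketched in Python; it assumes Toby Ord's one-in-six-per-century figure as quoted and treats each century as an independent draw, both simplifications.]

```python
# Back-of-the-envelope: a 1-in-6 risk per century, repeated over ten centuries.
per_century_risk = 1 / 6
centuries = 10

survival = (1 - per_century_risk) ** centuries  # ~0.16
extinction = 1 - survival                       # ~0.84, roughly five in six

print(f"Chance of surviving the millennium: {survival:.2f}")
print(f"Chance of catastrophe within the millennium: {extinction:.2f}")
```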

0:12:58.2 SC: When you first became a politician, when you started talking to your colleagues... I guess it's a parliamentary system in Australia, right?

0:13:06.7 AL: Yes.

0:13:07.8 SC: Then did they just roll their eyes, or do people in that position get it? Is there a common feeling, with different degrees of urgency perhaps, that we should worry about this kind of thing?

0:13:20.2 AL: The risks have sort of ebbed and flowed. When I was a kid in the 1980s, I can remember having a conversation with one of my schoolmates, we must have been in grade four or something, we both agreed that there was no chance that we would finish high school because the world would be destroyed by a nuclear catastrophe. And that was incredibly salient at that time. In recent years, I think people have taken on the notion that pandemics can be pretty serious, and climate change has also come to the fore. But unchecked artificial intelligence, which on my estimate is the most worrying catastrophic risk, is still regarded by many as being in the realm of science fiction. So the individual risks have been taken on sporadically by the political establishment, but less so the notion of catastrophic risk as a whole.

0:14:14.1 SC: I think that makes sense, that's what I would have guessed. So it's interesting to hear. And do you feel that when you talk to them about it, they're willing to listen, or is it... I can imagine that as a working politician, they're like, "How does this get me votes back home? This is not something where people's attention is really focused."

0:14:31.6 AL: Yeah, although, it's interesting, right? So you take that moment in 1998 when the Bruce Willis film, Armageddon, and the Steven Spielberg movie, Deep Impact, come out. Immediately, there is a response, NASA sets up its Planetary Defense Coordination Office, you have Ted Cruz famously asking in a hearing, "What steps do we have to be taking so we don't have to rely on sending Bruce Willis to space to save humanity?" and big increases in spending on planetary defence and tracking near-Earth objects. In some sense, politics has tackled that one, its response to asteroids isn't perfect, but we've responded in a pretty bipartisan way in most countries around the world. And yet climate change is the opposite. Here, people are deeply, deeply divided, and there hasn't been the same sort of unified response, at least in the United States and Australia, that we've seen to asteroid risk.

0:15:35.1 SC: You're really saying something provocative by giving credit to two movies, two Hollywood blockbusters, for political action on asteroids. Is that a lesson right there? A lesson for people who do care about these things, that maybe we should be trying harder to leverage popular culture to really get people worked up about this?

0:15:54.8 AL: Oh, well, yeah, I think that's probably the exception rather than the rule, Sean, because there have been so many movies about catastrophe, like it's one of Hollywood's favourite themes. So you think about the pandemic movies: Outbreak, Carriers, Contagion; the bioterrorism movies: 12 Monkeys, V for Vendetta; the nuclear war movies: Dr. Strangelove, On the Beach; artificial intelligence movies: Avengers: Age of Ultron, Terminator; and even the climate change movies, so Waterworld, Mad Max: Fury Road, Blade Runner 2049. Hollywood has got us to the edge of our seats, but that hasn't always gotten people off the couch to deal with catastrophic risk, and you need to bring it together and I think also to paint a picture, Sean, which says it is not that hard to deal with these issues. The analogy I often draw is about buying insurance against your home burning down, it's not a probable event, but you spend a very small share of your annual income taking out home insurance because it would be catastrophic for your household if it happened.

0:17:03.3 SC: But you also raised... That's very helpful, actually. I think it's a very good way of putting it, it's not enough to have a big scary movie, that has to be like right place, right time, etcetera. But this idea that an issue like this, which you would think would be the paradigmatic example of bipartisan opposition, like you should be bipartisan-ly opposed to the end of humanity, I think, but in fact, in various examples, the idea of taking action on these has been leveraged for partisan purposes one way or the other. I mean, I worry... So since you're a working politician, tell me if I'm too worried about this, but I worry that politicians have become too good at turning any issue into a partisan issue, and the fact that one side will be for doing something automatically makes it suspect to the other side, and you can get people excited about that.

0:18:06.7 AL: Yeah, I often think about that Ronald Reagan line that he once cited in his dotage of saying, "If only we had aliens coming to Earth, we could get the Russians and the Americans to unite around a common enemy." And maybe that's why things worked in the case of asteroid strikes, 'cause it was the closest to what Reagan envisaged. But when it comes to nuclear disarmament, we're a long way away from that, and it is a strong partisan divide between hawks and doves, less of a recognition that reducing the nuclear arsenal would just reduce the chance of a mistake, more than anything else, as well as measures like taking missiles off hair-trigger alert and allowing callback systems within missiles. Now, all of these tend to be opposed by some of the hawks in the military establishment even though you've had Republicans like Henry Kissinger from time to time coming out and saying, "Look, reducing the size of the nuclear stockpile would make America safer."

0:19:10.6 SC: Alright, I'm getting... I don't know what to think about these things. I do have these pessimistic moments when we start talking about these things. But let me back up and be more philosophical about it for a second. You mentioned this number, one-sixth, that Toby Ord, the philosopher I think, calculates as an estimate. So first question is, where does that come from? How in the world do we come up with a number like one-sixth, one chance in six within a century?

0:19:39.3 AL: Yeah, so one in six is Toby's estimate of putting together the total natural risks: Asteroids, supervolcanoes, stellar explosions; and then the anthropogenic risks which are much bigger: Nuclear war, climate catastrophe, pandemics which occur so-called naturally, pandemics that are engineered, misaligned artificial intelligence, and I would add to Toby's list, widespread authoritarianism enabled by surveillance technologies. So you put all of those together and you get a risk that's in the ballpark of one in six, and just to give you a kind of a benchmark for what that means, that's about the chance that if you're 90 years old, you'll die within a year.

0:20:29.5 SC: But... Okay, I think as a good Bayesian, I'm entirely on board with the idea that we should be thinking about these probabilities, but when it comes to something like the probability of nuclear war, I'm just at a loss as to how to actually do it. I think I would have given a very different answer in 1970, 1980, 1990 and 2000. I mean, how do we get confidence in these numbers that we try to cook up?

0:20:54.8 AL: Yeah, these are heavily back of the envelope numbers. What we're doing here is multiplying the probability of the event by the chance that, if it happened, it would cause catastrophe, so the research around nuclear winters is important because you wanna envisage not only what's the probability that nuclear weapons are launched, but in the event that happens, how does that then affect the planet overall. We don't have a good sense of either of those, but they are the two things we're multiplying together.
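[A minimal sketch of the two-factor calculation Leigh describes, in Python; the numbers below are illustrative placeholders, not estimates from the conversation.]

```python
# Two factors multiplied together: P(event) and P(catastrophe | event).
p_event = 0.01        # placeholder: chance the triggering event occurs in some period
p_catastrophe = 0.1   # placeholder: chance that, if it occurs, it cascades into global catastrophe

overall_risk = p_event * p_catastrophe
print(f"Rough catastrophic risk for the period: {overall_risk}")  # 0.001
```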

0:21:33.5 SC: Yeah, but I guess in the case of nuclear war, in particular, do you have any more details of where that number comes from? I mean, it must be someone's estimate of the chances that some politician is gonna do something terrible.

0:21:48.4 AL: Yeah, the chance a politician will do something terrible, or the chance that there's just a mistake being made within the system. So given that you've got tens of thousands of nuclear weapons on hair-trigger alert, you wanna envisage problems like the one that we had in the Cuban missile crisis, where a Soviet submarine had depth charges being dropped near it, thought that it was under attack, and almost fired its nuclear missiles by mistake. So the chance of an accidental misfire then leading into a cascade is what you envisage. But yeah, the best judgments by experts, they've got huge margins of error around them, what we're trying to do really is get a sense of... A rough order of magnitude rather than really nail down the last decimal point of the probability. The point is it's bigger than you'd think.

[chuckle]

0:22:48.4 SC: Yeah, okay, good. I like that answer very much. It's of the order 10 to the minus one, not 10 to the minus five. And that's the important distinction. Right?

0:22:56.9 AL: Precisely.

0:22:57.9 SC: And so this... But there are some, again, philosophical questions here, we're giving utility... We're being consequentialist in some way, we're weighing the value of our future lives. And this is always an interesting philosophical question, I know that economists like to discount the future a little bit because in part we're not there yet, in part 'cause we don't know, so is there some discounting that goes in, like if you were really doing a cost-benefit analysis about how much risk we should tolerate versus future damage?

0:23:35.6 AL: So I love that you've gone to the utilitarianism point 'cause I think it is super important. If we just discount the future at the regular discount rate that you'd use, say, 5% a year, then you get the result that future people aren't worth very much. Indeed, if you discount at a rate of 5%, then you get the result that Christopher Columbus is worth more than all the eight billion people currently alive today, or similarly, that your life is more valuable than 8 billion lives in 500 years' time. And now, if you think that's absurd, then you should probably... You're having a problem with the notion of discounting human life, and I do too. I don't think future lives have any less value than current lives, and therefore that we should put massive weight on the not millions, not billions, but trillions of lives that will occupy this planet in the billion years before the Sun engulfs us, and the potential that if we get things wrong, those people never exist.
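[A quick check of the compounding arithmetic behind this point, sketched in Python; the 5% rate and the 500-year horizon are the figures mentioned above, the rest is just compound interest.]

```python
# At a 5% annual discount rate, one present life carries the same weight
# as (1.05)^500 lives 500 years from now.
discount_rate = 0.05
years = 500

weight = (1 + discount_rate) ** years
print(f"One present life ~ {weight:.1e} future lives")  # ~3.9e+10, i.e. ~39 billion
print(weight > 8e9)  # True: more weight than the ~8 billion people alive today
```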

0:24:45.1 SC: I want to be on your side about this, and I think I am on your side about this, but I'm nevertheless gonna push back because I don't think I could defend this with any rigor myself, because there's just so much we don't know. I mean, sure, billions of people in the future are valuable and we don't want to harm them unnecessarily or even expose them to risk if we can avoid it. On the other hand, like how do we know that what we do now can't be fixed with something we do pretty easily 10 years from now and the billions of people in the future are fine? So even though we care about them, knowing what to do for their benefit seems very hard.

0:25:22.5 AL: Yeah, there's certainly... There's uncertainty and there's the possibility that we come up with another solution. But in my view, that's different from discounting, Sean. It's not that those people are less valuable, but you might say they have equal value yet I'm choosing to take a different set of actions in anticipation of changing technologies that might emerge. That's a reasonable approach in my view. But I share the view of philosopher Will MacAskill, who says that the idea that we're more valuable than future generations is almost a form of prejudice that seems to be on par with racism and sexism. He calls it presentism, the idea that we're putting our inherent moral value above those of people who live in the future. It's presentism to say that Christopher Columbus is worth more than people today, it's presentism to say that we are worth more than people in hundreds of years' time. We should value their existences, which let's hope will be far more pleasurable than ours. They should be able to live lives of greater meaning and enjoyment, duration and health than people alive today, and so we ought to be protecting them.

0:26:46.4 SC: I certainly do wanna protect them, but I also would like to develop a non-utilitarian justification for doing so that... It's exactly in these cases where you're multiplying a tiny number, a tiny amount of risk by billions of people being affected that I think the utilitarianism is on its shakiest ground. Are there other justifications for taking dramatic action right now, just more, I don't know, deontological or virtue ethicist sort of reasons, like it's just the right thing to do?

0:27:15.7 AL: Yeah, you can think about these as your descendants if you like, if you have kids, an argument which probably is less immediately tractable to people without kids, but if you love your kids, you'll love your grandkids. There's no reason you shouldn't love your great, great grandkids who you'll probably never meet, or your great, great, great, great, great grandkids who you'll certainly never meet. That sort of moral obligation to one's genetic line argument could well be powerful with some. And it might just be the case that you want to envisage what it would be like at that moment at which the world is snuffed out. And going back to that Netflix flick, Don't Look Up, that moment of annihilation is one that ought to be on your minds when you're thinking, "Would we spend a very small share of the world's resources in significantly reducing the chance of catastrophe? Why not?"

0:28:17.9 SC: Yeah, no, I think that I liked Don't Look Up, but I presume that you actually saw the movie, right?

0:28:23.6 AL: Yes.

0:28:25.4 SC: And it's gotten a lot of controversy because film people don't like it, like as a movie, there's complaints about characterization and plot and whatever, but of course as a scientist or politician, you're like, "No, this is kind of touching a nerve in a very important way." But one of the parts I didn't find realistic is the choice of disaster, an asteroid which is coming to destroy the Earth, and we know exactly when it's going to happen. I think that for things like climate change, there's a big difference because it's gradual, right? I mean, it's coming, but it's sort of creeping up on us and there's no threshold, there's no date at which, "Oh, this is when climate becomes bad." Does that kind of difference in jeopardy make it harder to make sensible policies about it?

0:29:09.8 AL: Yeah, absolutely. Don't Look Up has a silliness about it, which makes it great theatre, but not necessarily a great depiction of reality, and the uncertainties in climate modelling are real and have been exploited by those who wanna continue to make money under the current system. Likewise, artificial intelligence, we've seen attempts by those who are working in the current system to say, "Well, if you're in favour of reducing artificial intelligence risks, you must be against artificial intelligence," and that's, I think, a silly position, but you certainly see it cropping up from time to time.

0:29:54.5 SC: Okay, good, I think I got my philosophical itch scratched a little bit there. Let's get down to more brass tacks here. Let's just, if it's okay with you, walk through some of the biggest existential risks, 'cause they're all different, right? They all demand slightly different responses. So maybe we can start with the natural ones, there's a set of things like volcanoes, we're having this discussion a couple of days after a big volcanic eruption in the Pacific Ocean that was big enough to be very noticeable without quite being existentially worrisome, but it reminds us. Volcanoes, asteroids even. I don't know if there's something that qualifies as an earthquake or a solar flare that would be truly disastrous. So how do we plan for these things that are kind of random, but really catastrophic when they happen?

0:30:44.9 AL: Well, in the case of asteroids, we've done pretty well with the Planetary Defense Office. In the case of supervolcanoes, it's harder. We seem to be particularly poor at predicting geological events. And supervolcanoes are... They're planet changing, the last one was Indonesia's Toba supervolcano 74,000 years ago, and that is an event which could cause complete global crop failures, massive livestock deaths, huge disease spread and so on. So, I'm sorry, the last one's the New Zealand volcano, 26,000 years ago. So tens of thousands of years since we had a supervolcano, but we're poor at predicting them. So the odds that one comes along in the next century are low, but it would be great if we could better predict them.

0:31:42.4 SC: Right. And do you care... Do you worry about solar flares very much? I once had a very scary conversation with a lawyer who was on some committee to study this, and he said, yeah, once every thousand years there will be a solar flare big enough to knock out our entire electrical grid on Earth and millions of people will die. And we are doing nothing to prepare for something like that, well, we can't prevent it, but we could harden the grid, to be able to sustain something like that.

0:32:09.6 AL: Yeah, I've got them in the overall category of unknown and non-anthropogenic risks. So there's certainly a range of things that could happen. The fact that we have been around on the Earth for a couple of hundred thousand years without being wiped out by one of these things suggests that the odds in the next century are relatively low. But again, the better we can do at forecasting what's coming to us from outside the atmosphere, the better and safer we'll be.

0:32:42.6 SC: Well, but I think... I don't wanna add another worry to your list of existential risks, but I think the real worry with solar flares is that their real target is the electrical grid. And so we never had an electrical grid before 100 years ago, right? So they could happen all the time on the scale of centuries, and there could be a 1% chance per year, which could be completely disastrous and we just wouldn't know.

0:33:07.7 AL: Yes, although it's hard to see how it wipes out humanity, if that's the case. Given that our ancestors managed to get through without anything that we can see in the fossil record as looking catastrophic, it seems to fall in the category of bad but not catastrophic.

0:33:26.2 SC: I think that's fair, although I personally, my lifestyle would be impacted if I didn't have electricity for a month or two, so I don't want it to happen, but...

0:33:33.1 AL: We certainly wouldn't be having this conversation now.

0:33:35.3 SC: I know, I wouldn't have a podcast, that'd be terrible. But I guess it's a paradigm, both the asteroids and the volcanoes and the solar flares, these are all paradigms for this issue of let's calculate a rate of risk per year, rather than climate change, which we see happening, it's just a matter of how much it happens. But where there's a rate of something happening pretty quickly, how do you, as a policy maker, decide how much money to spend? Is there literally a set of rational utilitarians somewhere that are saying, "Okay, we need to spend this much money per year to prevent this risk happening with a certain percentage chance?"

0:34:13.1 AL: The economists would love it if that was the case, Sean, but of course, as you will know, this is not the way in which the system works. The risk mitigation is better for the natural events, but we need to calibrate the probabilities a little bit more precisely and try and get a better alignment of spending to risk. One of the values, I think, of the conversation around the value of a statistical life, which was controversial when it was first proposed, is that it did cause safety spending to be better targeted at those things where the additional money saved the most lives. And perhaps in the same way, this discussion about catastrophic risk probabilities might help tilt funding towards dealing with the most likely dangers.

0:35:06.2 SC: I like that. So this is sort of effective altruism, effective charity for the world, for humanity as a whole, just by thinking about it and talking about it, maybe we will allocate our resources a little bit more rationally, is that the hope?

0:35:20.5 AL: Yeah, that's right. I, like you, am a big fan of givewell.org, and one of the points that GiveWell make is that the difference between effective and ineffective charities isn't just two or three times, but potentially 100 or 1,000 times the efficacy. So likewise, when we're looking at these catastrophic risks, we've got risks such as an asteroid impact, which over the next 100 years is probably a one in a million probability, and then you've got the chance that an engineered bioterrorism event knocks out the world's population, and that's one in 30. So these are very different probabilities, and yet we're probably not putting enough resources into making laboratories safer and making sure that terrorists don't get their hands on material that could be used to engineer the next pandemic.

0:36:17.6 SC: Well, okay, good. Let's move on to the topic then of pandemics and bioterrorism, it's both natural pandemics and man-made pandemics, I guess, and presumably a different suite of responses or mitigation strategies are necessary for those.

0:36:34.6 AL: Yeah, absolutely. And so, tracking zoonotic diseases has received considerable attention as COVID swept the world, and so we need to do better in terms of those natural pandemics, as well as taking steps such as working out how to reduce the risk of spread at so-called wet markets. But the biggest threat in my view is terrorists getting their hands on what the Nixon administration once called a poor man's atomic bomb, and for that, we might think about strengthening the Biological Weapons Convention, which currently has a monitoring budget smaller than the budget of the typical McDonald's restaurant, and making sure that there are better controls over so-called gain-of-function research in which researchers at respectable institutions look at how they're able to make bad bugs worse. There's an argument for doing that research, but the notion that we should just publish it and allow everyone to have access to these sorts of findings is in my view pretty dangerous.

0:37:50.8 SC: Well, what is the current status of exactly that? Can a biologist just publish whatever results they get along these lines or are they somehow restricted in a sort of classified knowledge kind of way?

0:38:04.9 AL: Yeah, so let's take one of the recent examples. There was a team of researchers at the University of Alberta in Canada, who showed it was possible to make horsepox, a cousin to smallpox, by ordering parts of DNA on the Internet and reassembling them. They showed they could do it for about $100,000 in about six months. They submitted those findings to Science and the journal Science said, "No, we don't think the scientific merit of publishing this outweighs the so-called Dual Use Research of Concern," which was the Science editors' way of saying bad people getting a hold of the findings. But then the researchers simply sent the paper to another journal which then published it. So that's an issue that the scientific community is wrestling with at the moment, and having overall standards that cover not only journals but researchers themselves is gonna be pretty important. And I'd like it if there was as much attention being put into this as the considerable amount of attention that's being put in countries like the US and Australia to researchers who are collaborating with people in China.

0:39:16.9 SC: Good. But there is a hugely important fact lurking in the background of what you just said, which is that it sounds pretty easy to do bio-terrorism, even if... Maybe not me in my garage, but a halfway decent science lab might be able to just cook it up no matter what restrictions we put on publication, if they're trying to work in secret. Is that a worry? Just sort of a random individual actor with a little bit of resources, let's say a million dollars to do some biology, can they really make a bug that would hurt a lot of people?

0:39:53.3 AL: Potentially. Genetic engineering is moving very fast, it's possible to print DNA and to, for example, upload a sequence and have the DNA of that shipped to you at very low cost, and desktop printers are increasingly going to become available. So there has been an interesting proposal: MIT's Kevin Esvelt has proposed that any of these DNA printing outfits should have built into the machinery a system that checks for essential segments of risky sequences, which then makes it difficult for people to print bugs such as the 1918 influenza strain.

0:40:43.3 SC: Yeah. So now my imagination is running wild here, so I don't know why I hadn't thought about this before, but we're in the middle of a pandemic now with COVID, and in many ways, it's like a warning pandemic, even though it's been extremely terrible and deadly, it could have been enormously worse. It has a relatively long incubation period, but it's not that fatal, and if you wanted to design a bug that would... A bug in the casual sense of virus that would be very, very deadly, could you design it to lurk inside people without any effects for months and then turn on and become extremely fatal? That's the worry, and once you're at the level of designing DNA, then why not?

0:41:27.5 AL: Yeah, that's the risk. So you've got this spectrum that researchers talk about of deadliness and contagiousness, and typically viruses tend to be one or the other. Very deadly, so you think about untreated HIV; extremely contagious, you think about measles or malaria. But there's relatively few diseases, thankfully, which are both extremely contagious and extremely deadly. And so the risk that you have a pathogen that ticks both those boxes is what people are worried about with natural pandemics, but all the more so with bio-engineered pandemics.

0:42:08.1 SC: Yeah, a naturally occurring pandemic, the virus has... Its vested interests are not to instantly kill everyone it infects, right? 'Cause it wants to be passed on...

0:42:15.9 AL: Yes, exactly.

0:42:16.6 SC: But there's no such constraint on artificial ones. This is what I've just now realized in talking to you, that you could imagine designing a virus that is way more deadly than anything that we would imagine naturally occurring.

0:42:35.4 AL: Yes, yes. And the rapid advances in genetic technology, they have huge potential in terms of disease alleviation and saving lives around the planet. We just need to make sure, Sean, that as we put those innovations in place, we're not inadvertently assisting those who would look to make the pandemic equivalent of a dirty bomb.

0:43:02.0 SC: I guess the tiny mitigating factor is that you can't point this [0:43:06.0] ____ of a dirty bomb very precisely. Like everyone... You would be in danger yourself of being adversely affected if you just set loose a terrible new disease on the world.

0:43:20.2 AL: Yes, that's right. And past attempts haven't proven successful. You think about the Aum Shinrikyo sect spreading anthrax in Tokyo subways, they did initially try and find some disease that they could unleash, but they weren't able to do it. So they used to... Sorry, they used sarin gas in the end. And that was because their attempts to bioengineer a disease hadn't worked out. Similarly, there was a set of poisonings in Oregon that took place in 1984. That's actually the worst biological terror attack in American history, which involved the poisoning of salad bars. That group had previously looked at spreading HIV/AIDS, but hadn't worked out how to weaponize it. So these terrorist groups have tried and failed in the past. The challenge is to ensure that they keep on failing in the future.

0:44:27.9 SC: Maybe this gets into things that you're not allowed to talk about, but I always presume that if there's some technology that could be used for better or for ill, it will be developed and it will be used by somebody. So I presume that governments in secret are developing bio-weapons, even if they plan to never use them. Is government development of these kinds of things just as big a worry as terrorist group development?

0:44:57.6 AL: Look, I don't think so, in advanced countries at least. And the decision that the Nixon administration made in putting in place the Biological Weapons Convention was that these are fundamentally weapons that are more useful to less powerful states, and that it's strongly in America's interest to not be involved in working on bugs as a form of weaponry. So the US had a range of programs which it then shut down under the Nixon administration, following the Biological Weapons Convention. We've had some evidence that the shutdown by the Soviets wasn't as complete, and certainly there's been other incidents such as Saddam Hussein's use of chemical weapons on his own people. But largely, I think it is the case that in advanced countries, there's not secret research going on into biological weapons because of the recognition that these are, as Nixon said, a poor man's atomic bomb.

0:46:05.0 SC: Yeah. Okay, that does make some sense. I'm not quite sure 'cause I never know what countries are doing in secret, but I think that the motivations that you mentioned do make a lot of sense. For the terrorist groups or just for the mad scientist or whatever, the fact that we can buy our own genetic kits, DNA writers and so forth: you mentioned that it would be sensible to at least imagine restrictions on either the technology or the publication of the technology. How specific are proposals along those lines, to not let people just buy a DNA engineering kit or not publish the results if they figure out how to re-engineer smallpox?

0:46:46.5 AL: Yeah, I think Kevin Esvelt's proposal is pretty specific and makes a lot of sense, and that ought to apply not only to firms that are shipping DNA, but also to benchtop DNA synthesis machines. That then becomes ensuring that people can't do it at home. The concern around research publication, I think, has crystallized into a number of quite sensible proposals. It runs counter to the ethos that you and I are so familiar with in universities, publish or perish, the idea that when you've got new findings, you share them with the world. And so I can understand the discomfort that people have about keeping research silent, but in this case, I think it's strongly in the interest of humanity.

0:47:39.0 SC: Okay. But are there like bills in front of parliament or so forth, or how advanced is this effort to think this through?

0:47:46.9 AL: No, it's still in the realm of concrete policy proposals, but I certainly hope that it'll come in... It'll crystallize into clear codes of practice and legislation in the coming years.

0:48:00.8 SC: Okay, and then just to wrap up the pandemics, there's still the naturally occurring pandemic that we have to worry about. I'm a little depressed as I think a lot of people are at how badly we, as a species, have responded to this particular pandemic. Do you think that we'll be better next time, are there obvious, right things to do, or do you think that, going forward, pandemics are gonna be political footballs, as we say here in the United States?

0:48:27.8 AL: Yeah, I certainly hope that we're gonna get better in terms of making sure that pandemics don't leak out of labs. There's a theory this one leaked out of a lab, I think that's probably wrong, but it's certainly the case that the last person to die of smallpox caught it from a lab leak rather than from naturally occurring smallpox. So we do need to be careful around lab leaks, and making sure that BSL-4 labs have better safeguards around them would make a lot of sense. We should also do more in terms of having detection facilities. One of the ways in which we picked up on COVID early was this program for monitoring emerging diseases, so-called ProMED, in which doctors were just posting findings that they found and ProMED was collating them together. ProMED's global budget is 1 million dollars a year, which is about what it costs to build a suburban playground. The idea that that's the best we can do as a planet seems nuts to me. I'd be increasing ProMED's budget substantially because I think they're one of our best early detection weapons against pandemics, whether they're natural or...

0:49:41.1 SC: And what about things like... We did a pretty good job of developing a vaccine, we did a much less good job of making it widely available, especially worldwide, and we did a terrible job at convincing people to take it, I would argue. Do you see plans, going forward, to do better next time?

0:50:00.9 AL: Yeah. As a politician, I've been thinking a lot about how you get ahead of disinformation, and certainly a lot of the stuff I've read suggests that in some ways you wanna inoculate people against the hoaxes before they come, that once a hoax has taken root in people's minds, it's quite hard to dislodge it. So among Indigenous Australian communities, for example, the hoax has got there, in many cases, before the government, with the idea that the vaccine was being given to those communities first because it was being tested on them. And once that idea had taken root in communities, it's become quite hard to vaccinate Indigenous communities. So while the overall Australian vaccination rate is good, the rate in Indigenous communities is bad, and that's just a microcosm of the overall challenge of dealing with disinformation. We've gotta get out there before the bad actors do, and warn people of the character of the disinformation messages that are to come. Look out for these stories, "This is what people will tell you, and this is why you shouldn't... "

0:51:05.7 SC: And is that something... There's an obvious answer to this question, presumably: we should be doing that before the pandemic hits, right? There should be an ongoing kind of program. I don't even know what it would look like though, to tell people who are naturally skeptical, "Someday there's gonna be a pandemic and we're gonna wanna vaccinate you. Please don't reject it."

0:51:27.4 AL: Yeah, right. Part of it is scientific literacy, part of it also is making sure that we're rigorously testing the anti-disinformation messages. So there's a couple of good papers I've been reading recently, randomized trials, just testing different strategies, because it is one of those areas, Sean, where your gut doesn't tell you very much, where in fact, we now know that repeating the hoax can cause it to sink deeper into people's minds, and that sometimes it's better not to mention the lie at all.

0:52:01.9 SC: Yeah, so this is research in psychology? What would you even call it? What is the research we have to do to understand this better?

0:52:09.0 AL: Yeah, it's sort of a blend. The research that I've seen is a combination of public health and social psychology. And they come up with useful findings, the notion of the fact sandwich comes out of that. So if you have to mention the lie, then start with the truth, mention the lie, mention the truth again, and so at least you're giving a double dose of truth for every time you mention the lie. That fact sandwich has come out of clever randomized trials on messaging. It's one part of dealing with disinformation, but it's gonna be increasingly important if populism continues to maintain the hold it's got in many advanced countries.

0:52:49.3 SC: Well, yeah, I think we will get there, but maybe it's okay to take things out of order, it's not just disinformation, right? There's a motivation, a political motivation on some sides to take the, let's say, the anti-vaccination stance or something equivalent, and that seems like a tougher nut to crack in terms of prevention.

0:53:08.5 AL: Yeah, that's right. If you wanna build a powerful support base then finding an issue such as a conspiracy theory can be really useful. You go back to the way in which the Nazis used conspiracy theories about Jewish people to fuel their rise, conspiracy theories about African-Americans have been used by various American populists, you see in India the use of conspiracy theories about Muslims to fuel the Hindutva movement, which has taken over the ruling BJP. That sort of weaponizing of conspiracy theories to target a small... To spread fear about a small group in the population is a tried and true tactic for populists around the world.

0:54:01.6 SC: Yes. And how do we stop that? [chuckle] To ask you a completely unfair question.

0:54:08.1 AL: Yeah, I think we first of all need to call it out, and calling out racism turns out to be a pretty effective way of combating the political weaponization of racism. Recognizing that the rise of populism does in part have its roots in economic problems, the loss of good middle-class jobs with the hollowing out of manufacturing has been a source that populist anger has tapped into, and recognizing too that you can't fight fire with fire, and that if you're looking at an alternative political approach, then that needs to be the kind of calm stoic approach which characterizes leaders such as Nelson Mandela, rather than a sort of angry approach which doesn't seem to be effective against populism.

0:55:06.4 SC: Well, this is... And this is a big picture message of your book, I think... Correct me if I'm wrong or tweak it as you will, but the real lesson of thinking carefully about these existential risks is not just, "Oh, against risk number one, we do this. Against risk number two, we do that," but rather we really have to think hard about our whole approach to politics and even life to really adapt to the fact that we face existential risk now in a way that maybe we didn't previously in human history.

0:55:41.8 AL: Yeah, that's right. And strengthening democracy is an important way of reducing the hold that populism has gained on our politics. When I look at the US Constitution, I regret that Jefferson's ambition of an update every generation has essentially been dropped for the last two generations, which means that you have a democracy that's not as democratic as it should be. So I've talked about a number of democratic reforms that I think would make sense in the United States context, you know, holding elections on weekends or holidays, reforming the electoral college, encouraging active citizenship, the kind of detailed community engagement rather than simply engaging on social media as a substitute for real political action. But we also need to realize that good politics involves acting with calmness and wisdom rather than trying to beat populists at their own game.

0:56:52.4 SC: That's a great message. But it does... What you just said shifts my attention a little bit because you know a lot more about American politics than I do about Australian politics... Before I forget, can you just say a little bit about what it's like to be in this Anglo-American, English-speaking tradition but be in a different country, in Australia? Do you have to kind of keep up with political and social movements in both where you are and in the US just 'cause the US is so influential?

0:57:24.1 AL: Look, I'm married to an American, I spent four years of my life in the US, I was as interested in American politics as the politics of any other country except my own. So it's a pleasure rather than a duty to follow American politics. But I do see so much in the American democratic experience that suggests that the beacon of democracy that America was two centuries ago is now shining less and less brightly. And I look at the changes that would be involved as essentially part of that responsibility that democracies have to keep on improving their systems, not to sit back and say, "This is ideal, we'll never do anything more." I suppose the most radical proposal I would have is to treat voting as a civic duty, just as Americans are required to fill in the census, just as Americans are required to serve on juries, I think having compulsory voting would significantly improve turnout and make turnout more representative of the population as a whole.

0:58:40.8 SC: And am I right, that you have that in Australia or something like it?

0:58:44.5 AL: Yeah, we do, and we don't have 100% turnout, but we have a much higher turnout than most advanced countries do, largely because... Not because people are fearful of being fined, but simply because the fine, which is less than one hour's average wage, is something which spurs a sense of civic duty and makes people feel, "It's election day. I will go and vote," rather than, "Will I or won't I go to the polls today?"

0:59:14.5 SC: But this goes back to this idea that we talked about very briefly before, that politicians have become very good at weaponizing potentially controversial issues. So even if one political party in the US got behind that, there's no chance the other one would. So I'm very skeptical that things like that are gonna actually happen.

0:59:33.9 AL: Right, right. And so this requires leaders who are willing to make decisions which are in the interests of the polity as a whole rather than simply in the interest of their narrow political party. We've got past examples of that, leaderships of both parties that have chosen to make decisions in the national interest. But like you, I worry that we're seeing less and less of it, and the increasingly tight hold that Donald Trump now exerts over the Republican Party is a real concern, as is the fact that almost three quarters of Republican voters believe that Donald Trump won the 2020 election. I think while you're in that environment, it's quite hard to get the requisite changes that ensure that America's democratic ideals are realized.

1:00:28.8 SC: Well, I agree, and I also perceive that the diminution of a devotion to democracy does occur on both sides of the political spectrum. Even if it's one side's fault, I'm not trying to sort of both-sides it and say everyone is at fault, etcetera, but both sides here in the United States have lost any motivation for working with the other side. And I don't care whose fault it is, that's a bad situation for a democracy to be in.

1:01:01.0 AL: Yeah, yeah, I agree. And one of the most influential books to me in recent years has been Eitan Hersh's book, Politics is for Power, which makes the case that increasingly, Americans are treating politics more like they treat their local sporting teams, cheer and jeer from the sidelines, but don't think that you can actually affect the game, rather than actually going out on the field and trying to make a difference. And Eitan talks about the importance of getting involved in your local community, and trying to think of yourself very much as a player in the political spectrum, and that also means that at a local level, because party labels are much less salient, you're much more likely to be working with people rather than just shouting at them.

1:01:51.2 SC: Interesting. I'll have to check that out. That's a good recommendation. But okay, this is probably the most important thing we're talking about, but I don't wanna forget that we're going down a list of possible existential risks 'cause they're all different. We've mentioned climate change a lot, but what is your take on the current amount of progress we're making? We keep trying to have international agreements, and it's hard to make them happen and then people violate them, it seems to be kind of a recipe for cynicism a little bit.

1:02:18.9 AL: Yeah, I'm certainly concerned that what came out of Glasgow is inadequate for what the planet needs. If you look at the Climate Action Tracker, which looks at the number of nations that have implemented climate policies consistent with a two-degree warming target, it finds only a handful have done so. Even the European Union, it says, is only coming close. Many countries are insufficient or critically insufficient, including yours and mine, so we do need to do an awful lot more. And what's striking about it is that a lot of it involves installing energy sources which have zero marginal cost, so ultimately a lot of this will pay for itself in cheaper energy.

1:03:02.6 SC: Yeah, it seems like there is good news on the technology front. I mean, solar and other renewables have gotten cheaper a lot faster than people thought they would.

1:03:11.1 AL: Absolutely, and so the gains from installing there are substantial. And what's important too is then to think about this not just at a national level, but also at a community level. So if you've got a coal-fired power station which is slated for closure, then you can take advantage of the electricity connections coming into that plant, using it as a site for a wind farm or a solar farm, and have the construction jobs that go along with that at the same time, as well as some maintenance jobs in the future.

1:03:45.6 SC: I like that. That's a very clever idea. And then, of course, the final risk I wanna dwell on is the AI risk, which you've mentioned already. This is the one that you said you hadn't really been conscious of when you came into your job, but you've read up on it and now you're kind of worried?

1:04:02.7 AL: Yeah, absolutely. And again, we've got two questions: will it happen, and how bad will it be if it happens? There are differences among artificial intelligence researchers about the point in time at which artificial intelligence will exceed human ability. The median guess in one recent survey was 2061, but almost no one working in the area says it's impossible, that computers will never outperform humans in the sorts of tasks we envisage, whether that's writing a best-selling book or driving a truck or solving mathematical problems. So then once computers go past us, what happens? Well, presumably, they accelerate past us at a pretty rapid rate.

1:04:49.3 AL: So you look at the performance of chess computers. If you put a chess computer up against a human, the chess computer now wins 99% of the time, and in Go, the probability the machine wins is 99.99995%. So this is essentially saying that computers playing these games are now as likely to beat us as the world heavyweight boxing champion would be to beat me in a boxing match. So once they accelerate past us, what do we know about their values? Well, we hope that they share our values, but I don't think that's locked in by any means. And the possibility that superintelligent machines have a set of values that are either antithetical or, more likely, just orthogonal to ours is a real one. And we need to be very careful, as we develop these computers, that they don't somehow damage our prospects as a species quite substantially.

1:06:00.8 SC: I guess... I have some skepticism about the rate of progress of AI at truly human, general-intelligence kinds of tasks, which I think are very different from Go or chess, but for the worries that you have, it doesn't have to be. And it doesn't have to be human-like intelligence; as long as we are ceding power in some way to these algorithms, we could get in trouble. So my question is, what exactly would be the scenario that we're worried about? So imagine that AI becomes very, very smart. What is it going to do that will harm us, and how specific can we be about that?

1:06:39.0 AL: Short answer is, we don't know, but you can take the Nick Bostrom example of a supercomputer which decides that it wants to build as many paper clips as possible. It doesn't wanna hurt us, but our buildings and our cars turn out to be good raw materials for paper clip building, and so it massively destroys humanity's prospects as a result. Or you can imagine that we try and encode our values, but we do so the wrong way. So we say to the computer, "We want you to maximize human happiness," and it puts our brains in vats, feeding us drugs to maximize our pleasure centres. We say that we want it to find a cure for cancer, so it increases the rate of cancer, so that it can improve detection.

1:07:28.9 AL: These sorts of problems, called perverse instantiation or the King Midas problem, do trouble artificial intelligence researchers, and they suggest that we want to think about building computers which have three qualities: that they're observant, humble and altruistic. So we're not locking in a particular moral code, but we're asking these supercomputers to watch us, to act in our interests, and to recognize that there might be a lot of complexities about human society that they want to be learning from in order to help us. That all sounds great in principle; the trouble is, if you've got an AI race, particularly one that's being conducted through the lens of global competition between superpowers, then you might end up with the first superintelligence not sharing our values.

1:08:26.3 SC: I guess I have a lot of questions about this, and I know this is not exactly your expertise, but let me ask the dumbest, most naive question I have; probably the answers to this one are pretty easy. If I have an algorithm running on my computer that just goes bad, breaks bad and starts doing bad things, I can unplug my computer. So I think that the scenarios that we're envisioning here are imagining not just an AI going bad, but some kind of embodied AI that is almost human-like in its capacities and so forth. And what I worry is that we're being too anthropocentric: we're imagining AIs that are kind of human-like, but the real danger will come from AIs that have very different capacities that we haven't really thought about enough.

1:09:14.9 AL: Yeah, so the reason we can't just switch it off at the wall, I think, is because if it's smarter than us, then it'll wanna self-improve, acquire resources and resist being shut down, which means it'll do everything it can to try and avert a situation in which you just turn it off. And there's a lovely analogy for this, Sean: there's an artificial intelligence agent which was designed to maximize its score in Tetris, that game where the bricks drop down.

1:09:41.8 SC: Yep, love it.

1:09:42.9 AL: Now, as you know, Tetris can't be won, because, ultimately, the last brick comes into place. And so this agent had a strategy which involved getting to the last moment and then pausing the game. And many people have noticed that the behavior of that artificial intelligence agent is not that different from what you would envisage from a superintelligence which was resisting being turned off. So we've seen it already in the lab to some extent, and defying shutdown is going to be one of the things that a superintelligence puts a lot of resources into achieving.

1:10:20.9 SC: Okay, I like that example very, very much, but I do think that it actually highlights one of the distinctions here, because I can imagine an AI that is way more intelligent than I am at almost everything, much better at not only chess and Go, but symphonies and fiction writing, but has no self-preservation instincts at all. It seems like the angle here is that we should worry about giving AI self-preservation instincts.

1:10:50.2 AL: I'm not sure that I can envisage an AI which has any substantial desire but doesn't care about being turned off. I would have thought pretty much any desire you begin with ought to then effectively encode a desire not to be turned off. Ultimately, this comes down to whether Asimov's third law is necessary or unnecessary. His laws are: "Don't injure humans, obey orders and protect yourself." Some people say "protect yourself" doesn't need to be in there because it's effectively encoded in the other two.

1:11:33.4 SC: Right. Yeah, but yeah. Okay, I do... Again, these are just vague worries, I haven't thought about this in any systematic way, but all of our experience is with biological organisms, which grew up through evolution rather than being designed, so for us, intelligence and self-preservation instincts just go hand in hand, it's very natural, but they needn't in the AI context. That's my vague worry, but I don't wanna dwell on it. What I do wanna dwell on, just to wrap things up with some final thoughts, is the global strategy, or the human-scale strategy here. I mean, one thing that I was interested to see you poo-pooed as a strategy is the Elon Musk idea of backing up the biosphere: if we spread human beings out to other planets, then blowing up ours wouldn't wipe out humanity. And you didn't seem that much in favour of that one, or at least you weren't that impressed with that suggestion.

1:12:30.5 AL: Right. Largely it's because I'm a cost-benefit guy, and when I look at the cost of that strategy, it seems to be massive relative to the cost of strategies such as better coordinating AI races between existing teams. There's also the massive loss that would come from the destruction of planet Earth and all that we've built here. So I think we could do better by getting clear global guidelines on ethical AI. Interestingly, there was a proposal in 2018 from Justin Trudeau and Emmanuel Macron for an international panel on artificial intelligence, and they had the Intergovernmental Panel on Climate Change as a model, but the Trump administration didn't support it, in part because they thought it would impede the development of artificial intelligence. To which I'd respond: well, it's only impeding the development of bad artificial intelligence; let's get the guard rails in place before we build the highway. If you build the superintelligence first and then try and think about its ethical rules... you could find that you've left it too late.

1:13:42.7 SC: Okay, that does make sense, I like the cost-benefit angle there. But speaking of which, the final thing is this kind of vague, utopian, but nevertheless attractive idea you have of thinking differently about these kinds of questions. Your book ends by talking about wisdom and stoicism, which is not how most politicians' books end... Actually, maybe they do, I don't really know, I don't read many books by politicians these days. But how do we make people wiser and more stoic? That sounds like a big global project, and I wouldn't even know how to start doing that.

1:14:22.4 AL: Yeah, I'm not sure that I have a good strategy for popularizing stoicism. I suspect Ryan Holiday is probably better on that than me, but it is the philosophical approach that, in my view, is the right response to populism: the idea that we need the values of courage, prudence, justice and moderation, that we should be rewarding people who are being bold in service of truth, that we should be celebrating a love of wisdom, that we should be encouraging fairness in the treatment of people, and also that living a calm and disciplined life rather than a shouty or chaotic existence is to be celebrated. And it's not as though we haven't seen examples of this rising to the top: you look at Marcus Aurelius and the life of Epictetus, not to mention the success of someone like Nelson Mandela, not only on Robben Island but in leading his own country. Gandhi, as well. It's a tradition with quite a rich lineage, and one that I believe is the right strategy in an age in which we wanna ensure not only that the population survives, but that millions of future generations come after us.

1:16:00.4 SC: Okay, but how do we do it? I'm totally in favour of this. What I worry is that you list some virtues and everyone goes, "Yes, those are great virtues," but then you operationalize them and people go, "Oh no, I don't mean that. I didn't mean that we were gonna let immigrants in or whatever it is." How do you make that connection between values that we're all willing to get behind and acting them out in the way that we're hoping people do?

1:16:28.5 AL: Well, for me as a politician, it's about resisting the urge to go for the jugular and to make personal attacks that aren't necessary, as well as recognizing that when the temperature is turned up, it's generally not going to advantage those who care about the long term. So all of these things can be pretty tempting, and there are certainly plenty of those who say that left-wing populism is the answer to right-wing populism. So there's a sense in which this is a kind of personal project for any politician who wants to make a difference in reducing catastrophic risk, whether they're on the left or the right, and you can identify plenty of those who've adhered to stoic values on the right over the years. But it's also something where you'd envisage celebrating a different kind of media engagement, for example. The shouting-heads style of media engagement should be looked on as a spectacle of amusement rather than as a serious way of engaging in politics.

1:17:41.3 SC: Yeah, and maybe at the end of the day, the best thing we can do is be exemplars of these virtues that we want other people to have, I guess.

1:17:48.1 AL: Precisely.

1:17:48.2 SC: Alright, that's something to aim for. I like it. I like leaving the podcast with a goal or something for people to think about and try to get better at. So Andrew Leigh, thanks so much for being on the Mindscape podcast.

1:18:00.6 AL: It's been a treat, Sean. Thank you.

[music]

10 thoughts on “187 | Andrew Leigh on the Politics of Looming Disasters”

  1. USAID is underwriting a program that will enhance the ability of bad actors to create and deploy novel pandemic viruses. The program is known as Discovery & Exploration of Emerging Pathogens – Viral Zoonoses (DEEP VZN)(https://www.usaid.gov/news-information/press-releases/oct-5-2021-usaid-announces-new-125-million-project-detect-unknown-viruses) and has a $125 million budget. This well-intentioned program publishes not only the DNA/RNA sequences but also ranks the potential danger of the viruses, making it easy for bad actors to find them. And it provides no significant improvement for developing vaccines against these threats. Kevin Esvelt at MIT has alerted us to the problem and Sam Harris has spread the word on his Making Sense podcast. Modifying this program to prevent it from becoming a valuable source for bioterrorists is still possible. Anyone in a position of influence in the government, and particularly in USAID, would be useful to enlist in the effort of fixing this well-intentioned program before it inadvertently does incalculable damage.

  2. Odd seeing an Australian politician on Mindscape.

    Sean, sincere congrats on your new gig. Couldn’t think of a better combination. Very pleased for you.

  3. Congratulations on the new position at Hopkins! Sounds like a perfect fit for both parties; a real win-win.

  4. Couldn’t go on with this guest.
    He started comparing climate change on Earth to that of Venus, to make his argument for anthropogenic GW. But Venus doesn't have (and never did have) a magnetic field to protect its atmosphere, and so the comparison is ridiculous. Sean, as an astrophysicist, I thought that's the one thing you would have pointed out before he started elaborating.
    The trouble with poor information is that it becomes a hostage to fortune in what is a polarised debate. We need good quality information delivered in a measured way so that everyone listens to it and participates in future action. Instead many people hearing this sort of rhetoric switch off.

  5. Good Luck in your new gig! Love these podcasts.
    What a great podcast that covers the topic of ‘doomsplaining’, which had a high potential to bum me out. This has the right 30,000 meter view that looks at problems to solve, a bit beyond the political, and actually motivates me to practical action.
    As for the AI fears, I cannot see AI as anything but a tool in a radical capitalist competition. What if current AI is used to blunt small-d democracy? We’re already there, used through the most popular platforms. I also see non-disclosure and fencing off the best AI to exclusive proprietary ends. Again, already there.
    Enjoyable podcasts. Especially loved repurposing expiring fossil fuel plants with clean energy. Thanks
    Don’t ever stop casting, I love what you haul in.

  6. Congratulations on your new position. Weird how happy I feel for you here on the other side of the world!!

  7. Andrew Leigh seems like the ideal politician: one who has a basic understanding of politics, science, philosophy, the environment, and a genuine concern for his fellow humans, both now and for generations yet to come. Regarding the plight of future generations, he states: “I share the view of philosopher Will MacAskill, who says that the idea that we’re more valuable than future generations is almost a form of prejudice that seems to be on par with racism and sexism. He calls it presentism, the idea that we’re putting our inherent moral value above those of people who live in the future.” Crudely put, “future generations aren’t going to get me elected, so why should I care about them?”

  8. Best thing about your pod is how you lowkey push back on your guest’s statements; that’s intellectual integrity

  9. Andrew Leigh clearly means well but has a particularly weak understanding of the limits of AI. Artificial General Intelligence (AGI) is nowhere near being achieved and may not be achievable even in principle. And chess- and Go-playing computers are no threat to anyone except the egos of human players. Chess computers don’t care whether they win or lose, and no other computers have any goals or desires or awareness of either. Computers aren’t conscious and don’t want to do anything (and they don’t even know it). Computers can be dangerous because of the way humans can use them for their own selfish, greedy or harmful purposes. They collect our data so others can use it against us or for their own purposes. And they can make lethal drones. That’s the real worry.
    AGI itself is a pipe dream we can leave to future generations to disprove. The fact that Leigh quotes Nick Bostrom’s mad AGI paperclip maker as a theoretical threat shows he hasn’t thought seriously or deeply about AI.

    Leigh’s climate change worries are only slightly better expressed. Sean has the right idea on these points. Climate change can do severe harm but is not an immediate existential threat, and Earth is not going to turn into Venus. There are many unknowns about the progression of climate change. More importantly, Leigh is off base thinking we either should or are going to value unknown future lives that may or may not come into being hundreds of years from now equally with our own. Humans value their own lives first and foremost, not those of future unborn generations.

    If that’s “presentism,” thank goodness that most all of us are “presentists!”

  10. I keyed in on the notion that “artificial intelligence” can be defined as the off-loading of tasks that require aspects of intelligence to a machine. For example, auto-pilot on airplanes – it’s not very intelligent, but it takes over tasks normally executed by an intelligent agent. So it does have intelligence, of a sort. The AI of the Boeing 737 MAX killed 346 people, countermanding the explicit intent of the pilots. We’ll be too late thinking about these issues if we wait for AI to resemble HAL from 2001.

