86 | Martin Rees on Threats to Humanity, Prospects for Posthumanity, and Life in the Universe

Anyone who has read histories of the Cold War, including the Cuban Missile Crisis and the 1983 nuclear false alarm, must be struck by how incredibly close humanity has come to wreaking terrible destruction on itself. Nuclear weapons were the first technology humans created that was truly capable of causing such harm, but the list of potential threats is growing, from artificial pandemics to runaway super-powerful artificial intelligence. In response, today's guest Martin Rees and others founded the Cambridge Centre for the Study of Existential Risk. We talk about what the major risks are, and how we can best reason about very tiny probabilities multiplied by truly awful consequences. In the second part of the episode we start talking about what humanity might become, as well as the prospect of life elsewhere in the universe, and that was so much fun that we just kept going.

Support Mindscape on Patreon.

Lord Martin Rees, Baron Rees of Ludlow, received his Ph.D. in physics from the University of Cambridge. He is currently Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, as well as Astronomer Royal of the United Kingdom. He was formerly Master of Trinity College and President of the Royal Society. Among his many awards are the Heineman Prize for Astrophysics, the Gruber Prize in Cosmology, the Crafoord Prize, the Michael Faraday Prize, the Templeton Prize, the Isaac Newton Medal, the Dirac Medal, and the British Order of Merit. He is a co-founder of the Centre for the Study of Existential Risk.

0:00:01 Sean Carroll: Hello everyone, welcome to The Mindscape podcast. I'm your host, Sean Carroll, and today we're going to have a thought-provoking, if perhaps slightly depressing, episode, or at least one that puts us in the mode of worrying about really profound things. That is the concept of existential risks, or even lesser than that, catastrophic or extreme risks that we face as a species. So we all know that something happened in the 20th century: we gained the technological ability to really do enormous harm to ourselves as a species. There were always times back in history when human beings could harm each other, but these days we can imagine human beings truly wreaking havoc on the whole planet or the whole species. That's what we mean by extreme or catastrophic or existential risks. So today's guest is Martin Rees, that's Lord Rees, Baron Rees of Ludlow. He is officially a Lord in the British hierarchy; he actually sits in the House of Lords and votes and so forth. But Martin is also one of the leading theoretical astrophysicists of our age. He's done enormously good work in high-energy astrophysics, understanding black holes and galaxies and things like that.

0:01:20 SC: But over the last decade or two, he's gained an interest in these big questions of human life and where humanity is going in the future. So he's one of the co-founders of the Centre for the Study of Existential Risk at Cambridge University. And that's mostly what we talk about today. There are a lot of risks out there. Of course, the first to really come on the scene was nuclear war, and that's still out there, and we talk about that one. But now there are all sorts of bio-threats, as well as cyber threats, as well as artificial intelligence, not to mention the sort of natural disasters of asteroids and solar flares and earthquakes that could cause enormous harm. It's a really interesting science problem, but also a philosophical problem, because you're asking how to judge an extremely unlikely event that, if it happens, will be catastrophic for everybody.

0:02:14 SC: So the classic example of this is: what if the Large Hadron Collider created a black hole and destroyed the whole Earth? Unlikely to happen, but do you really wanna risk it? This is a really, really good question. So we get into this, we get into how to deal with all these questions, but then, being who we are, talking about life on Earth eventually turns into life on other planets. And actually in the last half of the discussion, we're talking about life elsewhere in the universe. Again, something that Martin is a world expert in, and we talk about why we haven't seen it yet, what forms it could take, what we can learn about life on Earth by thinking about life on other planets. So we start off with some down-to-earth depressing topics, but it's very optimistic and action-packed by the end. So tune in, you're gonna like this one. Let's go.

[music]

0:03:17 SC: Okay, Martin Rees, welcome to the Mindscape podcast.

0:03:20 Martin Rees: Good to be with you.

0:03:21 SC: So you've done a lot of work obviously on cosmology, high-energy astrophysics and so forth, but a relatively recent interest of yours has been the future of humanity, the risks that we face. And so I thought we could start just with a worked example, and we can think about how to think about this. So as someone who's done particle physics and quantum field theory, my favorite worked example is: what if the Large Hadron Collider created either a black hole or some exotic particle that then ate up the Earth? And I was surprised to learn by reading your books that you had actually thought about this, and you were one of the first people to really wonder about this.

0:04:00 MR: Well, I got the idea from someone else, but I thought it was a good worked example because obviously it's very, very unlikely, but the consequences are so catastrophic that I think it's not stupid to query this. And there were some people who of course did query this, and they were sort of pooh-poohed a bit by the people at CERN. But although we think it's very, very unlikely, I think it does raise a non-trivial issue, because there was a committee set up at CERN to address this, and they said with great confidence that, on the basis of all theories, this wouldn't happen. But obviously, the level of confidence you want is something like a billion or a trillion to one, given the consequences. And the reason I think it's not trivial to worry is: supposing you were up before an American Congressional Committee to address this issue, and you said, "Well, the chances are a billion to one." And then they say to you, "Dr. Carroll, are you really saying it is less than a one in a billion chance that you're wrong?" Would you say yes to that?

0:05:10 SC: Yes.

0:05:10 MR: And so obviously, the main risk is that our theory is completely wrong, or something totally unenvisioned.

0:05:16 SC: A systematic error.

0:05:17 MR: Which could happen when you explore a part of parameter space, that's never been explored by nature itself.

0:05:23 SC: Maybe for the benefit of the audience, could you explain a little bit about what it would be that could go wrong at the LHC that would cause us all to be killed?

0:05:31 MR: Well, I think... Yeah, I'm just quoting the other experts. One idea is a mini black hole; the other idea is turning material into what are called strangelets, which would gradually sort of gobble up the Earth and turn it into some compact structure. And these have been speculated about by particle physicists; no one thinks they're likely. And we're talking about this here because when I have talked about it in the past, there'd be newspaper headlines saying things like "the accelerator will destroy the world," and I don't think that, but I do think it's quite right that one should address it. And I try to think what I would do if I was before the Congressional Committee. I think I would say that the chance is very, very small, but I would then say, "Would you like to ask me what's the chance that something we discover in this accelerator will solve the world's energy problems forever?" And I would say, "That's very, very small." But supposing you thought that was a thousand times less small than the chance of destroying everything, then that might tilt the balance.
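
A minimal sketch of the expected-value comparison Rees is gesturing at, for readers who want the arithmetic spelled out; every number below is an invented placeholder, not a figure from the conversation:

```python
# Weighing a tiny chance of catastrophe against a somewhat-less-tiny chance
# of a huge benefit. All probabilities and magnitudes are invented.
p_catastrophe = 1e-9      # the "billion to one" chance of destroying the world
harm = 1.0                # everything that would be lost (normalized to 1)

p_breakthrough = 1e-6     # a chance "a thousand times less small"
benefit = 0.01            # say the benefit is worth 1% of everything

expected_loss = p_catastrophe * harm        # 1e-9
expected_gain = p_breakthrough * benefit    # 1e-8, ten times the expected loss

# On these made-up numbers the expected gain dominates, which is the sense
# in which the possible upside can "tilt the balance".
print(expected_loss, expected_gain)
```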

0:06:44 SC: That's right. That's fair.

0:06:44 MR: And so that's the way I would explain why, despite uncertainty about my theory, I feel we shouldn't be inhibited from going ahead with these experiments.

0:06:55 SC: Well, okay. That's a very good point. What if that were not true? So, I think this is a good thing because I wanna get on the table what is the theory of thinking about these terrible, terrible things where the chance is very small, but the consequences are disastrous. So you're suggesting that one of the things to think about is other really, really small probabilities that might be good rather than bad.

0:07:16 MR: Yes. That's right, and of course it'd never be possible to quantify, but of course there's a risk in doing anything which may have potential upsides. And we're used to that in the testing of new drugs and everything like that. And so, when we get to these colossal risks, then again the same argument applies, that we might be forgoing benefits. As Freeman Dyson put it in one of his articles, there's a hidden cost of saying no.

0:07:45 SC: That's right. The example I used in my book, "The Particle at the End of the Universe," was, every time you open a jar of pasta sauce, there's a chance that a terrible mutation has happened inside, and you're gonna release a pathogen that destroys millions of people. It's a very, very small chance...

0:08:03 MR: Yes. Yes.

0:08:03 SC: But it never stops us from opening the jar.

0:08:05 MR: No, no, no. But of course, one does have to be cautious. And, if you think of something which could be globally catastrophic, then you get into philosophical questions about just how much worse an existential catastrophe is than a very bad one. And to express this, if you consider two scenarios, one is the death of 90% of the people in the world, scenario two is the death of 100% of the people in the world, and you ask the question, "How much worse is scenario two than scenario one?" You could say, "Well, 10% worse, the body count's 10% higher." But some people would say it's much, much, much worse, because there you destroy the potentiality of all future development and future lives. And it's fortunate that, if we think of all these scenarios that have been discussed, apart from these rather crazy ones from particle physics, it's very hard to imagine anything that would wipe out all human beings.
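
To make the arithmetic behind those two views explicit, here is a hedged sketch; the future-lives figure is a pure placeholder, not anything stated in the conversation:

```python
# Body-count accounting vs. future-potential accounting for the two scenarios.
population = 8e9

deaths_90 = 0.90 * population    # scenario one
deaths_100 = 1.00 * population   # scenario two
print(deaths_100 / deaths_90)    # ~1.11: roughly 10% worse by body count

# The alternative view: extinction also forecloses everyone who would ever
# have lived. 1e16 is a placeholder; such estimates vary over many orders
# of magnitude.
future_lives = 1e16
print((deaths_100 + future_lives) / deaths_90)   # ~1.4 million times worse
```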

0:09:04 SC: That's right, but... You were very careful in your formulation there saying some people would say that 100% is much, much worse than 90%. What is your feeling personally?

0:09:14 MR: Yes. Well I do think that's the case, because one thing which we learn as astronomers is that we humans, are not the culmination of evolution. It's taken 4 billion years for us to emerge from the primordial slime, as it were, but the time ahead, even for the Earth, is 6 billion years.

0:09:35 SC: Yeah.

0:09:36 MR: And the universe has a much longer future, and I'd like to quote Woody Allen: "Eternity is very long, especially towards the end." So there's no reason to think that we're even a half-way stage in the emergence of complexity. And of course, if intelligence is rare, and as some people think we are the only intelligent species in the galaxy, then our fate as humans is of galactic importance, not just terrestrial importance. Because, if we wipe ourselves out, that would destroy these potentialities. So, I think it is much worse to envisage wiping out everyone than having a catastrophe which is a setback to civilization.

0:10:18 SC: Yeah, there are definitely implications of these ideas for questions about life elsewhere, and I do want to talk about that. But let's stay down here on Earth for just a minute. So, I think what you're getting at, and something that I thought of as I was reading your books, is that we talk about existential risks or these terrible disasters, but there really is a hierarchy of them, right? There's the one in which a million people die, which is incredibly, terribly bad. One in which most human beings die, one in which all human beings die, one in which all life dies here on Earth. Right? And I think it's safe to say we can contemplate all of these. There are scenarios for any one of these to actually happen.

0:10:57 MR: Exceedingly rare, but they're all scenarios, yes.

0:11:00 SC: Right. That's right.

0:11:00 MR: And I think, incidentally, in my book, which is called On the Future, I do discuss some of these scenarios. And one point I make is that, if we think of the intermediate-level scenarios, then they will all cascade globally in a way that didn't happen previously. You may know the book by Jared Diamond called Collapse, where he talks about the way, I think, five different civilizations collapsed, but in none of those cases did the collapse spread globally. Whereas now, I think any really severe setback in any part of the world will have consequences, because we are so interconnected by telecommunications, supply chains, air travel, and everything else, so things would spread globally. And that's something new. And another thing, which worries me very much, is that I think society is much more fragile than in the past.

0:11:56 MR: To give an example, if you think back to the Middle Ages, when the Black Death, bubonic plague, killed half the population of many towns, the rest went on fatalistically. Whereas now, if we had some pandemic, once the number of cases overwhelmed hospitals there'd be a breakdown in social order, as people clamored for treatment that wasn't available. And, likewise, we're so dependent on electricity that, if there were a massive cyber attack on, say, the East Coast of the United States, then there'd be complete anarchy within a few days. Nothing would work. And, indeed, that is realized, because I quote in my book a 2012 report from the American Department of Defense which talks about that scenario and says that, if it were instigated by a hostile state, it would merit a nuclear response.

0:12:55 SC: That would be helpful. [chuckle]

0:12:57 MR: Yeah.

0:13:00 SC: It's clear to me that this is gonna be somewhat of a downer of a conversation. I should let people know that you're by nature a very optimistic person actually, right? I mean, you've a lot of optimism about technology.

0:13:08 MR: Cheerful despite Brexit.

[laughter]

0:13:09 SC: That's good, okay. Yeah, so you mentioned the interconnectedness and how that allows... Potentially would allow disasters to spread and propagate, but at the same time there's the related fact that technology has given us a kind of leverage over our future, for good and for bad, that we didn't have a hundred years ago, right? The 20th century introduced for the first time, I would say, the possibility that we could extinguish ourselves.

0:13:34 MR: Yes. Well, nuclear, initially of course. Yeah.

0:13:37 SC: That's right. So it's a different kind of question that we really haven't been trained to think about. To use the number billion to one, let's say, as the probability of wiping out the Earth: maybe you got that number by taking one over the population of the Earth.

[laughter]

0:13:54 SC: But is that the right calculus? Is there any quantitative way of thinking about how we should measure existential risk and how we should respond to it?

0:14:04 MR: Well, I don't think there is. I mean, when I'm in a lecture room, people say, do you worry about asteroids, etcetera. And of course I don't worry much, because they're the one risk you can quantify pretty accurately; we now know how often they arrive, how big they are, what the impacts would be, etcetera. But the point about asteroids is that the probability of impact is no higher now than in prehistoric times.

0:14:28 SC: Not our fault.

0:14:28 MR: And the concerns that we should have in the forefront of our minds are those which are emergent and which are probably increasing. Nuclear war was the first one of this kind; misuse of bio and cyber tech are new concerns. And I do think that there's not enough attention given to these. There's a complacency that if something has never happened, you don't worry about it, but if it's an event which is so serious that even one occurrence is too often, we ought to think about this. And I've been involved in setting up a centre here in Cambridge to address these extreme risks; the argument being that there's a huge amount of effort going into moderate risks like carcinogens in food, making trains and planes safer and all that, but these improbable events, which are perhaps growing in probability as time goes on, can lead us into complacency. We really ought to think about how we can minimize these risks and what we would do if they happened, and I think if we in such a centre can reduce the probability of any of these things by one part in a thousand, then the stakes are so high that we'd more than have earned our keep.
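
The "earned our keep" argument is just an expected-value multiplication; a minimal sketch with invented numbers:

```python
# Value of shaving a tiny amount off the probability of a vast loss.
# Both numbers are invented placeholders for illustration.
value_at_stake = 1e15     # a stand-in for the value of what's threatened
delta_p = 1e-3            # "one part in a thousand" off the disaster probability

expected_benefit = delta_p * value_at_stake
print(expected_benefit)   # 1e12: huge next to the running costs of a centre
```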

0:15:44 SC: That's right. I'm morbidly fascinated by solar flares which are certainly a natural phenomenon that has happened many times, but I met a lawyer who had worked on a commission where they thought about this and he's very concerned about the idea that once every thousand years, there's a solar flare that would be big enough to wipe out the entire electrical grid here on Earth. So it's a combination of a naturally occurring thing and a vulnerability that we have invented for it, but again the time scales are things that we are not adapted to dealing with.

0:16:17 MR: No, that's right, but that's an example of something where... Solar flares can affect satellites, for instance; they could be hardened to minimize that impact, and it's worth doing. But you're right in saying that there are some natural phenomena whose consequences are greater now because we depend on electricity more. And of course the consequences of an earthquake in Tokyo are greater now than they would have been 300 years ago, because of the number of people involved and...

0:16:44 SC: Well, and in Los Angeles where I live.

0:16:46 MR: Of course. Yes, yes.

0:16:47 SC: The buildings won't fall down because the buildings are now cleverly constructed, but we will not have electricity or water for a long time if the earthquake hits in the right place and that's the real worry. You're safe here in Cambridge, right? No earthquakes. [chuckle]

0:17:00 MR: That's right. There was a reported one of magnitude 2.8 in England a little while ago, and it's reported that a budgerigar fell off its perch, but nothing much worse than that happened.

0:17:10 SC: It would not even be reported where I'm from. But there's also a philosophy question here that I thought you highlighted in the book even when you use phrases like a billion to one chance.

0:17:23 MR: Yes.

0:17:23 SC: What does that mean? Because it's not like you've done it a billion times and seen it happen once.

0:17:28 MR: Yeah, yeah.

0:17:29 SC: So how do we interpret phrases like that, that kind of probability?

0:17:32 MR: Well, of course there are some contexts where it is meaningful. If you have something like a roulette wheel being turned, or something like that, then you can quantify it. But of course, as you say, we can't quantify an event's probability if it hasn't yet happened, and that's the reason why we may tend to underestimate it.

0:17:56 SC: Well, I tend to be a subjectivist about probability. I don't know if you have strong feelings about the philosophy of probability, but as a good Bayesian, I think that all probabilities are more or less on the same footing. They're related to our credence that something is going to happen, but it's just harder to accurately put them down.

0:18:11 MR: Well, that's not true of coin tossing or of things like that. There are some where probabilistic estimates do make sense and have some basis.

0:18:19 SC: Well, see, if you're an eternalist, if you think the past, present, and future are equally real, then the outcome of a coin toss is just your ignorance about the future. You can obviously give a sensible justification for having a certain credence, but I don't think it's philosophically different. But it does raise the question... The famous question in science is, what are the error bars on your error bars? Right?

0:18:42 MR: Of course.

0:18:43 SC: When you say, well, it's a billion to one, could it be just a million to one, something like that? Is this the kind of thing that your center worries about?

0:18:50 MR: Well, I think it could be, because the probability that our theories are wrong is of course not all that small.

0:18:57 SC: Right, right, and always hard to quantify.

0:19:00 MR: And that's why utter surprises can't be ruled out and of course they can't be quantified.
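
To make the structure of that worry concrete: the quoted odds are conditional on the theory being right, so the unconditional risk is dominated by the chance that the theory is wrong. A minimal sketch, with invented numbers:

```python
# "Error bars on your error bars": total risk decomposed over whether the
# theory is right. All numbers are invented for illustration.
p_disaster_given_theory_right = 1e-12   # what the safety analysis would quote
p_theory_wrong = 1e-4                   # chance the whole framework is off
p_disaster_given_theory_wrong = 1e-3    # rough guess once all bets are off

p_disaster = ((1 - p_theory_wrong) * p_disaster_given_theory_right
              + p_theory_wrong * p_disaster_given_theory_wrong)
print(p_disaster)   # ~1e-7: the "theory wrong" term swamps the quoted odds
```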

0:19:05 SC: Yeah.

0:19:06 MR: And that is, I think, the reason why we can't completely dismiss the concerns about doing something for the first time which has never happened in nature. Of course, when we talk about these big experiments, in many cases, and indeed I've been writing papers about this, you can say that nature has done the experiment already. Cosmic rays of high energy have collided and nothing disastrous has happened, and the fact that we can see stars, white dwarfs that have not turned into strangelets, tells us something. But if it is a potential effect that we could create artificially, which we don't think ever happened in nature, then we perhaps should be open-minded.

0:19:48 SC: Yeah.

0:19:48 MR: But we shouldn't take this too far. I don't think there's anything in nature which is at a temperature of a millidegree Kelvin, but we didn't have huge worries when we first cooled something down to a very low temperature like that. I suppose because we had very great confidence in the physics.

0:20:06 SC: Yeah, it does have to do with... It's a theoretical, I don't wanna say bias, but we have to proceed on the basis of thinking that our theories capture some truth, right? Capture some element of reality when we make these judgments; otherwise, like you say, we couldn't do anything, 'cause anything would bear some worries in some sense. But I like this example of the Large Hadron Collider and how we can ask whether nature has done it already, because it gives a little bit of an insight to people who are not in this field that we're not flying completely blind. There is a process through which people do think about how you might rule out the likelihood of these existential problems. And you make the very good point in your book, which I had not really thought of before, that we have an obligation to make that reasoning public, to inform people about why we think this. You make it clear that we can't just say, let's trust the scientists on this. We need to spread our reasoning widely.

0:21:09 MR: Well, I think especially when the scientists want to do the experiment and have an interest in it. So, I think in that sort of case, it's good to have some independent group of people who can reassure themselves, without having any self-interest in the experiment, I'd say. And although... I don't want to make a big deal of the accelerator; I don't worry about it at all, to be honest.

0:21:31 SC: Me neither out there you know.

0:21:32 MR: No, no. But I think in the case of bio-experiments, this is a real issue. To take one example: there were two groups, in 2011 I think it was, one in Wisconsin, one in Holland, which showed it was surprisingly easy to make the flu virus more virulent, more transmissible. And these so-called gain-of-function experiments were denied US Federal funding for a number of years because they were thought risky. Because, first of all, this is dangerous knowledge, which perhaps you don't want to publish, that's one question, but there's also the risk of error, of releasing something. And it's now possible to synthesize the smallpox virus and things like that, and we do have to worry about whether there are some experiments where the risk of something going wrong is such that you shouldn't even do the experiment.

0:22:23 SC: Right.

0:22:23 MR: And these are contexts where of course there are regulations, and you don't just allow experimenters to do anything they like, because we are concerned about prudential constraints and ethical constraints on biological experiments. And there is a regime of regulation, as you know, for the use of modern techniques in genetics, etcetera. But of course what worries me, and this is really a theme of my book, is that whatever regulations one establishes, even if they're internationally agreed, can't actually be enforced. Regulations on nuclear activities can be enforced because they require special-purpose, conspicuous facilities, and so the IAEA can monitor these things. But when we think about biological experiments, they can be done in a lab with the facilities that exist in many industries and many universities. And of course cyber attacks of pretty damaging consequence can be carried out just by a lone weirdo, etcetera. And we can't really stop experiments like that, or unethical experiments on human embryos and things like that, being done. And my concern is that whatever can be done will be done somewhere by someone, whatever the regulations say, and enforcing those regulations globally would be as hopeless as enforcing the drug laws globally or the tax laws globally. And so, that's what scares me. And if you ask me what are the concerns I have in the next 10 years, not being too futuristic, then it is the misuse, by error or by design, of biotech or cyber tech.

0:24:07 SC: Yeah, I think that that's exactly right, and it raises a couple of questions. So like you say, we have regulatory standards in place. On the other hand, they seem to be easy to circumvent by a bad actor. So is that an argument for actually letting controlled research go on? So I mean you can't just keep knowledge in the box, someone's gonna get it. It might as well be ours as well as someone else's.

0:24:34 MR: It's very difficult to say that we shouldn't do research. You can obviously control the rate by controlling the funding, but you can't really stop the research. Although obviously if the research is intrinsically dangerous or intrinsically unethical, then we should try and ban it. But as I say, it's very hard to enforce these bans globally. And in the context of bio and cyber, I think there is going to be a growing tension between three things we want to preserve; liberty, privacy and security, and of course this is an issue that comes up when you talk about regulating the internet and all that. And I think if you want to have global regulations you run into the problem that different parts of the world would adjust the balance differently. I think the Chinese would care less about privacy and more about security than the United States would and Europe somewhere between. But these are very serious issues which need to be addressed.

0:25:36 SC: What is the state of international cooperation on issues like this?

0:25:40 MR: Well, obviously, in health there is a World Health Organisation etcetera, but there's nothing yet established in the internet domain. And the problem there of course, is that the companies are global and...

0:25:53 SC: And they have vested interest in doing certain things that other people might not want to be done.

0:25:58 MR: That's right. So there are serious problems of governance of the internet, which are being addressed, but I think enforcing any rules is going to be pretty hard unless you have really firm controls on the internet, which is quite contrary to the initial spirit and hopes of Tim Berners-Lee and the other pioneers, who wanted it to be free.

0:26:18 SC: But is it even realistic? Could we do it, could we really regulate the internet that easily? I'm not an expert in this...

0:26:25 MR: Well, I'm not an expert either, but I think one can have sort of censorship of content to some extent. But as regards the hardware and minimizing the risk of hacking, I don't know what can be done, and these are serious issues. And they're gonna get more serious. And I think one of the most depressing things I find about the way technology is developing is that the fraction of effort that goes into security, and all the things that wouldn't be needed if we were all honest, is getting larger and larger.

0:26:56 SC: Yeah, and it feeds back into this idea that the time scale, this is sort of a mismatch of time scales. There's great promise in doing research in biology, or medicine or building technological infrastructure and there's also great dangers but there's this worry that we race towards the promise and the rewards first, and then try to fix up security later, right?

0:27:19 MR: Yeah, yeah.

0:27:20 SC: Is there some sort of global philosophical shift we might try to work toward to be more secure in our technological advancements?

0:27:30 MR: Well, I think we should all campaign for this, but we know how hard it is to get international agreements on anything, climate control and all that sort of thing.

0:27:41 SC: Yeah and we're sitting here in the United Kingdom, which has just decided to leave the rest of Europe, you know, it's hard to keep international organizations together and we're also sitting here just a few days after it was announced that Jeff Bezos's phone was hacked by Saudi Arabia. So, even the best protected people in the world could be very vulnerable.

0:28:01 MR: That's right.

0:28:01 SC: So what about... I mean, maybe it'd be useful to get on the table a list of your favorite existential risks, in some sense. [chuckle] Obviously, we've mentioned nuclear weapons, bio... Or let me ask you to be a little bit more specific about your top 10 dangers.

0:28:17 MR: I think I'd want to avoid the word existential because the idea of wiping ourselves out is something which is relevant only to these rather science fiction-like scenarios.

0:28:27 SC: Isn't the name of your center the Centre for the Study of Existential Risk?

0:28:30 MR: It is, and I think extreme risks or catastrophic risks would be a more appropriate representation of what we actually do.

0:28:37 SC: Good.

0:28:37 MR: And as I say, I think these are the downsides of exciting new technologies, bio and cyber being the most prominent. And of course, another class of concerns are those which are emerging from our collective impact on the planet. You know, climate change is one, and the associated loss of biodiversity, etcetera. These are a class of long-term threats which everyone is aware of, which are because of our collective actions rather than a few bad actors as in the other cases. And again, it's very hard to get effective action, because politicians tend to focus naturally on the immediate and the parochial. They think up to the next election and they think about their own constituents, etcetera.

0:29:30 MR: And it's very, very hard to get prioritization of the action which is needed to minimize these global risks, where the community, the global community, has to act collectively, and we're seeing this in the context of climate change and attempts to reduce CO2 emissions. Let me say two things about this. One thing which I feel quite strongly about in the UK context is that we have set a target of cutting our net emissions to zero by the year 2050. This is a very challenging target. It will need some new technology. And I support the idea that we should have a very strong program to develop clean energy, better batteries and all that sort of thing. But the reason I support this is not just that, if we succeed, we will reduce our emissions to zero; we're only 1% of the world's emissions, so on its own that's neither here nor there. It's that I think we in Britain can claim to have produced more than 10% of the world's clever ideas up till now.

0:30:35 MR: And certainly a disproportionate number. And so if we do have a crash program that leads to cheaper energy storage, etcetera, so that India, for instance, can leapfrog directly to clean energy when it needs a grid and not build coal-fired power stations, we will thereby do far more than 1% towards reducing world CO2 emissions. And it would be hard to imagine a more inspiring goal for young engineers than to provide clean energy for the developing world. And so that's why I think there's a strong case for enhancing R&D there, and I would say the same thing about research into plant science and bio, because feeding 9 billion people by mid-century is another challenge, and this requires intensive agriculture if one is to do it without encroaching on natural forests, etcetera.

0:31:30 MR: And here again, it's going to need new technology, to which the scientifically advanced countries, of which we are one, can make a disproportionate contribution. So I would say that if we prioritize research into those areas, things to do with clean energy and storage, and into improved plant science, etcetera, then we not only help ourselves but help the world. So that's one thing. That's a digression, but thinking about the politics: I know people who've been scientific advisors to government, and they normally get rather frustrated, because the politicians have their urgent agenda, etcetera, and it's very hard to get them to prioritize something which is long term.

0:32:15 SC: Scientists are thinking more long term, yes.

0:32:17 MR: Yeah, yeah, yes, okay. But that's why I think that scientists can have more effect if they go public and become small-time Carl Sagans, as it were, because then they do have an influence. And politicians will respond if they think the voters are behind them. And so I think what the scientists can do is to publicize the issues in a way that the public responds to, because then the public will influence the politicians, and the politicians will feel that they can take these actions without losing votes. And I'm going to give two examples. One example, which I found rather surprising, is the Pope, because in 2014 there was a big scientific conference at the Vatican which had all the world experts in climate, and people like Jeffrey Sachs and Joe Stiglitz, etcetera, which discussed these environmental and climatic threats to the world.

0:33:14 MR: And that was input into the Pope's encyclical in 2015. And that encyclical got him a standing ovation at the UN, and of course had an influence on his billion followers in Latin America, Africa and East Asia, and helped to ease the path to consensus at the Paris conference in December 2015, because the people in those countries knew that lots of people would support this. And as a more parochial version, in this country one of our less enlightened politicians proposed legislation to limit the production of non-reusable plastics, drinking straws and things of that kind. And he only did that because we'd had, a year or two ago, the Blue Planet II programmes presented by David Attenborough, which showed the effect of plastics in the ocean, and especially an iconic picture of an albatross returning to its nest and coughing up for its young not the longed-for nourishment, but some bits of plastic.

0:34:14 MR: And that's become an iconic image, rather like the polar bear and the melting ice floe for the climate campaigns. And because millions of people saw that, there was public support for the idea of cutting down the use of plastics, and that has certainly become quite an effective campaign in this country. So if the public cares about something, then politicians will respond, even if it's long term and part of a global campaign, and that's why we should encourage and value the scientists who are able to get through to the wide public, because that's more effective than being an in-house science advisor to government or politicians.

0:34:57 SC: Yeah. No, that's a very interesting point. You're saying that if a scientist wants to have a real impact on public policy, then the public should be as much the target as the politicians?

0:35:06 MR: Yes. Yes. Yes, because politicians respond to what's in the inbox and what's in the press and what they think their voters want, obviously, and so that can be influenced by charismatic public figures with a scientific background.

0:35:21 SC: Good, very good. And I want to back up a little bit, because maybe we can be a little bit more specific, as an example of good action here. You said that Britain wants to be zero emissions by 2050?

0:35:37 MR: Yes.

0:35:37 SC: What would that involve? What kind of technology is that? Obviously some renewable energies, some storage, but can we be more specific?

0:35:44 MR: Well, I mean, I think storage: cheaper, more efficient batteries. One has to store the energy from day to night, and maybe even seasonally, so we have to do that. I personally think it would include nuclear.

0:35:58 SC: Okay.

0:36:00 MR: And of course this is controversial, but I think the problem with nuclear is there's been no real R&D in the last 20 years. The designs being used really date back to the 1960s. And I think therefore that trying to develop clean energy should include fourth-generation nuclear and things like small modular reactors, which could be put on the back of a truck and would be a hundred megawatts each, or something like that. Something like that would be safer and, if they could be mass-produced, hopefully more economical. So I think that nuclear is probably going to be part of the answer; without it, it's going to be harder.

0:36:42 SC: So the related issue there is not emissions, but there's still nuclear waste. Do you think that those problems are solvable?

0:36:49 MR: I think they are, yes. I mean, it is a problem, but I think it can be coped with. And incidentally, I think people worry too much about low radiation levels, that's...

0:37:01 SC: Another place where scientists maybe can have an impact somehow.

0:37:06 MR: Yes. Yes.

0:37:06 SC: Explaining that this is not so bad.

0:37:08 MR: Yes. Well, there is a dread factor in radiation, isn't there?

0:37:11 SC: Yes, that's right.

0:37:12 MR: And, well, to digress a bit, I think it's very important to have clear guidelines about the dangers, because suppose a dirty bomb were let off in a street in New York or something like that. Then people might look at some table of risks and say we've got to evacuate this street for the next 30 years, which may be sort of over-the-top, but after an event like that is not the time to have the debate; you have to have the debate beforehand. And an example of this, which we have studied at our centre, is the aftermath of the Fukushima disaster in Japan, which was serious, but the evacuation was over-the-top and probably caused far more distress than the radiation itself would have done.

0:38:01 SC: I didn't know that, actually, yeah. They over evacuated?

0:38:05 MR: They over-evacuated, yes. And I think what ought to happen is there should be guidelines, made available to every city, about what the extra risk is for a particular level of radiation. Because, if I were an old villager in Japan, I'd be prepared to accept a significantly higher risk of cancer in order to live out my days in my home rather than being evacuated, and people ought to be given that choice. And of course, old people might make a different choice from people with young children, but there ought to be some rational way in which people can make that choice; otherwise there will be an overreaction. And this is something which is particularly the case with radiation.

0:38:47 SC: This is just a human foible though, right? We're very bad at talking rationally about what happens to us in extreme situations where when our lives are at risk. Like once you say to an old villager, "If there is radiation here, don't worry about it, you can stay here 'cause you're old and you're gonna die anyway." Like this is not an easy conversation to have.

0:39:06 MR: Yeah. Yeah, yeah. Well, I think you'd have to actually ask them in advance. Ask everyone, you know, what risk would you be prepared to tolerate if the alternative is evacuating from your home?

0:39:19 SC: Yeah, yeah. But yeah, I don't know what the answer is. I mean there certainly is a philosophy question here, a whole bunch of them. Are philosophers heavily involved in your center, when you're making decisions about these big big questions?

0:39:32 MR: Yes. Well, in fact one of our senior people is a philosopher and is particularly concerned about the issue of future generations, because there is a question of what discount rate you apply and how much we should think about future generations rather than those who are now alive. This is a real question. And in fact, we have set up a parliamentary committee to try and raise the amount of attention given in all legislation to the effects in the long term, because many of the things we're doing are having some effect, obviously. This is something we ought to try and emphasize, and not discount the future as much as a commercial transaction does.
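
A brief sketch of what the discount rate does to this argument; the harm, horizon, and rates below are invented for illustration:

```python
# How a commercial discount rate shrinks far-future harms: the present value
# of a harm worth `harm` occurring `years` from now. Numbers are invented.
harm = 1e12      # say, a trillion-dollar harm
years = 200

for rate in (0.05, 0.01, 0.0):   # commercial, low "social", and zero rates
    present_value = harm / (1 + rate) ** years
    print(f"rate {rate:.0%}: present value {present_value:,.0f}")

# At 5% the trillion discounts to roughly 58 million, effectively negligible;
# at 0% it keeps full weight. The chosen rate largely decides how much
# future generations count in the calculation.
```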

0:40:15 SC: Have you found politicians more or less responsive to this, or...

0:40:19 MR: Well, I think they are responsive, as long as the sacrifice for the present generation isn't too much. But it is important, obviously, to think longer term, and to realize that, whereas if you make a commercial decision on an office building or something, you don't do it if you won't get your money back within 30 or 40 years, in making a decision that may affect the longer-term future, people do care about the life experiences of babies born today, who'll still be alive in the early part of the 22nd century.

0:40:50 SC: I get that already.

0:40:51 MR: And so we ought to think longer term. It's hard to predict, of course, because things are changing so fast. And one of the points I make in my book is that it's at first sight paradoxical that in medieval times, when people had very limited horizons, and when they thought the world might end in a thousand years, they still built cathedrals that wouldn't be finished in their lifetimes, whereas now we would not do something like that, you see. But the reason it's not paradoxical is that in medieval times, even though they thought the whole world might end, they thought the lives of their children and their grandchildren would be more or less like theirs. So the grandchildren would appreciate the completed cathedral. Whereas now, I think we don't have very much confidence in predicting what everyday life will be like 50 years from now, given the changes that there have been in the last 50 years. And so for that reason I think it's rational not to plan quite so far ahead in some contexts. But on the other hand, if there's a risk of doing something which does irreversible long-term damage, we ought surely to care about future generations.

0:42:09 SC: Does the example of the environment and climate change... It has some depressing aspects in how resistant certain quarters have been to the obvious scientific evidence. How much do you worry about misinformation or just vested interests resisting the scientific findings?

0:42:30 MR: Well, very much so of course, and this is apparent in that context of climate change where the science is, of course, very difficult, and is very uncertain. We've got to accept it's uncertain. But I think we do have to worry about the reluctance to accept scientific evidence, not in that context alone, but in the dangers of vaccines and things of that kind. That's another example where, of course, there's a lot of misinformation, which is damaging.

0:43:00 SC: Being from Los Angeles, I'm well aware of this phenomenon, the epicenter of that particular phenomenon. But climate change is also an interesting case, because it's so different in character from something like a nuclear war, or even a terrorist nuclear bomb, right? Slowly creeping up on all of us, versus a tiny chance every year that it would happen. Is it a different strategy, a different mental space you need to be in, to attack these different problems?

0:43:24 MR: I think it is, and I think, of course, in both cases, hard, because in the case of the sudden catastrophe, then the point is we're complacent, because it's never happened and you think it never will. It's like in the stock market where it keeps on rising.

0:43:39 SC: It will rise forever.

0:43:40 MR: And you think it'll go on, but then there's a sudden fall. There's no symmetry between the speed of rises and the speed of falls. And similarly, you think that things are going to be okay, and they're not. And one of your predecessors as a Darwin Lecturer, Mr. Taleb, made some rather good points about Pareto distributions, where things are below average 98% of the time.

[laughter]

0:44:03 MR: And it's the long tail that's important.

0:44:05 SC: Right, yeah. And so let's fill in a little bit. I wanna get at all these different kinds of extreme risks, which I think is a good way of putting it, and I think most people know about climate change. In fact, we've talked about it on this podcast. There's the nuclear threat, which maybe was larger in the past, the worry of a large-scale nuclear war. These days, are you much more worried about one bomb being taken into a port and blown up?

0:44:33 MR: Yes, that's right. I think we can say that during the Cold War, there was a risk of a real catastrophe, because there were about 50,000 bombs on both sides, and if they had gone off, then that would have devastated much of the northern hemisphere, certainly Europe and North America.

0:44:52 SC: And there were at least a couple of moments when there was a real chance of it.

0:44:55 MR: Well, indeed. And I think when we see what people said, McNamara in his later years, and Kennedy saying that the risk was between one in three and evens, and all that, and of course the other false alarms that we've learned about, I think we realize just how great the threat was. And for those of us in Europe, I think the realistic estimate of the risk during the Cold War was probably one in three or something like that. It's hard to quantify, obviously, but I think, looking back, there was a substantial chance. And I personally think that if people had realized that, they would have been a bit more questioning of the conventional policy. For my part, I wouldn't have been prepared to risk a one in three, or even one in six, chance of a nuclear exchange of that kind, even if the alternative was a Soviet takeover of the whole of Europe. And I suspect many people would have taken that view, but they didn't really feel that there was this real risk.

0:46:03 SC: Right. And saying those words out loud also gets you in trouble politically, maybe a little bit.

0:46:07 MR: Well, it might have been for some people at that time, but realistically I think that would be the trade-off: it would've been far better to let the Soviets take over than to have the destruction of the whole fabric of Europe. So that was the situation in the Cold War. But as you say, that particular scenario is at least in abeyance, because the number of weapons on both sides has been cut by a factor of 10. But on the other hand, there are two things to worry about, well, actually three things. One is that there are more nuclear powers, so now 10 nuclear powers. And the risk of some nuclear weapons going off in a regional conflict in the Middle East, or India, or Pakistan, is probably higher than ever.

0:46:56 SC: Yeah.

0:46:57 MR: And that would be a regional disaster, unless it produced such big forest fires that you get a nuclear winter, but it would probably just be a regional effect. So that's one concern. But the other point is that the global threat may be just in abeyance, because it could be that a new stand-off in the second half of the century, between new superpowers, is handled less well, or less luckily, than the Cold War was. So that's a concern. And the third concern is that, again, something we've been having meetings about here, the risk of cyber attacks on the nuclear infrastructure is a new threat, because obviously it's very complicated.

0:47:42 SC: So what exactly is the threat there? That a cyber hacker could do what?

0:47:46 MR: Could trigger false alarms or even trigger bombs going off.

0:47:51 SC: Right, okay.

0:47:53 MR: And this is... One hopes that it's being addressed.

0:47:55 SC: Yes, one hopes.

0:47:57 MR: And it is being addressed. But on the other hand, in all these cyber issues, there's an arms race between the cyber attackers, who are indeed aided by AI, and of course those who are trying to secure things. So this is a new concern, a new class of risks, quite apart from old-fashioned types of false alarms.

0:48:16 SC: And the lone actor, who is not even a state, is a new worry for nuclear weapons, right?

0:48:22 MR: Yes. I don't know to what extent a lone actor could realistically do that, but I think a small state could.

0:48:29 SC: And presumably it becomes not less likely over time, right? As technology leaks out, as the information gets out, as more and more plutonium and uranium are flying around. Is it possible to quantify what kind of risk we face?

0:48:45 MR: Well, I wouldn't venture to quantify, but obviously there are far greater risks. This is just another instance of how a small group of people, amplified in their impact by modern technology, can have a consequence that cascades globally. This is what I worry about. As I put it in my book, "The global village will have its village idiots," but they will have a global range now, which they didn't in the past.

0:49:10 SC: And is the biological worry bigger in your mind than the nuclear worry? I guess there's two categories, right? There're sort of poisons like anthrax that you can spread and then there're these contagious agents that you can imagine and they both sound terrible.

0:49:23 MR: Yes, yes. I think the consequences of a natural pandemic could be worse, of course. SARS wasn't a global effect, because it only spread to places that could cope with it, Singapore and Toronto, mainly. But had something like that spread to one of the mega-cities of the developing world, like Mumbai or Casablanca or somewhere, then it could have been very, very serious. And that's true of any future one. And of course, we have to plan for things of that kind. How would you cope? Would you turn off the mobile phones, because they spread panic and rumor, or would you leave them on, so you could spread information? That's a sort of sociological issue which needs to be addressed in planning for these disasters.

0:50:08 SC: Which should you do? Now you have me curious.

0:50:10 MR: Well again, I don't know, but that's an example of a context where you need social scientists to explore what is the optimum thing.

0:50:15 SC: To run the simulation.

0:50:17 MR: Yes. So I think we do need to worry about natural pandemics, and of course engineered pandemics. And also, we certainly have to worry about the ethical consequences of new technology. Already this is an issue with the Chinese experiments on human embryos and all that.

0:50:33 SC: That guy got arrested, right? The guy who claimed to have genetically engineered a baby?

0:50:37 MR: Yes. In fact, all the Chinese scientists were equally opposed to what he had done. The balance of benefits and risks was not such as to justify it, whereas one can perhaps justify gene-editing if you remove the gene for Huntington's disease or something like that. But what he was doing was deemed by almost everyone to be unethical. But we do have to worry about the widespread understanding and availability of these techniques. And of course we have to worry about the social effects of these, if they can be used for human enhancement, for instance. I think most people think that if you can remove the potentiality of some severe disease, then that's okay, but if you can use genetic modification for enhancement, then that's different. But fortunately, that's a long way in the future, because most qualities that we might want to have in human beings, your looks and intelligence, etcetera, are complicated combinations of thousands of genes.

0:51:39 SC: Very, very complicated.

0:51:41 MR: So, before you can do that, you've got to first use AI on some very large sample of genomes to decide which combination is optimal, and then be able to synthesize that genome and be confident it's not gonna have unintended downsides. So that's fortunately a long way in the future. But if that happened, then that would be a new fundamental kind of inequality that we'd have to worry about. And it would perhaps be bad news, because, if we think of the effects of medical advances in the last century, they've had a beneficial effect on promoting equality, because they cut down infectious disease in Africa, for instance, as well as having huge effects on infant mortality. I think we can say, looking back, that medical advances have been beneficial not just for those who have them, but for promoting equality worldwide, whereas these future ones may have the reverse effect.

0:52:37 SC: In a sense, because we're now inventing medical procedures that are just financially out of the reach of so many people. Yeah. But it is a good point to segue into something because there's the topic of extreme risks, and then there's also the topic of extreme benefits, and some of these technologies are ambiguous. They're transformative, human gene-editing, AI, more things that we're stumbling across right now in the 21st century. And that's certainly an example where some people are gonna wanna just rush ahead without thinking about the consequences.

0:53:13 MR: Well again, we can't stop it, and we can't control it, just like we can't enforce the drug laws or the tax laws. And perhaps, in a way, if some crazy people want to do this, perhaps we should not be too upset, because obviously every technology in medicine starts off as risky, I mean heart transplants and things like that, and then becomes more routine. And so perhaps we should not be too upset if a few people try to modify themselves in ways that turn out not to be as beneficial as they hoped. And the same for techniques like enhancing your brain by some cyborg implant. I mean, it would be great if we could improve our memories and all that, and so it's perhaps a good thing that some people are talking with a straight face about the potential of doing this.

0:54:06 SC: I mean, as you said, it's hard, but let's just imagine, because that's what we're doing here, the future. Let's imagine someone develops a diagnostic test that lets you screen thousands of embryos and pick out the one that will be the most intelligent. And lots of prospective parents take advantage of this technology. Would that be bad?

0:54:26 MR: Not necessarily. Of course, it's interesting to know whether they would want more intelligence, because there was the famous case of Mr. Shockley, do you remember? Shockley was the inventor of the transistor. He set up a sperm bank for Nobel Prize winners.

0:54:43 SC: Oh right!

0:54:44 MR: I'm glad to say there was very little demand.

0:55:13 SC: I guess that's true, that's true. I just wonder, again on a philosophical level, there's an ickiness factor, but I think maybe sort of engineering babies sounds weird and Dr. Strangelove-y, but maybe there's just an inevitability about it also. Once people get over that, it's gonna be taken for granted. Well, of course, we should make babies as good as we can. And I honestly, personally, don't have a strong feeling one way or the other.

0:55:13 MR: Yeah, yeah. No, no, maybe we should. But as you said there is a yucky or icky factor about some genetic modification, even in animals. And I think the reason why GM crops were opposed in Europe was because people associated Genetic Modification with things like making a rabbit glow in the dark by putting jellyfish genes in it or something of that kind, which did promote the yuck factor. It's rather like circus animals being given silly clothes to wear. It's not what you should do to animals and people felt the same about that, and that really perhaps went against the support for the use of GM techniques in very good cases.

0:56:00 SC: And certainly we can't be irresponsible, not to mention, everyone's favorite new extreme risk which is super intelligent artificial intelligence, right? Do you have an opinion on whether or not that's a real risk, or a fake one?

0:56:13 MR: Well, whether it's a risk or an opportunity we don't know, but I think obviously the question is whether there's general intelligence. We know already, we've had it for 50 years, that machines can do arithmetic better than any human being, and of course we've now got machines that can play chess better than any human being, and do many other things. But of course there are still many things they can't do as well as a human. And the question is, will they be able to acquire common sense, as it were? They're acquiring greater capabilities, and AI is very, very valuable for coping with large data sets and optimizing complex systems, like the electricity grid of a country, or traffic flow. And incidentally, the Chinese will be able to have a planned economy of a kind that Marx could only have dreamt of, because they have records of all transactions, all stocks in the shops, etcetera, so that they could do that.

0:57:14 MR: So AI does have tremendous power to cope with complex systems. But as regards the sort of robots which we can treat as intelligent beings, as in the movies, that's quite a long way away, because robots are still rather bad at interacting with the real environment. They can't even move pieces on a chessboard as well as a kid can; they can't jump from tree to tree as well as a squirrel can. So there's a long way to go before robots have that sort of agility and interaction, and before they have any feeling for the external world, because someone told me that the Watson computer which won at Jeopardy was asked which is bigger, a shoebox or Mount Everest, and couldn't answer.

0:58:02 MR: And that's because obviously it understands words and things, but it has no concept of the external world, etcetera. And to give a machine that concept, and indeed to get it to understand human behavior, is a big challenge. It's a big challenge because, from the computer's perspective, everything we do is very, very slow. Computers learn by looking at millions of examples, pages to translate or pictures of cats and all that, and learning to recognize them. But watching human beings is like us watching trees grow: it's very slow, and they can't accumulate the amount of data needed to really understand us. So that's a big impediment. That's a long way of saying that I think it'll be a long time before we have general intelligence in the sense of a machine which behaves like a human being. But obviously we will have machines that can cope with huge data sets, and clean up on the stock market, and all those things.

0:59:02 SC: Right. So, of course, getting artificial intelligence up to general human scales will be hard, right? But do you think it's possible in principle?

0:59:12 MR: There's no fundamental objection to it, I suppose, because you could imagine in principle a machine which has sensors that allow it to interact with the real world and maybe even communicate with human beings. So it's not impossible. But whether there'll be a motive for it, I don't know, because I think what we've got to remember is that there's a gap between what could be done and what there's an economic or social motive for doing, and that's why some things surge very, very fast and then level off. Just to digress for a bit, the most rapidly adopted technology ever, really, is smartphones, which have developed and spread globally within a decade or so. But probably they will saturate; the iPhone 11 is probably as complicated as you want a smartphone to be, and so probably 20 years from now we'll be using similar smartphones.

1:00:12 MR: And so a sigmoid curve is what happens to most individual technologies, and then something else takes over. And I think we've got to be mindful of the point made in the famous book by Robert Gordon about the important technological advances. He makes the point that what happened between 1870 and 1950, electricity, the railways, television, cars and all that, was more important than anything that has happened since. And there's something in that. Let me give you another example: aviation. 1919, Alcock and Brown, the first transatlantic flight. Fifty years after that, the first flight of the jumbo jet, 1969. And now, 50 years after that, we still have the jumbo jet, and the Concorde came and went. So that's an example of how a technology develops when there's a motive, but then levels off when there's no economically feasible way of developing it further. And we've got to realize that that may happen to some of these information technologies which are developing very fast. So we can't assume that, because they're developing so fast now, there will be equally rapid changes in the next 10 or 20 years. But there'll be some, obviously.

1:01:29 SC: Yeah. In the case of the smartphone, I guess the obvious next big phase transition would be a direct interface between the brain and the computer.

1:01:37 MR: Right, that would be a very big jump, of course, right.

1:01:39 SC: But we're not quite there yet. People are trying, but that's not going to be in the next five years.

1:01:43 MR: Yes, yes.

1:01:44 SC: Okay, but still, even if artificial intelligence is not precisely human, even if it couldn't pass the Turing test, there is a worry in certain circles that it will nevertheless be powerful and it won't have the same values that we do. You can imagine something that is hyper-intelligent along some axes and yet isn't all that interested in human flourishing. Is that something you worry about?

1:02:13 MR: Well, you've got to worry about two things. First, of course, if we have the internet of things, then it's hard to avoid a scenario where an AI can interact with the real external world; it can't be kept in its box. But then the question is, what motives would it have? We just don't know. I think it's not that it has motives, but it may do things which are contrary to human interests. Obviously that's a possibility; that's true of any machine.

1:02:42 SC: And is there any way to... I mean, we can sit and fret about it, but is there any strategy for planning for that? Many big names have started warning about this, but I'm not quite sure what the actionable thing to do is.

1:02:54 MR: Well, I don't see it as in principle different from worrying about the misuse of any other technology, really. So I don't worry so much about that.

1:03:03 SC: You don't worry so much about that. Okay. Very good. What is your more positive spin on what AI can do for us or what these kinds of technologies will help us achieve?

1:03:11 MR: Well, I think it's clear. It can optimally cope with networks, electricity grids and things of that kind, and obviously it can help scientists with discoveries. To take one example, if you want to have a room-temperature superconductor, then as you know, the best bets are these rather complicated compounds and things.

1:03:34 SC: Materials, yeah.

1:03:34 MR: And rather than do lots of experiments, maybe AI can explore the parameter space and come up with that. And ditto with drug development. So I think it can help. And perhaps, moving closer to our own field of science, I think it may help us to understand some fundamental problems in cosmology. Supposing that some version of string theory is correct and that theory applies to the early universe: it could very well be that it's just too difficult for any human being to work through all these alternative geometries in 10 dimensions, etcetera, which you know far more about than me. It may just be too difficult. But on the other hand, just as a computer learned to play world-class chess in three hours given just the rules, it could very well be that a machine could do the relevant manipulations and calculations in order to follow through the consequences of a specific string theory.

1:04:40 MR: And the varieties of string theories. And of course, if it turns out that at the end it spewed out the right mass of the proton and things like that, then we'd know it's on the right lines. And this may be a kind of scientific discovery which never gives any human an insight, because it's just too complicated, but nonetheless we would know that it was developing a correct theory. So I think we have to bear in mind that there may be some theories which are correct, but into which we can never have the kind of insight we hope to have into the theories of physics at the present time, simply because they're too complicated. But nonetheless we would have confidence in them, because the machine would do a calculation and come out with something you could compare with experiment: numbers of neutrinos, masses, etcetera.

1:05:29 MR: And if we had such a theory, we would then believe its predictions about the early Big Bang, because those theories are supposed to apply to the conditions early on, and so we would have reasons to be able to decide whether there were many Big Bangs or just one, and what they were like and all that. So I think this is just an example of how, in science, the capability of a machine to do very, very complicated manipulations of ten-dimensional geometry, which I think is a less remote goal than understanding human behavior, is the kind of thing a computer could be good at. And that could be very, very important for science: the extreme kind of physics that we are interested in ourselves, but also developments in terms of high-Tc superconductors and drugs and all that. So I think the power of AI to do not just routine computations, but to explore parameter space and learn, is going to be very powerful and beneficial.
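
To make the parameter-space idea concrete, here is a minimal sketch in Python of the kind of search Rees describes. Everything in it is an invented stand-in: the objective function, the parameter ranges, and the very idea that a cheap predictor of superconducting transition temperature exists. Real materials-discovery systems use learned surrogate models over vastly larger spaces, but the shape of the loop is the same: evaluate many candidates in silico instead of running thousands of lab experiments.

```python
import random

# Hypothetical stand-in for an expensive experiment or simulation that
# scores a candidate material, e.g. a predicted superconducting Tc in K.
# (Invented toy function: it peaks at doping 0.16 and pressure 30 GPa.)
def predicted_tc(doping: float, pressure: float) -> float:
    return 100 - 2000 * (doping - 0.16) ** 2 - 0.05 * (pressure - 30) ** 2

# Random search over the parameter space: try many candidates cheaply
# and keep the best one found.
best_params, best_score = None, float("-inf")
for _ in range(10_000):
    d = random.uniform(0.0, 0.4)    # doping fraction (assumed range)
    p = random.uniform(0.0, 100.0)  # pressure in GPa (assumed range)
    score = predicted_tc(d, p)
    if score > best_score:
        best_params, best_score = (d, p), score

print(f"best candidate: doping={best_params[0]:.3f}, "
      f"pressure={best_params[1]:.1f} GPa, predicted Tc={best_score:.1f} K")
```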

1:06:30 SC: It's been a while now since the four color theorem was proven. I remember vividly.

1:06:34 MR: Right. But that's an example of that.

1:06:35 SC: That's an example of the computer doing something, and it's essentially...

1:06:39 MR: But there again, we knew what it was doing. I think in the case of string theory, we may not understand what it's doing. It's just like some of the very clever moves that AlphaGo Zero made playing Go: the experts didn't understand how it chose those moves, and I think it may be more like that. In the four color theorem, everyone knew exactly what the program was doing, etcetera, whereas this might be qualitatively different, in that we don't really understand what's going on. And of course, this is a problem with AI already, because if decisions about whether you should be let out of prison, whether you deserve credit, or whether you need an operation are made by a machine, then you're a bit worried about that, even if you have evidence that the machine on the whole makes better, more reliable judgments than a human. You feel entitled to an explanation you can understand. But of course, that's not always the case when computers are used now, and it may never be the case if they're used to solve these very difficult scientific problems which involve a huge amount of calculation.

1:07:45 SC: Yeah. There's a whole regime in which we can imagine computers solving problems and then not being able to tell us why they got the solution that they did. Scientifically, that would seem not really satisfying in some way; we would think there's further work to be done, if that's all we had.

1:08:01 MR: Well, that's right. Because obviously the satisfaction in science is having some idea and then getting the "aha" insight when you realize it's just got to be that way. And that's the most exciting thing that happens to you if you're a scientist, but we would never have that. Nonetheless, if it spews out the correct values for the fundamental constants and things like that, then you'd have to accept that it's really got some insights.

1:08:31 SC: What are your feelings about uploading a human brain into a computer?

1:08:36 MR: Well, I mean, is it ever going to be feasible? I don't know. But then, of course the question is would it really be you?

1:08:44 SC: Yes. That's fine.

1:08:45 MR: Because I think our personality depends on our bodies, our interactions, our sense organs. So would it be you? This is again what philosophers discuss: if you're told this has been done, are you happy to say that the original you could be destroyed? And what happens if several clones are made of you? Which one is really you? So I think there are all kinds of fascinating philosophical conundrums which philosophers have talked about for a long time. But maybe one day these will be questions of practical ethics, if we can do things like that. I think we do need to worry about whether that's possible. And of course, if we think a few centuries ahead, then even without that, human enhancement may have changed human beings. We're no longer evolving on the timescale of Darwinian selection.

1:09:32 MR: We're evolving much faster, or we could, through these techniques. And one other point I make in my book is that when we read the literature left by the Greeks and Romans, we can appreciate it because human nature was the same then, so we have some affinity with the emotions of those ancient artists and writers. Whereas a few hundred years from now, any intelligences, if they're still around, may have no more than an algorithmic understanding of us, because they may be so different that they don't have anything we would call human emotions or human nature. So that's going to be a real game-changer, but that could happen over a few centuries.

1:10:13 SC: Because of human-machine interfaces or?

1:10:17 MR: Human-machine interfaces or drastic genetic modification and human enhancement which you were talking about.

1:10:24 SC: Do you think that also longevity is a frontier that might change things dramatically if that happens?

1:10:31 MR: Yes indeed, it could, and of course people are working on this, as you know. Some people think that aging is a disease that can be cured; some people think it's just incremental improvements. But if aging can be slowed down so that people live to be 200, this of course would be a huge sociological change. And again, if this was available to only some subset of people, this would be a huge and fundamental inequality. And the question is, what would happen with multi-generational families, or would you delay the menopause, and all that. It'd be crucially different, and so we'd have to be very worried about what would happen if changes of that kind were possible. But again, I don't think we can exclude them. And of course we know there are people like Mr. Ray Kurzweil who think that they'll be immortal, either that way or by being able to download themselves, and there are people who want to have their bodies frozen in order that they can be resurrected when this happens, and then...

1:11:33 SC: Given all the existential risk that we have talked about that doesn't seem like the best strategy to me.

1:11:38 MR: No. But in fact, I'm amused that three people I know, and I've got to say they're from Oxford, not from my university, have paid good money...

1:11:47 SC: They signed up.

1:11:47 MR: To be frozen by this company in Arizona.

1:11:50 SC: Okay.

1:11:51 MR: I think it's $80,000, or a cut price if it's just your head being frozen. And they hope this company will keep going for a few centuries so that they can be revived, and of course they've got to have their blood replaced by liquid nitrogen, etcetera. And they go around carrying some medallion so that people know what's to be done as soon as they drop dead.

1:12:11 SC: They're to be saved.

1:12:12 MR: Yeah, so I find it amusing, but I also think actually it's selfish, because supposing this worked: then they'd be revived into an inconceivably different world. They'd be refugees from the past, and they'd be a burden. We feel that we've got to look after refugees, or people from Amazonian tribes whose habitat has been destroyed. But if these people voluntarily impose themselves on the future as refugees from the past, then maybe that future will just put them in a zoo or something like that. It's not clear that it's an ethical thing to do.

1:12:47 SC: I think I'm on your side, but for purposes of playing devil's advocate here, there's a calculation similar to the one we do with the existential risks. If you thought that there was a 100th of 1% chance that 300 years from now we cure aging, and you could literally live forever, then clearly the benefit to you of being frozen, even if it doesn't work very well, is almost infinite, right? There's definitely a calculation that ends up with the conclusion that you should do everything you can to try to preserve yourself.

1:13:21 MR: Yeah, yes, yes. And the chance is zero if you don't do this.
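
Carroll's wager can be put in rough numbers. A back-of-the-envelope sketch, using the one-hundredth-of-one-percent chance and the $80,000 price mentioned in the conversation, plus an assumed payoff in extra years of life; the payoff figure is invented purely for illustration:

```python
# Pascal's-wager-style arithmetic for cryonics, with a made-up payoff.
p_revival = 0.0001      # "a 100th of 1%" chance the gamble pays off
cost_usd = 80_000       # the head-only price quoted in the conversation
payoff_years = 10_000   # assumed extra years of life if aging is cured

expected_years = p_revival * payoff_years
print(f"expected extra life: {expected_years} years for ${cost_usd:,}")
# As the assumed payoff grows without bound (literal immortality), the
# expected value swamps any finite cost, which is exactly the shape of
# Pascal's original wager, and why the argument invites suspicion.
```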

1:13:24 SC: Right. That's right. So what about then, as another survival strategy, going into space? I know that Elon Musk has said that one of his motivations for SpaceX is to back up the biosphere by making sure we have some of us on other planets if something goes dramatically wrong here on Earth. Is that a good survival strategy for the human race?

1:13:46 MR: Well, I'd be skeptical about those arguments. I mean, of course, I'm especially interested in space, being an astronomer, and I think AI and miniaturization are going to be crucially important for the science of space exploration. There have been wonderful probes sent into space: Cassini orbiting around Saturn and its moons and all that, and New Horizons, which took pictures of Pluto. But all those were 1990s technology; they took 10 years to build and 10 years to get there. And if we think of how things like smartphones have changed since then, we realize how much better we can do now. So I really hope that there'll be swarms of miniaturized probes going throughout the solar system to explore it in detail. I think that's realistic. And also wandering around on the surface of Mars, etcetera.

1:14:48 MR: But as regards people, with every advance in miniaturization and robotics, the practical need for the people gets less. And that's one reason why, of course, manned space flight has not been prioritized so much. Of course, there is a revival of interest, and I personally think that if I were an American, I wouldn't want any taxpayers' money to go on NASA's manned program, because there's no practical need for the people. But on the other hand, I'd be glad that Mr. Musk and the others are developing very effective rockets, bringing the Silicon Valley culture to a branch of industry that was dominated by big conglomerates like Lockheed, and doing great stuff there. And I hope that they will be sending people into space, but those will be adventurers prepared to accept a high risk. One reason why the NASA manned program is very expensive is that it was very risk-averse. The shuttle was launched, I think, 135 times, and there were two failures: less than a 2% failure rate, but each of those failures was a big national trauma, because the shuttle was presented as safe, and they sent up a woman schoolteacher and all that. So I personally think that space should be left to adventurers, the kind of people who hang glide off Yosemite and things like that, and are prepared to accept a very high risk.

1:16:20 SC: Right, the risk is just a cost of doing business, and that's built in.

1:16:23 MR: Yeah, and there are obviously people, adventurers, who are prepared to take that risk, even a one-way ticket to Mars. Elon Musk has said he hopes he'll die on Mars, but not on impact, and good luck to him. We should admire these people and cheer them on. But I think the idea of mass emigration is a rather dangerous delusion, because we've got to realize that dealing with climate change, for instance, is a challenge, but it's a doddle compared to terraforming Mars. So I think we've got to accept that there's no Planet B for ordinary risk-averse people. We should encourage people to go to Mars, but it's less comfortable than living at the South Pole, and there are not many people who want to do that. And actually, looking further ahead, I think we should cheer on those who do try to establish a colony on Mars, even though it'll be against the odds and uncomfortable, because those people will have every incentive to use all the techniques of genetic modification and cyborg technology to adapt themselves. We're pretty well adapted to living on the Earth, but they'd be in an environment to which they're very badly adapted, so they'd have every incentive to use these techniques to adapt to a different gravity, a different atmosphere, etcetera.

1:17:50 MR: And maybe, if possible, to download themselves into some electronic form. And if they do that, then they may prefer zero G, and they may not need to be on a planet at all. So I think that these post-human developments, which will happen on a technological timescale far faster than the Darwinian timescale, will happen fastest among these crazy adventurers who try to live on Mars, and so the post-human era will start with them.

1:18:22 SC: It does seem almost inevitable that the first trip to Mars will be a one-way trip. It's so hard to come back. And that is a good argument that it won't be done by a government, right?

1:18:33 MR: That's right, that's right.

1:18:33 SC: NASA would never send astronauts to Mars and not bring them back.

1:18:36 MR: There could be lots of volunteers to go one way, yeah.

1:18:39 SC: Right. But a government would have a hard time with that.

1:18:41 MR: And of course, I think as human beings, we should admire and cheer on these people, because then they may indeed hold the post-human future in their hands.

1:18:52 SC: And I guess I hadn't thought of the idea that people who dwell in space, whether it's on Mars or in between, would be natural candidates for genetic modification, but it makes perfect sense. So, in some sense, a different species out there.

1:19:07 MR: Indeed, they'll become a different species, yes. And whether they would be entirely flesh and blood, or whether they will by then be cyborgs, or even downloads into something electronic, we don't know. But if they become purely electronic, then of course they won't want to stay on a planet, and if they're near-immortal, then interstellar voyages are no deterrent to them, and so they will spread. And this has a relevance, in my view, to SETI projects. The search for extraterrestrial biospheres is a mainstream part of science now that we know there are exoplanets out there, many like the Earth, and within 10 or 20 years we'll know if any of them have biospheres. But of course, what people really want to know is, are there any intelligent aliens out there? And that's why there are these SETI programs to look for extraterrestrial artifacts or transmissions which are manifestly artificial, and I'm an enthusiast for modest expenditure on all these possible searches, because it's fascinating to all of us.

1:20:20 MR: But if you ask what I predict: if anything is detected, it won't be any sort of flesh-and-blood civilization on a planet; it will be some sort of, maybe burping and malfunctioning, artifact, probably roaming in space. And the reason is this. Suppose there was another planet where things went rather as they did on Earth. What's happened here is four billion years of Darwinian evolution, and we're now in a few millennia of technological civilization, but within a few hundred years, as we've discussed, that may have been taken over by some sort of electronic entities, not flesh and blood. And they, in principle, have a billion-year future ahead of them; they may be immortal, and they can create copies or developments of themselves, etcetera. And so, if there was another planet where things happened like this, it's unlikely to be synchronized with us to within a few millennia, so that we would detect a civilization like ours.

1:21:29 SC: It's not in the same stage of technological development.

1:21:31 MR: No. If it's behind by a few tens of millions of years, then of course we see no evidence of intelligence; we see a biosphere. If it's ahead, then we would not detect anything like what's now on Earth, but we might detect some technological artifacts or their emissions. And so that's why, if we do detect something, it's far more likely to be something like that, some electronic artifact.
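
The synchronization point is easy to quantify. A rough sketch, where the length of the flesh-and-blood technological window is an assumed round number:

```python
# If a flesh-and-blood technological phase occupies only a brief window
# in a ~4-billion-year evolutionary history, two independently evolving
# planets are very unlikely to be in that window at the same time.
history_years = 4e9   # Darwinian evolution on an Earth-like planet
window_years = 5e3    # assumed span of the flesh-and-blood tech phase

p_synchronized = window_years / history_years
print(f"chance a second planet is in our phase now: ~{p_synchronized:.0e}")
# ~1e-06: hence the bet that anything detected would be long-lived
# electronic machinery rather than a civilization at our own stage.
```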

1:22:05 SC: Yeah, I personally think that the idea...

1:22:06 MR: And you have to modify the Drake Equation, because the Drake Equation talks about the lifetime of a civilization, thinking of something with lots of independent flesh-and-blood entities, whereas it may be one super-brain, and it may be something entirely electronic. And that could persist for billions of years, even if a civilization like ours can't persist for more than a thousand.
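
For reference, the standard Drake Equation that Rees is proposing to modify is

\[
N = R_{*} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L
\]

where \(N\) is the number of detectable civilizations in the galaxy, \(R_{*}\) the rate of star formation, \(f_{p}\) the fraction of stars with planets, \(n_{e}\) the number of habitable planets per planetary system, \(f_{l}\), \(f_{i}\), and \(f_{c}\) the fractions of those that develop life, intelligence, and detectable technology, and \(L\) the length of time a civilization remains detectable. Rees's point lands on \(L\): for a single long-lived electronic super-brain, it could be billions of years rather than millennia.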

1:22:28 SC: I've often thought that the idea of taking a big radio telescope and listening in on the sky for other advanced civilizations was a very, very, very long shot, because why in the world would an advanced civilization waste a bunch of energy beaming radio signals in random directions?

1:22:46 MR: Yes, yes, but I think you know it's sensible to do everything we can, as a byproduct of...

1:22:53 SC: It's relatively cheap.

1:22:54 MR: Yes, and so we may support these efforts to look for radio transmissions at narrow frequencies and to look for optical flashes and things like that. And we should also look for artifacts: we should look for evidence of some star that's orbited by something that's manifestly artificial; we should even look for artifacts in our solar system, something in the asteroid belt that's especially shiny, etcetera.

1:23:23 SC: On the moon, 2001, monolith...

1:23:24 MR: On the moon, yes indeed.

1:23:27 SC: I mean, one argument is that if intelligent life formed frequently, it would build self-replicating robots and should have filled the galaxy a long time ago, right? And this speaks to the Fermi Paradox.

1:23:39 MR: Yes, well, of course, the Fermi Paradox is one important relevant fact about this, but the reason I don't think it's a watertight argument is that if these future entities are electronic, it's not clear they will be expansionist. We have evolved by Darwinian selection, which favors intelligence but also favors aggression, and that's why people talk about expanding spheres of colonization, etcetera, and ask why no aliens have come here. But if the intelligences are electronic, then they may be entirely contemplative; they may not want to expand, and so they could be out there without manifesting their presence in any conspicuous way or coming here. So I don't think we can say that the galaxy doesn't contain anything like that. We can say that it can't contain many civilizations which have led to massive expansion, because some would have got here already. But if the scenario is that advanced intelligence is electronic, then that need not be associated with expansionism.

1:24:50 SC: The flip side of that, though, and I've wondered about this out loud: is it possible that we do, gradually or suddenly, upload our consciousness into electronics, and then, exactly because we don't have all these thermodynamic survival instincts anymore, we stop caring? Forget about expanding; even just existing is less interesting to us in that environment? The survival instinct is no longer there.

1:25:18 MR: I think we as human beings probably want to stay as human beings. And I think we'd like to feel that the Earth won't change very much. That's why I'm happy about all these things happening fast away from the Earth, while preserving the Earth as it is, occupied by creatures who are adapted to it, and who will hopefully make a better go of it than they're making now.

1:25:41 SC: So in addition to human beings, there would be artificially intelligent electronic beings that could spread throughout the galaxy.

1:25:49 MR: Yes, and we hope they'd leave us alone.

1:25:50 SC: Got it.

1:25:51 MR: But I think we would want to try to restrain the speed of these changes, though maybe we can't, because maybe some group will get ahead and we'll have a sort of very disturbed world where human beings can't survive. This is one of the dystopian scenarios, obviously. But I think we should hope that things don't change too fast.

1:26:20 SC: Your perspective seems to indicate that the Fermi Paradox might not be a very big paradox, and in fact that the galaxy might be teeming with different kinds of life. Is that fair?

1:26:31 MR: Yes, yes, yes.

1:26:32 SC: And so what does that say to people who are thinking about the origin of life, or the frequency of life? I know there's certainly an idea that, since we haven't seen lots of aliens yet, the very beginning of life must have been a really, really unlikely event, or the beginning of multicellular life was very hard.

1:26:50 MR: Yes. Well, even if you take the Fermi Paradox seriously, you could still be happy with the idea that simple life, a biosphere with lots of plants and things, could be widespread. Within 20 years we will know that, because we'll have spectroscopy of planets around nearby stars which will be sufficient to tell us if there's a biosphere there. But even if a biosphere of plants and simple organisms is common on earth-like planets throughout the galaxy, it could still be that there are other bottlenecks which stop it getting to intelligence. So I think we can still hope, even if we don't think intelligent life is widespread, that simple biospheres are widespread, because we don't know what the odds were against evolution getting as far as it has. If the dinosaurs hadn't been wiped out, would an intelligence like us have emerged from a different group? We just don't know.

1:27:56 SC: Well Europa or Titan might have biospheres, right?

1:28:00 MR: Well, Titan might. And of course the origin of life is a problem which everyone has known is important, but most serious scientists have put it in the sort of too-difficult box and haven't worked on it. Evidence of that is that the Miller-Urey experiment, done in the 1950s, where they put sparks through gases and got amino acids and all that, was still being talked about 40 years later; no one thought of doing much better experiments. Whereas now a lot of serious, top-ranking biologists are thinking about the origin of life and the stages by which it could happen, motivated by advances in microbiology, but also of course by realizing that there are other places in the galaxy where life could exist. And so I'm hopeful, from having talked to a number of these people, that we will have a plausible scenario for the origin of life within 10 or 20 years. And that will tell us two things. It'll tell us, first, was it a rare fluke? But it will also tell us, in answer to your question, does it depend completely on a special chemistry like DNA and RNA? Does it even depend on water as a solvent? Because on Titan, of course, you'd have to have methane and not water.

1:29:21 MR: I think if we understood the origin of life, we would know whether whatever happened at the temperature of the Earth, with water, could have happened on Titan or not. So we will know how rare the origin of life is and what the variety of chemistries is that could lead to it. We don't know that now, but I think in the next 20 years we'll know, and we also might have evidence for whether there is life. Of course, telescopes now aren't really powerful enough to detect the spectrum of a planet, because an earth-like planet is billions of times fainter than the star it is orbiting, and isolating its spectrum is very hard. I think the James Webb Telescope might do something, but after that the best bet is going to be the telescope the Europeans are building, which is called the Extremely Large Telescope.

1:30:18 SC: They've run out of good names.

1:30:19 MR: We are unimaginative in nomenclature, but this has a 39-meter mirror, not one sheet of glass but a mosaic of about 800 pieces, and it will collect enough light that, with the right spectrograph, it should be able to separate out the light reflected from the planet from the billions-of-times-brighter light from the star.
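
The "billions of times brighter" figure follows from a simple reflected-light estimate. A rough sketch with Earth-Sun values, where the albedo is an assumed round number:

```python
# Reflected-light contrast of an Earth analog is roughly
# albedo * (planet radius / orbital distance)^2.
R_planet_m = 6.371e6    # Earth's radius in meters
a_orbit_m = 1.496e11    # 1 astronomical unit in meters
albedo = 0.3            # assumed rough Earth-like albedo

contrast = albedo * (R_planet_m / a_orbit_m) ** 2
print(f"planet/star flux ratio: {contrast:.1e}")      # ~5.4e-10
print(f"star is ~{1 / contrast:.1e} times brighter")  # ~1.8e+09
```

So the star really is roughly two billion times brighter than the planet in reflected visible light, which is why a 39-meter light bucket plus a very good spectrograph is needed to pull the planet's spectrum out.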

1:30:45 SC: So we can really see the biosphere.

1:30:46 MR: And as the planet goes through different phases, that light would change. It would tell us something about the chlorophyll on the planet and things of that kind. So that's something which will be done within 10 or 20 years.

1:30:58 SC: Yeah. Just so everyone knows, the Extremely Large Telescope is a ground-based telescope, right? It's not a satellite.

1:31:04 MR: It's ground-based, yes. James Webb is in space, and obviously there's an advantage to being in space, but you can have a much bigger mirror on the ground. Incidentally, if you look further ahead, to come back to robots: even if we don't send many people into space, we can have robotic fabricators assembling big structures, either on the moon or in space, and I think one exciting scenario is fabricating one huge, very lightweight mirror in zero G in space, which might even be able to resolve the image of an exoplanet.

1:31:46 SC: Okay.

1:31:47 MR: Not just to collect the light and detect it, but to see the planet as an extended object, not as a point. I make the point in my book that the target for this should be 2068, which is the centenary of the famous picture taken by Bill Anders, Earthrise, the classic iconic picture of the Earth. And it would be nice if, 100 years after that, we had a picture we could post on our walls of an earth-like planet elsewhere. It would not be crazy to have by that time a successor of the James Webb Telescope and of the ELT, which could be a huge mirror up in space.

1:32:27 SC: How huge is huge?

1:32:28 MR: Sorry?

1:32:28 SC: How big do you need for the mirror up in space to do this?

1:32:32 MR: I think several hundred meters.

1:32:33 SC: Several hundred meters. Okay, that sounds feasible to me, but what do I know?

1:32:37 MR: But in zero G, it would not be impossible.

1:32:38 SC: That's right. Okay. Just to start winding things up, we can get back to the ethical concerns a little bit, 'cause you did mention terraforming Mars. If human beings wanna go up there and start living there, there are people who will say that we shouldn't do that, 'cause it's a whole part of nature. What do you think?

1:33:00 MR: Well, I think if there were evidence of life there, then most people would feel we ought to preserve that life, like a national park, as it were. If there's no life at all, then I think we could be somewhat more relaxed about it. Similarly with the moon. People talk about mining the moon; in fact, Harrison Schmitt has this idea of doing a huge amount of open-cast mining on the moon for helium-3. I think, economically, that doesn't make much sense, and probably never will. But the question is, would we be relaxed about it? And I personally would be fairly relaxed about getting materials from the moon in order to build structures on the moon, preserving a few historic sites, like where the Apollo astronauts landed, and things of that kind.

1:33:56 MR: But if there were a biosphere already, then I think most people would feel that one should try to preserve it, on Mars, or indeed under the ice of Enceladus or Europa, which of course are now thought to be perhaps the most likely places for life in our solar system. And there are probes already being planned to fly through the spray that comes up through cracks in the ice there, to do some analysis and see if there's complex chemistry, and, looking further ahead, to send some submarine down to see if there's something swimming around under there.

1:34:37 SC: So Mindscape listeners should know we will be talking about exactly that, yeah, that's a very exciting upcoming thing.

1:34:40 MR: But I think the reason that's important isn't just the exploration itself. Of course, if we could find evidence that life had originated twice independently within our solar system, that would tell us straight away that life of some kind was widespread, in billions of places in our galaxy.

1:35:05 SC: It would be one of the most important discoveries of all time.

1:35:07 MR: It would, it would indeed. But it's got to be convincingly independent. And that's why some people would say life on Mars would not be quite so convincing, because people have said maybe we're all Martians: maybe life started on Mars, and then meteorites brought it to the Earth.

1:35:24 SC: Or vice versa.

1:35:25 MR: Yes. Whereas if it was on Europa, I think you couldn't make that argument; it would be independent. And of course, if it had a quite different chemistry...

1:35:34 SC: The chemistry was very different, that's right, yeah.

1:35:35 MR: And that also would show independence. And so that's why I think searches for even rather boring primitive life anywhere in our solar system are very important, because it would have such huge implications for the prevalence of life throughout the entire galaxy.

1:35:51 SC: You've made bets before. What would be your bet about both the existence of life elsewhere and our likelihood of finding it?

1:36:00 MR: Well, I think I would bet substantial odds on finding evidence somewhere of life, whether it's on Europa or Enceladus, the moons of Jupiter and Saturn, or whether it's from observations of exoplanets like the Earth. And of course, we shouldn't just look at things like the Earth; we should, I think, look at other planets in other places too. So I think that's not a crazy expectation, whereas finding intelligent life, or artifacts which are evidence that there was some sort of intelligent life, is another matter. Obviously, I'm in favor of pursuing searches at a modest level, because it's so fascinating to everyone. If people know I'm an astronomer, the first question they ask is, "Are we alone?" etcetera. They all ask that question, and they want to talk about it. If you don't want to talk to your neighbor on a plane, you say you're a mathematician.

1:37:07 SC: Or a physicist. Physicist works.

1:37:08 MR: Yes. But even though I favor and support modest searches for intelligent signals or artifacts of some kind, I wouldn't bet very much on success. It's so important, though, that it's worth the search.

1:37:23 SC: Yeah, it's a good point. I'll close by mentioning, I don't know if I've ever told you this, but roughly 20 years ago you gave a public lecture in Chicago, and I was in the audience. I was already a cosmologist, a physicist, giving lectures myself. You were talking about cosmology and the state of cosmology, and in the middle of the talk you started talking about the possibility of life on other planets in other solar systems, and as a professional scientist, of course, I was appalled by this. I'm like, "That's not cosmology at all."

[laughter]

1:37:55 SC: But I realized, of course, like you just said, that is what people care about. And upon reflection, I gathered an important lesson from that, which is that it's okay to talk about what people care about, and not just what you as a professional are supposed to be talking about.

1:38:12 MR: Yeah, yes, but I think it's more than that. I think there are some subjects which are so far from any kind of empirical test that it's not worth discussing them; they should be left in the too-difficult box and left alone. And that may have been true in the past of many of these issues, but I think exobiology is now a serious subject, and no one would regard it as a crazy, flaky, cranky subject, as they might have done in the past.

1:38:44 SC: That's right, and the very practice of contemplating the future seems to be a very different thing now than it was 50 years ago. So it's interesting to see how our perspective has changed.

1:38:55 MR: It has. But when I'm asked whether there's any special perspective I bring to these issues of the Earth's future, being an astronomer and cosmologist, it is simply an awareness of the long-term future. Most people, unless they live in Kentucky or some Muslim areas, are happy with the idea that we're the outcome of four billion years of evolution. But nonetheless, very many think we are the top of the tree, the culmination of evolution.

1:39:24 SC: The culmination.

1:39:24 MR: And I think that's something which no astronomer could believe, being aware of the huge ranges of future time and extended space.

1:39:33 SC: It's a very good perspective to keep in mind. Martin Rees, thanks so much for being on the podcast.

1:39:37 MR: Thank you very much.

[music]

4 thoughts on “86 | Martin Rees on Threats to Humanity, Prospects for Posthumanity, and Life in the Universe”

  1. I’m not sure I’m comforted by the thought that there are 10% of the nuclear weapons that were ready for launch at the height of the Cold War; that number is surely still enough for a civilization-ending nuclear exchange.

  2. Excellent!
    I was so disappointed when my reading of your dialogue came to an end!
    Thank you, Sean Carroll, for giving us such interesting episodes, especially this one.
    Thank you, Mr Martin Rees. It's incredible!

  3. “I’m not sure I’m comforted by the thought that there are 10% of the nuclear weapons that were ready for launch at the height of the Cold War; that number is surely still enough for a civilization-ending nuclear exchange.”

    No, that would not end civilization, although it would be a very bad hair day on planet Earth. The general public greatly exaggerates the destructive force of nuclear weapons, though of course they do level cities. (A statistician who studies these things backed this up last year on an econ blog.) There would be a very small percentage of deaths due to radiation, since the blast effects are strongest where the radiation is also strongest. Nature magazine had an article summarizing studies of nuclear winter over the past few decades, and the trend has been toward a weak nuclear winter or none at all. Nuclear war, climate change, and asteroids (none big enough are around for at least the next 100,000 years) can’t end civilization. It would have to be something as yet unknown. The end due to an explosion in paperclips would be pretty wild…
