Enjoyed the episode? Want to listen later? Subscribe by searching 80,000 Hours wherever you get your podcasts, or click one of the buttons below:

Update April 2019: The key theory Dr Sandberg puts forward for why aliens may delay their activities has been strongly disputed in a new paper, which claims it is based on an incorrect understanding of the physics of computation.

It seems tremendously wasteful to have stars shining. When you think about the sheer amount of energy they’re releasing, it seems like a total waste. Except that only about 0.5 percent of the mass-energy gets converted into light and heat; the rest just stays locked up in heavy nuclei. If you can convert mass into energy directly, you might actually not care too much about stopping stars: if the process of turning off stars costs more than that 0.5% of the total mass-energy, then you will not be doing it.

Anders Sandberg

The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason.

Because of the thermodynamics of computation, the colder it gets, the more computations you can do per unit of energy. The universe is getting exponentially colder as it expands, and as the universe cools, one joule of energy becomes worth more and more. If they wait long enough, this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So, if a civilization wanted to maximize its ability to perform computations, its best option might be to lie in wait for trillions of years.
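The physics behind this claim is Landauer’s principle: irreversibly erasing one bit of information costs at least kT·ln 2 of energy, so the number of computations a joule can buy scales as 1/T. Here is a minimal back-of-the-envelope sketch; the far-future temperature of 10⁻³⁰ K is purely illustrative, chosen only to land in the ballpark of the enormous multiplier quoted above (and note the April 2019 update disputing this whole line of reasoning):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(temperature_k: float) -> float:
    """Maximum irreversible bit erasures per joule at the Landauer limit."""
    return 1.0 / (K_B * temperature_k * math.log(2))

now = bits_per_joule(3.0)      # roughly today's cosmic background temperature
later = bits_per_joule(1e-30)  # an illustrative far-future temperature
print(f"gain factor: {later / now:.1e}")  # about 3e30
```

Because the limit is proportional to 1/T, the gain factor is simply the ratio of the two temperatures; the entire argument rests on how cold the universe eventually gets.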

Why would a civilization want to maximise the number of computations they can do? Because conscious minds are probably generated by computation, so doing twice as many computations is like living twice as long, in subjective time. Waiting will allow them to generate vastly more science, art, pleasure, or almost anything else they are likely to care about.

But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us.

It’s plausible that this civilization would want to keep the universe’s matter concentrated, so that each part would be in reach of the other parts, even after the universe’s expansion. But that would mean changing the trajectory of galaxies during this dormant period. That we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species.

This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks, predicting the capabilities of future technologies and very long-range futures for humanity.

In this incredibly fun conversation we cover this and other possible explanations of the Fermi paradox, as well as questions like:

  • Should we want optimists or pessimists working on our most important problems?
  • How should we reason about low probability, high impact risks?
  • Would a galactic civilization want to stop the stars from burning?
  • What would be the best strategy for exploring and colonising the universe?
  • How can you stay coordinated when you’re spread across different galaxies?
  • What should humanity decide to do with its future?

If you enjoy this episode, make sure to check out part two where we talk to Anders about dictators living forever, the annual risk of nuclear war, solar flares, and more.

The 80,000 Hours podcast is produced by Keiran Harris.


The basic question that made us interested in the Fermi paradox in the first place is, does the silence of the sky foretell our doom? We really wonder if the evidence that the universe seems to be pretty devoid of intelligent life is a sign that our future is in danger, that there are some bad things ahead for us. One way of reasoning about this is the great filter idea from Robin Hanson. There has to be some unlikely step in going from inanimate matter to life, to intelligence, to some intelligence that makes a fuss that you can observe over astronomical distances. One of these probabilities of transition must be very, very low, otherwise the universe would be full of aliens making parking lots on the moon and putting up adverts on the Andromeda galaxy.

It would be very obvious if we lived in that kind of universe, so you can say, “Well, it’s obvious we’re alone. The probability of life might be super low, or maybe it’s that life is easy but intelligence is rare.” In that case, we are lucky and we’re fairly alone, which might be a bit sad, but it also means we’re responsible for the rest of the universe and the silence in the sky doesn’t actually say anything bad. The problem is, of course, that it also might be that intelligence is actually fairly common, but it doesn’t survive; there is something very dangerous about being an intelligent species. You tend to wipe yourself out or become something inert. Maybe all civilizations quickly discover something like World of Warcraft or other games and succumb to that. Or some other, more subtle convergent threat. Except that many of the explanations of what that bad and dangerous thing is are very strange.

It’s an interesting question, when we know that certain boxes shouldn’t be opened. Sometimes we can have a priori understanding. In this research field, whatever effects it has tend to be local. So if we open that box and there is bad stuff inside, there will be local disasters. That might be much more acceptable than some other fields where the effects tend to be global: if something bad exists in the box, it might affect the entire world. This is, for example, why I think we should be careful about technologies that produce something self-replicating, whether that is computer viruses, biological organisms, or artificial intelligences that can copy themselves. Or maybe even memes and ideas that can spread from mind to mind.

We want to avoid existential risk that could mean that we would never get this grand future. We might want to avoid doing stupid things that limit our future. We might want to avoid doing things that create enormous suffering or disvalue in these futures. So, what I’ve been talking about here is kind of our understanding of how big the future is, and then that leads to questions like, “What do we need to figure out right now to get it?” Some things are obvious, like reducing existential risk. Making sure we survive and thrive. Making sure we have an open future.

Some of it might be more subtle, like how do we coordinate once we start spreading out very far? Right now, we are within one seventh of a second away from each other. All humans are on the same planet or just above it. That’s not going to be true forever. Eventually, we are going to be dispersed so much that you can’t coordinate, and we might want to figure out some things that should be true for all our descendants.


Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, the show about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.

My interview with Anders Sandberg was so entertaining I decided to split it into two episodes, which each cover pretty different themes.

This is the first part and it’s focussed on a range of possible solutions to the Fermi Paradox, and how the universe could actually end up being colonised. We go well beyond the usual introductions you may have heard before.

If you enjoy the episode, please share it on social media and tell your friends they should subscribe to the show!

Without further ado I bring you Anders Sandberg.

Robert Wiblin: Today, I’m speaking with Dr Anders Sandberg. Anders is a senior research fellow at Oxford University’s Future of Humanity Institute, where he looks at low-probability, high-impact risks, estimating the capabilities of future technologies and very long-range futures for humanity. Topics of particular interest include global catastrophic risks, cognitive biases, and cognitive enhancement. Anders has a background in computer science, neuroscience, and medical engineering, and he got his PhD in computational neuroscience at Stockholm University in Sweden for work on neural network modeling of human memory. Thanks for coming on the podcast, Anders.

Anders Sandberg: Thank you for having me here.

Robert Wiblin: We’re going to dive into a bunch of things you’ve studied while at the Future of Humanity Institute, which range from being a bit out there to being extremely out there. But first, how do you decide what to research and work on?

Anders Sandberg: Normally, I try to figure out what the most important thing I could be doing is, and I do that. Except, of course, that in practice, this is not how it works. So while I’m trying to figure out what truly matters, in practice, I tend to go for the most interesting, shiny problem. But when I do constructive procrastination, I have ten projects going on at the same time. When I can’t bring myself to work on one of them, I work on some of the others instead. And then, of course, I get a bit frustrated with those, so I move on to another one.

I’m working on a rather broad front with many things at the same time. This might not be the most effective way of working, but it certainly allows a lot of very interesting cross-fertilization.

Robert Wiblin: Yeah. I think you’re one of the people with the widest interests of almost anyone that I know, and the widest number of different strange things that you’ve studied over the course of your career. And also, I guess, one of the happiest people, as well. Do you think that you’re as cheerful as you seem to be?

Anders Sandberg: I think so. It’s of course very hard to judge your own subjective happiness, but I certainly feel very bubbly, which is actually not always a good thing, because I’m also rather content with whatever situation there is. Yes, there are a lot of frustrations, but I can always say, “Yeah, it’s not that bad.” Sometimes, having a low dynamic range of your mood might be a bit troublesome.

Robert Wiblin: Yeah, interesting. Do you think, on balance, we need people who are unhappy to be agitators to fix problems?

Anders Sandberg: I think you need to have the right kind of mixture. On the one hand, being an optimist means that you have a lot of biases that will make you wrong about many things; but you’re still not going to be very sad when you find out you were wrong about your expectations. The pessimist will, of course, find himself right a lot of the time, but will not become happier. But together, of course, the optimist and pessimist are actually a pretty good team. The optimist will try new things, will suggest that this can be done and try his or her hand at it; the pessimist will point out what’s problematic, what can’t be done. If you get the right dynamic, this is way more effective than having a single person. Of course, most of us have a mixture of optimist and pessimist in our head; we’re switching between the different moods. But sometimes it’s also useful to specialize.

Robert Wiblin: It seems to me like people who are really depressed might be more aware of their problems, but also less motivated to solve them, because they don’t expect to succeed at doing so. But I guess there are some people who have a negative outlook but are extremely animated as a result of it, because they’re frustrated by some injustice, so they really wanna right it. While they have a negative outlook, they’re not necessarily pessimistic about fixing things. Do you agree with that kind of distinction?

Anders Sandberg: I do. Similarly, as an optimist, you can be a very complacent optimist, saying, “Oh, everything is just getting better. I’m just gonna wait until the singularity arrives and everything will be fine,” or a dynamic optimist saying, “Oh, the future could be wonderful. It really looks promising. We need to safeguard it and do things to get there.” Maybe because you just want to get there as quickly as possible, but also it might be because you realize that there are various problems that need to be solved before we get to the glorious future.

Robert Wiblin: Listeners might very well have picked up that you’re Swedish. Nick Bostrom is Swedish as well, right?

Anders Sandberg: Oh, yeah.

Robert Wiblin: The founder of the Future of Humanity Institute. Are there any other Swedes at FHI, or is it just you two?

Anders Sandberg: Not strictly speaking right now, but we certainly have Swedes that have been visiting. There are people in the building, and effective altruists working nearby, who are from Sweden, and there is a shocking number of Swedes interested in our kind of questions. Max Tegmark might be the most famous one. It’s interesting, because there’s a lot of Swedes, but most of them are not in Sweden.

Robert Wiblin: Yeah. Why do you think so many Swedes are involved? Did you know Bostrom before you came to FHI, or is that just a coincidence?

Anders Sandberg: I knew him before. We had been emailing and interacting in the world of transhumanists. Then he pointed out that he had started an institute and was recruiting people, and I managed to get the job. I think we didn’t have that strong a link; it was mostly a coincidence that we happened to be Swedes. I think the deeper root of the ubiquity of Swedes (because, let’s face it, Sweden is not a large country; it’s ten million people) is that it has good education and a fairly consequentialist mindset. Playing the trolley problem game with my niece and nephew was very amusing, because it turns out that even small children immediately get this idea about maximizing the number of saved persons. Except, of course, my youngest nephew, who tried to maximize the number of people run over. But maximizing is important to Swedes.

That, of course, leads to a particular mindset. But the problem in Sweden is that being entrepreneurial, changing the rules, that’s not really done. If you like doing that, then you can seek your fortune outside.

Robert Wiblin: Interesting. You get the education necessary to change things, but then if you actually wanna do it, you have to come to Oxford. I guess there’s a slightly similar phenomenon with Australians. Most listeners will know that I’m Australian; they’ll be able to tell that pretty fast. There’s a lot of Australians involved in effective altruism, but most of them aren’t in Australia. I wonder whether it’s because people are more willing to leave Australia to pursue their fortune and their career than people are willing to leave the UK or the United States.

Anders Sandberg: That could be another reason, just that feeling that, “Okay, there is an outside world and it’s okay to go out there and explore.”

Robert Wiblin: Yeah. I guess if you’re an Australian, Australia feels quite isolated. It is fairly small; it’s only about 20 million people. A bit bigger than Sweden, but not that much. So many of my friends have gone overseas, and those who are really ambitious about pursuing their careers are almost all considering leaving at some point.

Anders Sandberg: Mm-hmm (affirmative).

Robert Wiblin: Let’s dive into a couple of the papers that you have published over the last few years. I’ve been looking at your page on the Future of Humanity Institute’s website. I’ll stick up links to all of the things that we discuss here.

The first, and perhaps most remarkable: you and Toby Ord recently published a paper where you claim to have basically dissolved the Fermi paradox, which is a paradox that a lot of listeners will have heard of. What’s the story there?

Anders Sandberg: Well, the basic question that made us interested in the Fermi paradox in the first place is, does the silence of the sky foretell our doom? We really wonder if the evidence that the universe seems to be pretty devoid of intelligent life is a sign that our future is in danger, that there are some bad things ahead for us. One way of reasoning about this is the great filter idea from Robin Hanson. There has to be some unlikely step in going from inanimate matter to life, to intelligence, to some intelligence that makes a fuss that you can observe over astronomical distances. One of these probabilities of transition must be very, very low, otherwise the universe would be full of aliens making parking lots on the moon and putting up adverts on the Andromeda galaxy.

Robert Wiblin: It would be like Rick and Morty.

Anders Sandberg: Exactly. It would be very obvious if we lived in that kind of universe, so you can say, “Well, it’s obvious we’re alone. The probability of life might be super low, or maybe it’s that life is easy but intelligence is rare.” In that case, we are lucky and we’re fairly alone, which might be a bit sad, but it also means we’re responsible for the rest of the universe and the silence in the sky doesn’t actually say anything bad. The problem is, of course, that it also might be that intelligence is actually fairly common, but it doesn’t survive; there is something very dangerous about being an intelligent species. You tend to wipe yourself out or become something inert. Maybe all civilizations quickly discover something like World of Warcraft or other games and succumb to that. Or some other, more subtle convergent threat. Except that many of the explanations of what that bad and dangerous thing is are very strange. So we started thinking about the Fermi paradox and tried to see: did people make some assumption here that was wrong or weird?

When you start thinking about the Fermi paradox, basically what you do is something like the Drake equation. You say, “The universe is really, really big. There are a lot of sites where life and intelligence could have emerged, and a lot of time for them to do so.” Then you multiply that by some probability of intelligence, of a technological civilization emerging. Typically people then say, “Yeah, you have a very big number, and you multiply it by a small number.” But that first number is so literally astronomically big that you ought to get a lot of civilizations. We don’t see them, and that’s weird.

Robert Wiblin: What is, roughly, the number of stars, say, in the observable universe?

Anders Sandberg: We have about 300 billion stars in the Milky Way alone, and then you have, in the observable universe, many hundreds of billions of galaxies. We’re literally talking about Carl Sagan level of billions of billions, and …

Robert Wiblin: So, it’s what, ten to the 20?

Anders Sandberg: Something like that.

Robert Wiblin: Yeah. Okay. It’s a very large number, but we don’t see any life out there. If you had an intergalactic civilization, you might be able to observe them, for example, blocking out the light coming from stars, because they would be using it as an enormous solar power plant: you’d see all of this mass, all of these stars out there, but you couldn’t see the light coming from them. So we might have a shot at detecting life even very far away, but we don’t see that.

Anders Sandberg: Yeah. There has been an interesting survey called Ĝ (“G-hat”) [00:09:56] that actually looked at different galaxies and their energy emissions, and tried to see whether there are any galaxies that look like something is absorbing a lot of their energy. While they found some peculiar galaxies, nothing really seemed to fit the bill. We know that, at least, out of the 500,000 galaxies they watched, none of them hosts this kind of super-civilization.

Robert Wiblin: Okay, so we’re trying to explain: how is that? What’s the problem people find? Why has this been something people have been scratching their heads about for 60, 70 years?

Anders Sandberg: Our answer is that people have been looking at point estimates and not taking uncertainty into account in the right way. At this point, many people in the field would protest and say, “Hey, we really know that we are uncertain. You are being unfair to us.” If I caricature the typical argument people give, it’s something like this: they discuss the different factors and say, “Yeah, that one has roughly this value. The probability of life, we don’t know, but maybe we can make a guess. A very uncertain guess: one in a million, or one in a hundred.”

Then you get to the end of the calculation, you have a number, and you say, “Well, this is of course based on my guesstimates, and many of them are really uncertain. This number should be taken with an enormous pinch of salt.” However, it ends up in the ballpark of something. There is an internal joke in the SETI community that there are two schools. One school tends to think, “Yeah, there should be thousands or millions of civilizations in the Milky Way, so we should expect to be able to communicate with the galactic club.” The other thinks, “No, our calculation ends up with roughly one, and that’s us, so we’re alone in the Milky Way.”

Now, what went wrong in this calculation? Well, the handling of uncertainty was bad. When I claim that I’m a bit uncertain about how many inhabitants live in the Greater London region, that doesn’t mean that if I say 10 million, that was a good estimate. I could instead say, “I have a confidence interval between one million and 30 million.” Now I’m very confident that the true value is somewhere in there, and then I can do other calculations that make use of these confidence intervals.

As I learn more, these intervals get smaller. For example, the number of stars in the Milky Way: we are somewhat uncertain about it, but we can do a lot of observations, and that number will converge rather nicely. We’re really uncertain about the probability of life emerging on an Earth-like planet; I will get back to that a bit later. The thing is, you want to multiply together these uncertainties and end up with a distribution of uncertainty rather than a single number, because if you just multiply individual point estimates, you will typically get the impression of a central, typical value that is very different from what the distribution actually implies.

Robert Wiblin: Does the problem appear when you’re multiplying many different things that you’re uncertain about together?

Anders Sandberg: Yes, this is a generic problem. Another reason we’re interested in this is that it applies not just to the question of alien life, but also to thinking about risk. For example, if you’re designing a bio lab and you want to know the probability of a lab accident causing a pandemic, you can make a similar kind of calculation. You can say, “Well, what’s the probability that we’re working on a really pathogenic virus, times the probability that someone drops a test tube, times the probability of somebody actually contracting that disease, getting outside the lab, and spreading it?” You multiply together these uncertain things.

Some of these you have a pretty decent idea about; some of them are just guesswork. Together, they give you a probability distribution, and that can give you a very different answer. If you go back to the alien example, imagine that we have nine factors, and each could be between zero and 0.2. If we just take the median values, 0.1, and multiply them together, we’d say, “It’s one chance in a billion; with the 300 billion stars in the Milky Way, we’d expect 300 civilizations.” The probability seems to be very much against us being all alone.
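The nine-factor toy example can be checked with a few lines of Monte Carlo. This is just a sketch of the made-up numbers from the conversation, not the actual model or priors from the paper:

```python
import random
import statistics

random.seed(0)        # for reproducibility
N_STARS = 3e11        # roughly 300 billion stars in the Milky Way
SAMPLES = 200_000

civs = []
for _ in range(SAMPLES):
    p = 1.0
    for _ in range(9):                 # nine factors, each uniform on [0, 0.2]
        p *= random.uniform(0.0, 0.2)
    civs.append(p * N_STARS)

mean_civs = statistics.fmean(civs)              # close to the point estimate of 300
p_alone = sum(n < 1 for n in civs) / SAMPLES    # yet a sizable chance of N < 1
print(f"mean ~ {mean_civs:.0f}, P(fewer than one civilization) ~ {p_alone:.2f}")
```

The mean number of civilizations is dominated by rare draws where every factor happens to be large, while a substantial minority of draws yield fewer than one civilization in the whole galaxy, which is exactly the gap between the point estimate and the distribution.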

If you do it more carefully, if you actually multiply together these confidence intervals, the math of course gets slightly messier (which is one reason people don’t like doing it), but you get a distribution that actually assigns a very high probability to us being alone in the galaxy, because it could turn out that some of these uncertain parameters have fairly low values. This is what has gone wrong in a lot of these naïve applications. Now, serious researchers will say, “Yeah, I’m not that stupid. I’m actually thinking carefully about it.” But there is a surprising amount of thinking that is strongly predicated on these point estimates.

Now, when you try to do it right, you actually find that you get a tail of low probability that is very hard to avoid. Take all the papers people have published about alien life that give estimates for these parameters of the Drake equation. They are, of course, all written by optimists, who like writing about our chances of contacting alien life, rather than pessimists who think this is all a waste of time. Then you just randomly re-sample them.

You make a distribution out of what people said in the literature, and you end up with about an 8% chance that we’re alone in the visible universe. This is just based on the optimists’ papers. You end up with about, I think, a 20 to 30% chance that we’re alone in the Milky Way, again based on these optimistic papers. If you then try to make better estimates of the real uncertainty based on current science for some of these parameters, you get even [inaudible 00:15:24] uncertainties. We have, essentially, a hundred orders of magnitude of uncertainty about how likely life is to emerge on planets.

It could be that spontaneous generation of life is natural: as soon as you have a puddle with the right organic stuff, life will tend to form. We can’t rule that out. On the other hand, it might be that it actually takes almost a thermodynamic miracle. Similarly with whether that life can evolve into intelligence: there are good reasons to be really uncertain about whether you can get advanced, complex life. It could be that life is super common in the universe, but most of it is very simple bacteria with genetic coding systems that are very easy to evolve but then very rigid, so you can’t evolve much further.

We happen to have one that might have just the right level of complexity. When you put that uncertainty in, you get a much, much bigger range. It’s interesting, because this whole model is armchair astrobiology: we haven’t left the room, we’re just reading scientific papers and making estimates. You can still be very optimistic and say that the mean number of civilizations might be in the hundreds, but you can’t avoid this tail suggesting a pretty high likelihood that we’re alone.

Now, this leads to an interesting conclusion: an empty sky is not that surprising. We shouldn’t be shocked. It’s not much of a paradox, it’s just an observation.

Robert Wiblin: When you consider that you can’t rule out that some of the factors that go into the Drake equation (like how often life spontaneously appears from non-life, or how often intelligence evolves from bacteria) might be extremely close to zero, it’s not that surprising if the expected number of intelligent civilizations in the universe is less than one. Even if the average number of intelligent intergalactic civilizations we expect might be 100, or 1,000, or 10,000, it could also very easily be one or zero.

Anders Sandberg: Yes, exactly.

Robert Wiblin: I guess we look up at the sky and we can see that it’s probably not 10,000, and probably not 100,000, or we would be observing some of them. So we have quite a bit of evidence to say, “Well, before, we thought the probability of us being alone in the universe was 10%, and now when we look up at the sky, maybe it’s 30, or 40, or 50%.”

Anders Sandberg: That’s also true, and that leads to the really good news that comes out of all this mathematical treatment. The first half showed that the Fermi paradox isn’t terribly paradoxical; that’s kind of nice. When we actually look outside the library, we see that there are no UFOs mis-parked on the street below. We haven’t seen any super-civilizations in nearby galaxies. There don’t seem to be many crashed alien spacecraft on the moon, and so on. That gives us an update that rules out the Rick and Morty universe full of aliens.

It’s a very weak update. We haven’t been observing that long, and we haven’t been observing that carefully, but we can say, “Yeah, there can’t be more than one civilization per solar system,” at the very least. You cut off that extreme tail. Now you can do the math, go back, and see how that updates the parameters going in. The interesting part here is that the least uncertain parameters don’t change very much. The rate of star formation or the number of stars in the Milky Way doesn’t get changed by the fact that I don’t see a UFO parked on the street outside, which is as it should be; otherwise our theory would be rather crazy.

However, the really uncertain parameters get affected a lot. Which are the really uncertain parameters? Well, those seem to be the probability of getting life and the probability of life having the capacity to evolve into intelligence. Here we have hundreds of orders of magnitude of uncertainty. They move several orders of magnitude in size just from this very weak observation that there are no really nearby aliens. Now, the Drake equation also has that famous, or infamous, last term: the lifespan of a civilization, the average time a civilization remains able to communicate.

That’s, of course, the real reason we care about this: we want to know our own future. It’s very uncertain. We know that it must be more than a few decades, because we have survived that long and we don’t seem to be super lucky. It has to be less than 10 billion years, because otherwise the Drake equation doesn’t function; it’s built on the assumption of a steady-state universe, and there are some technicalities here. That means we have about seven orders of magnitude of uncertainty, which is much less than the uncertainty about life. Our observation that we don’t seem to be living in this Star Wars or Rick and Morty universe full of aliens means that we become slightly more pessimistic about the lifespans of civilizations, but not that much. But we move a lot on the probability of life. Now, getting back to thinking about the Great Filter: this suggests that the Great Filter is an early one. The reason we don’t see a lot of the universe inhabited by intelligence is that intelligence is actually rare, rather than that it gets wiped out by something scary.

Robert Wiblin: So we observe that there’s not much life in the universe, as far as we can tell. And we’re worried: is the thing that causes that the risks we face going forward from where we are now, or is it risks that we’ve already managed to cross in the past, like the development of intelligence or industrial mechanization, that kind of thing? And you’re saying we should mostly update in favor of it having been a filter that we’ve already passed, rather than a filter that we haven’t yet passed?

Anders Sandberg: Yes.

Robert Wiblin: Why is that?

Anders Sandberg: So this has to do with where the uncertainties are, and we have so much uncertainty about life and intelligence. You can imagine this whole argument as a series of springs of different stiffness that you put together and stretch or compress. The springs that are the most floppy will move the most, while the stiff ones, the ones we are most certain about, will not change much. So this is a statistical argument for why the Great Filter is early and why the silence of the stars is not foretelling our doom.
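The “springs” picture can be illustrated with a toy Bayesian update. The numbers below are invented purely for illustration (they are not the paper’s actual priors): the probability of life gets a very wide log-uniform prior, the civilization lifespan a much narrower one, and we condition on seeing no civilizations to watch which prior moves.

```python
import random
import statistics

random.seed(0)
SAMPLES = 200_000

# Toy log10 priors: f_life is wildly uncertain (a "floppy spring"),
# while the lifespan L in years spans only ~7 orders ("stiff spring").
prior_f = [random.uniform(-40.0, 0.0) for _ in range(SAMPLES)]
prior_L = [random.uniform(2.0, 9.0) for _ in range(SAMPLES)]

# Observation: no visible civilizations, i.e. N = 10 * f_life * L < 1,
# where 10 is a made-up rate factor. In log10 terms: 1 + f + L < 0.
posterior = [(f, L) for f, L in zip(prior_f, prior_L) if 1.0 + f + L < 0.0]

shift_f = abs(statistics.median(f for f, _ in posterior)
              - statistics.median(prior_f))
shift_L = abs(statistics.median(L for _, L in posterior)
              - statistics.median(prior_L))
print(f"log10(f_life) median moved {shift_f:.2f} orders of magnitude")
print(f"log10(L) median moved {shift_L:.2f} orders of magnitude")
```

The wildly uncertain life parameter absorbs almost the entire update, while the lifespan barely moves: the statistical shape of the argument that the Great Filter is early rather than ahead of us.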

You might say, “Okay, this is all statistics. It actually doesn’t give us that much actionable information.” And that’s true; this is still very much based on logical, mathematical reasoning. But it seems to be a new way of going about it: actually applying Bayesian thinking to these existential questions. And of course, if we discover life, or if we discover something about how likely life is to emerge, we learn important things, which are much more powerful than my argument. Evidence will always beat any amount of whiteboard calculation.

Robert Wiblin: Is that true? I mean, sometimes theoretical calculations can be very compelling.

Anders Sandberg: I think theoretical calculations have a lot of strength. The problem is, you need to do them really right. When a calculation actually is totally correct, it has the entire force of the world of logic behind it; that’s a titanic force that can drive enormously long chains of reasoning very well. The problem is that when you are standing there at the whiteboard with your pen, you’re likely to make mistakes. You might overlook possibilities. The probability of a human doing the right action in a situation is typically less than 99%, even for the simplest operations. So if you have a long argument, the probability of there being some slight error somewhere is almost 1. This means that even though I might have a compelling argument that is rhetorically powerful, where every step seems to have good logic, there might be flaws in it. If it’s a very short argument, the risk is of course smaller.

So this is why I’m always a bit skeptical about theoretical arguments when they come alone. You want to have several independent arguments that overlap and show the same conclusion. So if one of them is broken, you can still be fairly certain that the probability of the next one being broken is not very big.

Robert Wiblin: Yeah, that’s a really interesting way of putting it. I suppose it is my experience that sometimes I’ll have a lot of reasoning that I just feel extremely confident about internally, but then when someone else checks it, or when I actually write it out clearly, I realize that I’m wrong. And that happens all the time. And I also think in general that humans are not so great at reasoning. We have systematic blind spots, potentially, where arguments can seem compelling when in fact they’re not, because there’s something that we’re not very good at thinking about.

For example, with this Fermi paper. You can imagine another civilization where they’re very good at thinking about ranges of uncertainty, and that’s always how they do it: they instinctively understand that taking point estimates creates a distorted picture once you multiply things together. But to humans that’s not nearly so obvious. And so we can go about having long discussions that are all based on this misconception at the core of how we’re trying to analyze the problem.

Anders Sandberg: Yeah. And this is a really interesting problem in dealing with low-probability, high-impact risks. So I have a paper that I’m pretty happy with, together with Toby Ord a few years back, where we looked at the debate surrounding the Large Hadron Collider. People were concerned that maybe it could create black holes, or strangelets, or vacuum decay that could destroy the world. And eventually physicists were annoyed enough by lawsuits and other things to actually write a few papers trying to show that this was totally safe.

The problem here is of course: how certain can you be about an argument about safety when we’re talking about something that is on the cutting edge of physics? Some of the arguments physicists gave were really nice. Like: Earth has been hit by cosmic rays for billions of years, so given that, we should expect our little activity to be much less likely to cause anything harmful, and we’re still here, so hence nothing bad could have happened. When you think about that argument, it might be compelling, but you also realize that if Earth had been destroyed, we wouldn’t be around, so you have an observer selection effect. So you can quickly patch for it by saying, “Yeah, look at the moon. If the moon had been destroyed, we would still be here, and we would be noticing a strangelet mass or a black hole orbiting us, and we would know something. But the moon is shining, everything is fine.”

The problem is, you need to patch the argument in quite a lot of ways, and now it becomes a fairly complex argument. So ideally, in order to show that your understanding of the risk is really good, you want independent arguments. You want a cosmic ray argument. You want other statistical arguments, for example about supernovas and other processes in the universe, that are based on other lines of reasoning. And together, even if each argument is fallible and might have, say, a 1% chance of being wrong, if you have 20 of them, now you can be very certain that, yeah, there is no problem here.

This is of course very, very different from how we normally like making arguments in academia. We want a beautiful argument. We want a very compelling, elegant line of reasoning and nothing else on the page. Having 20 different arguments, some of them really awkward, some of them really messy, some of them using totally empirical stuff: that’s not how you get your paper or book published. It doesn’t look good. But it’s much more robust. It’s something you can trust. And this is also how we should probably try to deal with the most problematic risks. We want to have independent groups evaluate them.

Robert Wiblin: Just to explain why this mattered so much in the case of the Large Hadron Collider … Imagine the scientists who were saying that it’s definitely safe, or that there’s only a 1 in a billion billion billion chance of it destroying us. That is correct if their analysis is correct. So if their analysis is correct, the probability is basically zero. But what are the chances that their analysis is correct? Would that be 99% confidence? That’s almost certainly far too high, given all the reasons that you’ve given. But even if there’s only a 1% chance that it’s false, in that case we don’t know whether it’s dangerous, we don’t know whether it’s unprecedented or not, and that 1% multiplied by whatever the likelihood is in that case is going to be much higher than 1 in a billion billion billion, which is the probability if their analysis is right. So basically, in a case where your analysis says that a risk is extremely unlikely, almost all of the risk arises in the case where your analysis is wrong, which might actually be quite likely.
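
Rob’s arithmetic here can be sketched in a few lines. The numbers are invented for illustration; only the 1-in-a-billion-billion-billion figure comes from the conversation, and the conditional risk if the model is flawed is pure assumption:

```python
# Hypothetical numbers, just to show where the total risk comes from.
p_wrong = 0.01          # chance the safety analysis itself is flawed
risk_if_right = 1e-27   # "1 in a billion billion billion"
risk_if_wrong = 1e-6    # conditional risk if the model is flawed (unknown; a guess)

total_risk = (1 - p_wrong) * risk_if_right + p_wrong * risk_if_wrong

# The model-error branch dominates: total_risk is ~1e-8, not ~1e-27.
share_from_model_error = p_wrong * risk_if_wrong / total_risk
```

However small you make `risk_if_wrong`, as long as it is not astronomically tiny, the model-error term swamps the in-model estimate.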

Anders Sandberg: Exactly. And we see this outside of physics and existential risk. A classic example is finance. Long-Term Capital Management estimated that the risk of them going bankrupt was a ten-sigma event: it would essentially never happen in the history of the universe. A few months later, they went bankrupt, because their model was wrong. When our models of reality are wrong, we can’t use them to make an accurate assessment of a risk. This is something I’ve actually been working on in a collaboration with the reinsurance industry, because they’re a bit worried about their catastrophe models. The models are not perfect, and they’re very aware of that, but they’re also more or less the same models that everybody else is using. So all the companies are making decisions that are correlated. If the models are really badly wrong about something, it might mean that an entire industry will be mis-pricing or taking risks we shouldn’t take on a global scale. So we have been working with them to try to figure out ways around this systemic risk.

Robert Wiblin: Mm. I guess another example of this that occurs to me is the Future of Humanity Institute has looked at the likelihood of a pandemic causing human extinction, killing everyone. And my understanding is that your conclusion was that it’s reasonably likely that a pandemic could kill a lot of people, could kill a billion people, possibly even six billion people. But the probability of a pandemic killing all humans is extremely low, because it’s just so hard for a disease to spread to everyone. And also for no one to have resistance to it naturally. Is that right?

Anders Sandberg: Yup, yup.

Robert Wiblin: But of course, you’ve gotta worry about the same issue here. So the probability of human extinction caused by a pandemic is very low if your analysis is correct. But this is a very messy analysis that’s based partly on judgment, on your understanding of what is possible with viruses and what is not, and you’re not sure about that at all. So in fact, most of the risk now probably comes from the scenario in which it turns out that you had a deep misunderstanding about this problem, and your analysis was severely flawed.

Anders Sandberg: Exactly. And the interesting problem here is that it can be flawed in two directions. It could be that we’re too pessimistic: maybe there is an upper limit on pandemic size of just a few hundred million people. In that case, the error doesn’t do much harm.

Robert Wiblin: It just goes to zero again.

Anders Sandberg: Yeah. But if our error is that pandemics somehow have extra-bad effects when they get really large, then that uncertainty means that we badly underestimate the risk. And generally, when dealing with risk, the conservative thing is to assume that the risk is bigger than your result says. If you have an argument that the risk is absolutely zero, then it had better be an extremely strong argument, and typically you can never make it that strong.

Robert Wiblin: So again, in the case of the Large Hadron Collider, I recall that even though you thought people were overestimating how safe it was, you weren’t against using it, because you thought the likelihood of the research from the Large Hadron Collider preventing human extinction was in fact higher than the risk of using it causing human extinction. Is that right?

Anders Sandberg: Yeah. I basically agree with most of the safety arguments; it’s just that the methodology was a bit flawed. If we imagine people building a Hadron Collider 2.0, well, in that case we should do it in a different way, if they were to listen to me. But on the other hand, even some small risks are worth taking, because they might actually help us understand the world better and survive much better in the world. Now, that trade-off is sometimes rather tricky. So, in the case of pandemics, we have this issue of gain-of-function research, where people are actually making pandemic influenzas in the lab, and that has caused a lot of concern that it might lead to lab accidents releasing a pandemic.

The reason people do this research is, of course, that they want to prevent pandemics. So, in some sense, if you do it in the right way, you do something risky in order to reduce risk much more. The battle over gain-of-function research is, of course, over whether it is actually doing something useful. Now, when it comes to understanding physics, it’s an interesting question: how much has that changed our overall existential risk, and how much does it improve our future?

Robert Wiblin: I guess you could argue the discovery of nuclear technology, the ability to split the atom, increased the risk. So maybe we don’t want more fundamental research in physics. Maybe we just want to keep Pandora’s box closed.

Anders Sandberg: And it’s an interesting question when we know that certain boxes shouldn’t be opened. Sometimes we can have an a priori understanding that whatever effects a research field has will tend to be local. If we open that box and there is bad stuff inside, we get local disasters. That might be much more acceptable than some other fields where the effects tend to be global: if something bad exists in the box, it might affect the entire world. This is, for example, why I think we should be careful about technologies that produce something self-replicating. Whether that is computer viruses, or biological organisms, or artificial intelligences that can copy themselves. Or maybe even memes and ideas that can spread from mind to mind.

That self-replication factor is a hint that this needs much more care than something that just sits there and might maybe blow up locally. So, when it comes to physics, was it a good thing that we discovered nuclear physics? I think maybe the world would have been better if Becquerel had put his piece of pitchblende in the wrong drawer. He actually put it on a photographic plate, and he noticed that he got this weird shading because of some unknown radiation from the pitchblende, and that led to the whole development of nuclear physics. Imagine the other world where he put the rock in a different drawer.

In that world you might still have gotten quantum mechanics, because that was already a big problem among physicists at that time. They would not know anything about atomic nuclei or anything like that, but they would be very happy to deal with electrons. They would perhaps discover quantum mechanics and a lot of important fundamental physics, including perhaps semiconductors and the things that are useful for computers. But we would get nuclear weapons and nuclear power much later. That might indeed have been a safer and better world. The problem was that we couldn’t make this decision. It’s not like you could advise Becquerel on whether to put that rock in the drawer with the photographic film or not. That was pure serendipity.

You couldn’t advise Marie Curie that her research was risky unless you yourself knew all the consequences. It was just basic chemistry, trying to understand what was going on. The point where it becomes obvious that nuclear power is actually something powerful and dangerous is about the time Leo Szilard started to come up with chain-reaction ideas. By that point it’s kind of too late, because a lot of other people were thinking about it too. So we can’t control that very much. We can control the kinds of experiments we do, and we can certainly try to find the safety-enhancing technologies earlier than the risky ones.

I think this is also linked to an important concept, the technology completion conjecture: in the long run, more or less every technology that can be done, somebody will do. Even if it’s an obviously stupid idea, somebody will be that guy. Sometimes it might take an extremely long time, but sooner or later somebody’s just going to do it as an art project, or something.

Robert Wiblin: Unless you have some strong central control to prevent that.

Anders Sandberg: Yeah, but …

Robert Wiblin: But setting that aside.

Anders Sandberg: Yeah, well, I think it’s an interesting challenge, because the obvious thing is to say: let’s ban the bad activities, let’s try to impose the right kind of central control. But central control of technology and science has rarely worked very well. Part of it is that if a technology or a form of understanding is really interesting and appealing to people, they will get it, even if it’s illegal. Another problem is that, as I’ve been saying, we can’t predict very well what new ideas will arrive. And attempts to control practical technologies have been a very mixed bag. Some technologies have been very limited because of various taxation rules, so it’s definitely possible to harm the development of a technology by setting rules in the wrong way.

But attempts at banning printing in the West totally failed, although it somewhat worked in Turkey. China banned overseas travel, which limited its naval power for a long time. The interesting thing is that we can speed up beneficial technologies. Quite often we can figure out that biotechnology is going to allow us to make organisms with new properties, so maybe we need a good way of mopping up organisms with new properties if we accidentally let them out of the lab. Or if, once we’ve released a gene drive in the wild, we have second thoughts. We don’t really know how to fix that yet, but we can imagine that there should be technology that can fix it, and we should put in effort to discover it before we release the gene drive or the GMOs.

Similarly, when it comes to artificial intelligence, there are good arguments why powerful artificial intelligence is dangerous and hard to control, so we should figure out ways of controlling it and aligning its values earlier, before we have really powerful AI. So I’m very fond of what I call differential technological development. Think about the potential risks and downsides, and then try to come up with technologies or other fixes. They don’t necessarily have to be devices; they could be a practice, or a tax, or something else. And get that done beforehand. It’s much more positive, too, than trying to ban something. People really hate it when you prevent them from doing something they want. But people love it when you announce a prize for coming up with a better way of safeguarding the ozone layer, or making geoengineering safer, or making AI safer.

Robert Wiblin: Let’s step back to the Fermi paradox. The core error in people’s analyses was using these point estimates and multiplying them together. Can you think of any other examples where people do this and it leads them astray?

Anders Sandberg: I think we do this fairly commonly in most risk estimates and security estimates. Think about the classic calculations of the reliability of a nuclear power plant: typically you imagine a chain of events that leads to a disaster. Most of these steps seem to be fairly unlikely, so you multiply them together, get a really low probability, and then you say, “Yeah, it’s just one chance in a million.” But you’re of course not certain how likely these steps actually are. Especially after a few years, when the engineers think they know what they’re doing, and people are being sloppy, and the contractor has replaced a pipe with something else, these probabilities become much fuzzier.

So when you build the plant, you should actually assume that your probability estimates are uncertain and take that into account: you are going to get tail events. And sometimes bad luck just happens in a very weird, correlated fashion. So there is this concept of normal accidents, where things become linked to each other. When you have a really complex system, things start becoming correlated in ways that you couldn’t expect when you designed it. That also means that the risk of something going wrong suddenly goes way up.

Robert Wiblin: Listeners, take this away as a general rule. If you find yourself trying to estimate something that’s improbable, and you find yourself multiplying different parameters together, and you’re not sure about them, then that’s going to give you a misleading estimate. There’s actually a website where you can try to correct for this called Guesstimate. Have you used that Anders?

Anders Sandberg: Oh yeah, it’s beautiful.

Robert Wiblin: So I’ll put up a link to that, and you should definitely get familiar with it, because we’ve been using it at 80,000 Hours to try to deal with some of these issues. I think the problem is more severe the more unlikely the event is, the more things you’re multiplying together, and the more uncertain each of those parameters is, right?
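
The point-estimate trap is easy to simulate, Guesstimate-style. This sketch uses three invented parameters, each "about 1 in 100" as a point estimate but actually uncertain over two orders of magnitude; the distribution and sample size are arbitrary choices for illustration:

```python
import random

random.seed(0)

# Each parameter's median is 1e-2, but it is log-uniform over 1e-3..1e-1.
def sample_param():
    return 10 ** random.uniform(-3, -1)

point_product = (1e-2) ** 3   # multiplying the three medians: 1e-6

# Propagate the full uncertainty instead of the point estimates.
samples = [sample_param() * sample_param() * sample_param()
           for _ in range(100_000)]
mean_product = sum(samples) / len(samples)   # ~1e-5: 10x the naive product

# How often is the true product 100x SMALLER than the point estimate?
p_tiny = sum(s < point_product / 100 for s in samples) / len(samples)  # ~2%
```

The mean sits well above the median product, yet roughly 2% of the probability mass lies two orders of magnitude below it, which is why a "surprisingly" low outcome, like an empty sky, need not be surprising at all.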

Anders Sandberg: Yeah. To some extent modeling it is also useful because it gives you a sense of where your uncertainties are. Not just about values, but also about the structure of the problem, what could go wrong. One interesting example is earthquake insurance in America. There is this debate about whether there is a fault zone under New Madrid in the Mississippi Valley. There was a really big earthquake there a long time ago, and there hasn’t been much ever since. So some people think that yes, that actually is a fault zone, one that occasionally has very bad events. Others say no, we don’t think so.

The way the insurance world models this in their earthquake models is by making a mixture model between the world where earthquakes go on without the fault zone and the world where that fault zone exists. We don’t know which world we happen to be in, but we want insurance that makes sense in both of these worlds, so you mix them together. So in this case you have a relatively nice either/or chance. In some cases you get even weirder, anti-correlated risks: if you’re really worried about one possibility, then that actually rules out another bad possibility. So you need something more elaborate. And actually thinking about this, and having others critique your thinking, makes it so much more robust.
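
The mixture-model pricing Anders describes can be sketched in a few lines. All the probabilities and credences here are hypothetical, not actual New Madrid figures:

```python
# Hypothetical annual quake probabilities under two incompatible models:
p_quake_fault = 0.005      # world where an active fault zone exists
p_quake_no_fault = 0.0005  # world where it doesn't
credence_fault = 0.3       # how much we believe the fault-zone model

# Price against the mixture, since we don't know which world we're in.
p_quake = credence_fault * p_quake_fault + (1 - credence_fault) * p_quake_no_fault

fair_premium = p_quake * 1_000_000   # expected payout per $1M insured
```

The premium is sensible in both worlds because it weights each model by how much you believe it, rather than betting everything on one being true.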

Robert Wiblin: Let’s move on to another solution that you suggested to the Fermi Paradox, which is the Aestivation Hypothesis. I’d never heard of this word, aestivation? Is that it?

Anders Sandberg: Yeah.

Robert Wiblin: It means hibernation, right?

Anders Sandberg: It’s the exact opposite of hibernation, but means roughly the same thing. So, hibernation, that’s when you’re sleeping through winter. Aestivation is when you sleep through summer.

Robert Wiblin: Mm-hmm (affirmative).

Anders Sandberg: From the Latin for summer, aestas. So the idea here is that maybe there are advanced civilizations in the universe, but they’re all sleeping.

Robert Wiblin: So we’re in summer now in the universe, because the stars are shining. Why would the aliens be hibernating?

Anders Sandberg: It has to do with the thermodynamic cost of doing computation. When you do a computation, you’re essentially moving information around, making changes that matter over time, and that has a cost. You’re essentially reducing entropy when you do a calculation. For example, if you have a register in your computer and you erase what was there previously, you have reduced the entropy, because now you’re certain about the result. The laws of thermodynamics tell you that you can’t, on average, reduce entropy for free; you need to pay for it. So somewhere there is going to be waste heat coming out of your computer. This is obviously true for our current computers, because they run rather warm, but it’s built into the laws of physics; it’s something even the most perfect computer can’t avoid. You basically need to pay a cost proportional to the number of bits of information you erase in your computation, times the temperature.

So now you will immediately say, “Let’s cool down the computer and run it at a very cold temperature. That’s going to fix things.” And if you’re really sophisticated, you can start talking about quantum computers and reversible computing that can get around this to some degree. But the most important part is error correction. Occasionally bits will just flip because of radiation or randomness. You need to correct that, and that always has this energy cost. So your computations cost something proportional to the surrounding temperature.
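
The cost being described is the Landauer limit: erasing a bit at temperature T costs at least k_B·T·ln 2 joules. A minimal sketch of how the cost scales with temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def landauer_cost(bits, temp_kelvin):
    """Minimum energy (J) to erase `bits` bits at temperature `temp_kelvin`."""
    return bits * k_B * temp_kelvin * math.log(2)

room = landauer_cost(1, 300.0)   # one bit at room temperature: ~2.9e-21 J
cmb = landauer_cost(1, 3.0)      # same bit at today's cosmic background: 100x cheaper
```

The cost is linear in temperature, which is the whole engine of the aestivation argument: the same joule erases more bits the colder the surroundings get.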

Most of what we do in life, we don’t think of as computation. We say, “No, I’m having social relations, I’m trading, I’m falling in love. These are definitely not computations.” But when you look deep down, this is actually about changing patterns of information. Maybe it is the change that really matters. And if there is no information change when you fall in love, then there is a real problem.

Robert Wiblin: It’s a bad relationship.

Anders Sandberg: It’s a very bad relationship. It’s also a reversible relationship: you can go back, and nobody will notice anything. So it’s probably not much to talk about.

Robert Wiblin: I guess sometimes you might want to be able to do that, but perhaps you won’t know whether you want to reverse the calculation in any romantic liaison until you’ve already been through it.

Anders Sandberg: Exactly. Evolution is another beautiful example. Evolution is a fundamentally irreversible process, where you create various offspring, and they are subjected to selection effects in nature. Some of them are, unfortunately, erased: they get killed, and the survivors continue to spread their genes. Now, if evolution were reversible, we could un-evolve. It would be just a random drift in genotype space, and that would be tremendously slow. The fact that this is irreversible means that we actually have a bit of momentum in evolution. But it also means that evolution, again, dissipates energy. So we have an energy cost for our activities, regardless of whether that is falling in love, evolving, or running a super-civilization a million years hence.

So, we can say, “Let’s go to a cold place.” And the coldest you can get today is of course facing the cosmic background radiation: three kelvin above absolute zero. By human standards, tremendously cold. So would you run your computations at that temperature? Comparatively, you still wouldn’t get that much done. Because wait a minute, the universe is expanding. It’s actually getting colder, which means that if you wait a bit with your computation, you can do much more. In fact, the accelerating expansion of the universe means that this background temperature is going down exponentially. So if I have a certain amount of energy right now, and I’m willing to wait long enough, I’m going to get an exponentially growing amount of computation done.

Now, for many things we don’t really want to wait any longer. We have discomfort right now: we want to heal the sick today, we want to help the needy, we want to explore the universe and learn the ropes as a civilization. We really shouldn’t be putting that off until tomorrow, even though it might be thermodynamically more efficient. But once you’re a fairly mature civilization and you’ve done the basic things that are nice to do in the material world, your discount rate is going to become much lower. You’re going to care much more about the long-term future. You have a lower existential risk, because you have survived, and by now you actually know how to handle yourself as a mature civilization. At that point, if you just wait for a while, if you aestivate a bit, you get much more computation.

So imagine a really advanced civilization that has seen a lot of the galaxy and expanded across long distances. Once you’ve seen a hundred elliptical galaxies and a hundred spiral galaxies, how many surprises are there going to be? Most of the interesting stuff your civilization is doing is going to be culture, science, philosophy, and all the other internal stuff. The external universe is nice scenery, but you’ve seen much of it. So this leads to the possibility that maybe advanced civilizations actually aestivate. They slow down, they freeze themselves, and wait until a much later era, because they get so much more. And it turns out that you can calculate how much more they can get. So the background temperature of the universe is declining exponentially.

But there’s a lower limit because of the horizon radiation, which has a temperature of ten to the minus 29 kelvin, a ridiculously low temperature. And we’re going to be there in about one and a half trillion years. Actually, it’s going to take a bit longer, because the stars are heating up the whole place, and even when the stars have burned out in a few trillion years, there are going to be brown dwarfs, which are rather warm. But eventually the universe gets down to a cold temperature that will not get any colder. And that is perhaps the natural harvest time. At that point you wake up, you take your resources, and you actually get going on doing whatever it is that makes your super-civilization tick. Now, the interesting thing here is that this leads to a lot of predictions. It’s a fun story to tell; you could make a very good science fiction story out of it. But you can also start to ask: what would this imply in terms of observable things in the universe?

So one thing is, if advanced civilizations that emerge early do this, they don’t want to lose their resources, because those resources are actually worth a lot. It turns out that if you take the mass-energy of Earth today, wait until this very cold far future, and then use it up to do computation, you get about as much computation as you would get today if you used the entire observable universe as raw material for computation. A single planet is worth as much as the entire universe. So you really want to make sure that you don’t lose too many resources. So one thing I was analyzing in this paper was: are we seeing astronomical processes that look like they’re being suppressed, so that they don’t waste too much mass and energy? And depending on what kind of civilization you are, you might also care about different things.
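
The planet-versus-universe claim can be roughly sanity-checked at the Landauer limit. Everything here is a ballpark assumption: Earth’s mass, a rough figure for the ordinary matter in the observable universe, and the far-future horizon temperature at the order of magnitude quoted in the conversation:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.998e8          # speed of light, m/s

def max_bit_erasures(mass_kg, temp_kelvin):
    """Landauer-limited bit erasures from converting mass entirely to energy."""
    return mass_kg * c**2 / (k_B * temp_kelvin * math.log(2))

earth_later = max_bit_erasures(6.0e24, 1e-29)   # Earth, far-future horizon temperature
universe_now = max_bit_erasures(1.5e53, 3.0)    # ordinary matter in observable universe, today

ratio = earth_later / universe_now   # order ~10: same ballpark
```

Within the (large) uncertainties of these inputs, a single planet used in the cold far future really does land in the same ballpark as the whole observable universe used today, which is the comparison being made.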

Some civilizations want energy: they want to run long computations, or a lot of computations. Others want a lot of mass, because they want a lot of memory. Some civilizations might need to keep mass together so it doesn’t get dispersed by the expansion of the universe, because it’s actually important for all the parts to be together. While other civilizations might just say, “We want to convert all that matter into minds that are having blissful experiences at the maximal possible rate. And we don’t care if one blissful cluster of minds can’t talk to another blissful cluster of minds, because as long as we’re happy, that’s fine.” So you can outline the different aims a civilization might have, and then try to see which astronomical processes they would mess with.

And the interesting part is that we can rule some things out. It doesn’t look like anybody is moving galaxies around in the visible universe; that would be very noticeable. And that suggests that there are no civilizations actually trying to stay causally connected, either because there are no civilizations, or because maybe this is not a good idea. Maybe hedonistic utilitarianism is the answer, so most of the universe has been seeded already, and eventually it will all be converted into hedonium. So we can do various observations and rule out some parts of possibility space.

Robert Wiblin: Just to back up a second and make sure that I’ve completely understood: the idea is that if the goal of this civilization is to run calculations, then you can get a lot more calculations done if you wait until you have an extremely cold environment, and indeed the efficiency of the calculations is proportional to one over the temperature in kelvin above absolute zero. And the temperature is going to go from three kelvin now down to ten to the minus thirty kelvin?

Anders Sandberg: Yep.

Robert Wiblin: You would, if you waited until just before the heat death of the universe, basically, get about three times ten to the thirty more calculations done than if you were to do them now, using the same amount of energy.
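
Rob’s multiplier falls straight out of the 1/T scaling of the Landauer cost. A two-line check, taking the far-future temperature to be ten to the minus thirty kelvin as in the exchange above:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def bits_per_joule(temp_kelvin):
    """Bits erasable per joule at the Landauer limit."""
    return 1.0 / (k_B * temp_kelvin * math.log(2))

# Waiting until the background cools from 3 K to ~1e-30 K multiplies
# the computation available from a fixed energy budget by ~3e30.
gain = bits_per_joule(1e-30) / bits_per_joule(3.0)
```

The Boltzmann constant and the ln 2 cancel in the ratio, so the gain is just the ratio of the two temperatures.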

Anders Sandberg: Exactly.

Robert Wiblin: Can you explain why the energetic cost of running a given calculation is proportional to the temperature? Or is that quite complicated?

Anders Sandberg: I think it’s somewhat complicated. It’s related to what’s called the Landauer principle, although actually all the big names in 20th-century physics had something to do with this realization. When you erase one bit of information, you need to pay a small thermodynamic cost. Basically, it has to do with the fact that memory is a state that doesn’t change over time, while the outside world is kind of a heat bath, with a lot of random things going on. You want to push the system into a particular state, and that requires interaction, and the temperature matters. It is pretty profound and important; I think we still don’t fully understand the full picture here.

The physics of information is a fascinating emerging field. The Landauer principle is a first hint that there is actually a strong link between information and thermodynamics. Now quantum computing and quantum information theory are getting in on it too. We’re starting to see the outlines of information being something just as physical as entropy or energy. A century ago, of course, people would not even have thought this was possible. We still don’t fully understand it, and there are some surprises. For example, it turns out that you don’t always have to pay an energy cost for erasing a bit. If you have a very ordered structure, let’s say a planet full of atoms that all have their spins pointed in the same direction, then by randomizing the spins a little bit, you can also erase information.

Basically, there are some resources other than energy you can use, but it doesn’t change the overall picture very much.
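The temperature scaling Anders describes can be sketched numerically. This is an illustrative back-of-the-envelope using the Landauer limit (k·T·ln 2 joules per bit erased); the 10^-30 kelvin far-future temperature is the figure from the conversation, not a precise prediction:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(temp_k):
    """Minimum energy (J) to erase one bit at temperature temp_k."""
    return K_B * temp_k * math.log(2)

# Cosmic background temperature today vs. an illustrative far-future value.
t_now, t_late = 3.0, 1e-30

# Bits erasable per joule improve by exactly the ratio of the temperatures.
gain = landauer_cost(t_now) / landauer_cost(t_late)
print(f"Cost per bit now:  {landauer_cost(t_now):.2e} J")
print(f"Gain from waiting: {gain:.1e}x")
```

The ratio is just t_now / t_late, which is where the roughly 3 × 10^30 figure in the conversation comes from.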

Robert Wiblin: I suppose it’s possible that our understanding of this is wrong, and the reason this isn’t happening is that any civilization would eventually realize it’s wrong and not adopt the strategy, because it won’t work, or there’s something else that’s better. Is that right?

Anders Sandberg: Yeah.

Robert Wiblin: But it sounds like a pretty good approach. Sounds like, maybe, we should do this if humanity manages to stick around for a long time.

Anders Sandberg: I definitely think so. One reason I wrote this paper wasn’t so much the Fermi question as: is this a good scenario for our future? Because when you start thinking about the future of humanity, it’s interesting to consider what should we want to do, and what can we do? How grand could the future be? It’s quite useful to find the laws of physics that actually limit us to the ultimate extent.

Robert Wiblin: Okay, but it doesn’t seem like such a winner, because we don’t see anyone apparently trying to do this. If you were a civilization that’s contemplating this strategy, wouldn’t you be worried about hibernating, basically, or aestivating, and then another civilization appears and just takes over the universe while you’re asleep? You need to be monitoring for that, at least. You might, in fact, want to build quite a robust civilization, in a sense, quite a visible civilization, just in order to make sure that you’re taking care of yourself and you’re going to last a very long time. Is that right? If you’re aestivating, would you expect to be very visible or not very visible?

Anders Sandberg: I think you don’t need to be very visible, in the sense that the thing that you really care about, the bunker where the aliens are sleeping, probably should be hidden very, very well. But you want to have an infrastructure. You want caretakers to make sure your galaxy doesn’t go bad. Most importantly, things might evolve in your stuff. It’s a bit like having a kitchen but leaving it unattended for too long: eventually things evolve in the sink. You want to keep that under control.

Robert Wiblin: You need to have some kind of error correction to make sure that you don’t eventually deviate from the original plan.

Anders Sandberg: Yeah.

Robert Wiblin: Or some part of you doesn’t do that, like a tumor, like a cancer.

Anders Sandberg: Yes. So this leads to other interesting aspects of this scenario. For the aestivation scenario to work, the civilization actually needs to be coordinated enough so everyone can agree, “Let’s sleep until year 2 trillion,” and nobody wakes up early and starts grabbing resources from the others. You need to have pretty strong internal coordination. Then you want to enforce a certain coordination on the material universe as well, in the sense that you don’t lose too much, and also that newcomers don’t invade you, whether because they come from abroad, say an invasion from the neighboring galactic supercluster, or because some monkeys on some planet evolve and start gobbling up planets and making all sorts of mess.

This shades over into what’s sometimes called the zoo hypothesis in discussions about alien intelligence: the idea that the aliens are around, but we don’t notice them; they’re watching us and letting us be. Now, the interesting question with the zoo hypothesis is what actions are not permitted. Because if the aestivation hypothesis is true, we should expect that starting to blow up stars, and doing things that really reduce the value of the long-term future, should be strongly prohibited. Presumably, that also involves sending out self-replicating probes and really doing things. At that-

Robert Wiblin: Creating a kind of virus in the space that they want.

Anders Sandberg: Exactly. So you should expect some process to prevent that. Now, you could imagine the space police. Essentially you put little robots in the systems and when a civilization emerges and starts messing around with stuff they’re not supposed to, the space police shows up and tells them, “No, the rules in this galaxy are as follows. Don’t do this, don’t do this, and if you do that, well, that’s totally okay.” We haven’t seen this, yet. It might be an interesting thing to test, of course. It’s a very risky thing to test, for example, by building a self replicating space probe. If nothing stops you from doing that, then you can be fairly certain that you don’t have any overseers handling things.

Testing the zoo hypothesis is probably not a viable approach, though. The problem is, of course, instead of a friendly neighborhood space policeman, you might just have a big asteroid coming in. It is a bit risky to test.

Robert Wiblin: The other question was: if your civilization is planning to do this, you might want to, for example, stop the stars from burning, because that’s going to preserve more energy and more matter until the end of the universe.

Anders Sandberg: That is actually an interesting thing that is probably not true. It seems tremendously wasteful to have stars shining. When you think about the sheer amount of energy they’re releasing, that seems like it’s a total waste. Except that it’s only about 0.5 percent of the mass energy that gets converted into light and heat. The rest just ends up as heavy nuclei. If you can convert mass into energy, you might actually not care too much about stopping stars: if the process of turning off stars is more costly than 0.5% of the total mass energy, then you will not be doing it. I can certainly imagine that I would like to build Dyson shells and scoop up the energy right now and use it for some nice things, but it might be that in the long run this is simply not worth the effort.

It’s interesting that at the scale we’re talking about here, you might actually afford to lose a few galaxies if that streamlines your process. It’s still kind of telling that we don’t see any process preventing the emergence of blue-white heavy stars that will turn into black holes and probably become fairly unusable. There are ways of preventing that from happening, by iron-seeding galactic gas clouds: the increased opacity means that when they collapse, they turn into smaller stars. But recent calculations suggest that maybe you don’t even need to do that.

Because you anyway get so many more long-lived red dwarf stars that you actually don’t need to worry about these blue-white stars. Right now we see them all across the sky, but this is just the very early part of the stelliferous era. Wait a few tens of billions of years and you’re not going to have that many blue-white stars. The universe is going to be much more staid and middle-aged. You’re going to have a lot of orange and red stars. Maybe it’s not actually even worth iron-seeding the clouds because, well, you don’t need it. The interesting thing about this kind of hypothesis is, of course: what’s the value in pursuing it?

I did it partially because I wanted to calculate a rational strategy for humanity, but it also shows how weird many of the possible explanations for the Fermi paradox are. If you don’t accept that we are alone, which as I mentioned earlier I think is actually a plausible outcome, then something is really odd about the universe. Something extremely strange is going on: either something like the zoo hypothesis or the simulation hypothesis, or that intelligence always has very strong sociological convergence to behaving in a particular way, or that it works itself out. You get a really strange and uncomfortable conclusion whatever the answer to the Fermi paradox is.

Robert Wiblin: What probability do you place on this being the explanation for the Fermi paradox?

Anders Sandberg: I think I would give perhaps 10% probability to the aestivation hypothesis. Maybe just because I came up with it, I’m fond of it. I don’t believe in it that strongly. I’ve heard people be surprised: “Why do you even write a paper about something you don’t believe in?” Obviously, they have never met a philosopher, because philosophers do this all the time. Sometimes a line of thought is interesting to pursue, and I think we have learned a lot of things from pursuing this. I think the real value is that we can think and plan for the extremely long-range future. Right now, we need to have a sustainable civilization. We need to fix existential risk.

We need to get our act together and become a more mature civilization. But beyond that point, we probably should think about our long-term future. What’s the pension plan for intelligence in the universe?

Robert Wiblin: You’ve been mapping out the space of possible options. I guess this is one that’s a bit weird, but in your view, not completely implausible. There’s a decent chance that this is what’s going on.

Anders Sandberg: Yeah.

Robert Wiblin: You pointed out that if we start using up the cosmic commons, the aliens that are aestivating might want to wipe us out so that we don’t end up taking things over and drinking their milkshake. But there’s another possibility that you’ve considered, which is the Berserker hypothesis, which is a bit similar. Do you want to explain that?

Anders Sandberg: So, the name of the Berserker hypothesis hasn’t got much to do with inebriated Vikings rushing around. It’s named after a series of science fiction novels by Fred Saberhagen, which are based on the idea that some alien civilization in the remote past built self-replicating killing machines. Not just autonomous lethal weapons, but star-faring autonomous lethal weapons that could build more of themselves. And then they wiped out their creators, and now they’re around wiping out all life they can find.

Now, this idea has been around in the discussions about why we’re not observing any aliens: “Oh, maybe something like this is true. There is something dangerous out there wiping out civilizations. The only surviving civilizations are the very quiet ones that don’t make a fuss of themselves, and that’s why we’re not observing anything.”

This has never been a super popular theory, not just because it’s slightly dark, but also because it seems so darn science fictional, and many people are turned off by that because it seems like you can’t make a rigorous argument about it. But I have been looking into it, and I think I have some decent arguments for why this seems unlikely. You can actually make an ecological argument. If you have self-replicating systems hiding out there amongst the stars, what if somebody released another strain of them? Well, they would need to wipe that one out, of course. It’s also an opponent.

So, when you try to model the ecosystems, you find that it doesn’t seem to be a stable equilibrium situation. Instead you end up with more and more resources being used to build more and more killing machines against the other killing machines and you end up with a universe that looks very different from ours. Basically, it’s gigantic space warfare everywhere, which should be pretty obvious. But there’s not-

Robert Wiblin: So, this would make sense if one group did it, but not if it was a very common thing that almost all civilizations do when they’re very numerous, because then you would see a huge war going on, basically.

Anders Sandberg: Yeah. And even if you don’t have that many civilizations, it needs to be robust. In order to work as an explanation of the Fermi paradox, it needs to be able to withstand a civilization showing up somewhere and trying to get in on the game. So, they need to be effective at wiping out civilizations, and if civilizations manage to launch their own replicators, they must somehow be able to wipe out those replicators, too. And it’s an interesting thing: it took us about 100 years from Hertz sending the first radio waves until we were doing fairly regular space flight.

We still haven’t really colonized the solar system, but we could imagine a counterfactual world where we really pushed for that, and then we would probably have something like the 2001 universe, with the moon base and things like that. So, I think it’s not implausible that it maybe takes 100 to 200 years, if you’re fast, for a civilization to go from radio to being out in space and being very hard to wipe out. So, that would suggest that if you have these self-replicating killer machines out there, they need to intervene within a century. Otherwise, civilizations are going to get away, and you don’t have a proper explanation for the Fermi paradox.

Now, you can do a probability test, given that we’re still around here, and it’s starting to show that it looks pretty implausible that this hypothesis is true. It’s more likely that there actually are no killing machines, because otherwise the system would seem to not work very well. Now, there is a sting in the tail, though, because what’s the best defense against self-replicating machines that want to attack you? Well, you want your own defense machines. You want your police system. You want to actually have your own defenses out there, and if they notice some alien device trying to blow up your star, or throw asteroids at you, or do something else nasty, they should stop it.

So, you need your police systems, and they need to be widely dispersed and self-repairing, and you basically end up with self-replicating interstellar killing machines. But these ones are on our side, so they’re freedom fighters, and police, and lawful. So, you have a symmetric situation. I think advanced civilizations will probably surround themselves with a swarm of support systems; whether they’re really intelligent and independent or very simple might depend very much on values and safety. But you should expect them to have a lot of autonomous systems that actually support them and protect them. So, that might actually be the most likely artifact we find from an alien civilization.

Robert Wiblin: When I heard this hypothesis, my initial reaction was, “Why would anyone make these probes?” So, if you’re the first group in the universe and you assume that you’re alone, why would you create Berserker killing machines that kill you and kill anything? Is this meant to be an accident or a deliberate strategy?

Anders Sandberg: So, in Saberhagen’s novels, I think the idea was that it was an accident. And you can certainly imagine a civilization that does something stupid. You can also imagine a civilization that has really weird values, saying like, “Oh, life is really bad. We are radical negative utilitarians, we just want to wipe out all life henceforth, so let’s unleash these devices even though they attack us.” That might seem relatively unlikely.

You can also have something like the green aliens that decide, “Oh, the universe looks lovely, and we’re gonna prevent people from industrializing it or changing it too much. So, we actually just want to have these out there keeping things in order.” And then you can, of course, add more and more various assumptions, like somebody thinking, “Maybe there are green aliens out there. We don’t know yet whether we want to industrialize the universe, but we want to retain our options, so we want to have our police systems guarding our space.”

Basically, you have quite a lot of very different goals that lead to the launch of these systems. Certainly there might be some civilizations just going, “What? Why should anybody care about this? We’ve got it nice here on our planet. We’re gonna stay here. We’re not gonna mess with it.” But if they don’t do that, then they’re of course going to be at the mercy of any civilization that does some form of this replicating, spreading influence.

So, it seems to be an attractive point, even though what you then use it for is very dependent on your own aims philosophically and culturally.

Robert Wiblin: But you’ve come up with some arguments for thinking that this actually isn’t as likely as it might first seem. So, what probability do you place on it, all things considered?

Anders Sandberg: I think I would assign much less probability to this, perhaps one percent or less.

Robert Wiblin: That’s a relief, I guess.

Anders Sandberg: It’s a bit of a relief. It’s also an interesting challenge, because we might then consider should we do it? Even if we think that we might be alone in the universe, actually having an expanding infrastructure could be useful, even if we haven’t yet decided what it’s for, especially if there might be something dangerous or scary out there. So, at the very least, many people would say we should be sending out replicating probes just to observe the universe.

Now, the problem here is, of course, if you have something that replicates a number of times, you might worry about it evolving. Errors happen, mutations happen. But many people underestimate how reliable you can make technology. You can actually use error-correcting codes to make that very unlikely: essentially, with probability one, it will not even once in the history of the universe actually change its programming. That is very, very easy to do if you design it well.
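To illustrate the kind of reliability Anders is pointing at, here is a toy redundancy calculation: simple majority voting over replicated copies of a bit. This is a much cruder scheme than the real error-correcting codes a probe would use, and the per-copy error rate is made up:

```python
from math import comb

def majority_failure(p_flip, copies):
    """Probability that a majority vote over `copies` independent replicas
    reports the wrong bit, when each copy flips with probability p_flip."""
    need = copies // 2 + 1  # number of wrong copies needed to win the vote
    return sum(comb(copies, k) * p_flip**k * (1 - p_flip)**(copies - k)
               for k in range(need, copies + 1))

# Failure odds fall off exponentially as copies are added:
for n in (1, 5, 11, 21):
    print(f"{n:2d} copies -> {majority_failure(1e-3, n):.2e}")
```

Real designs use far more efficient codes (Hamming, Reed-Solomon and the like), but the moral is the same: modest redundancy makes corruption astronomically unlikely.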

However, you might also say, “Yeah, but you can also have cultures and settlements. If we go out and colonize space, some of those settlements in space will also colonize.” And now you have another form of replication: not so much our machines, but our cultures, and cultures change. So you could imagine a culture being very keen on colonizing, and they will of course want to send out many colony ships and colonize other planets, and before you know it, you have a lot of pro-colonization planets that are sending out more and more ships, until you eventually approach some kind of interstellar, intergalactic locust storm just living for settling more and more worlds, which might not be a good use of the resources of the universe.

So, this scenario is called Burning the Cosmic Commons; it’s originally by Robin Hanson. But notice what’s going on here: you have a lot of generations, which is what allows cultures to converge to something like that. If you could spread over vast regions without having too many steps, you might avoid this. So, in a paper I did with Stuart Armstrong, we analyzed exactly how far you can spread using the resources of one solar system, and we found that if you take apart the planet Mercury, which today might sound like a very tall order, but on the astronomical scale is actually fairly small, it takes a few decades if you do it at a leisurely pace. You can build enough solar collectors to use an afternoon of sunlight to launch probes to all reachable galaxies.

These probes don’t have to be very large. Essentially you want something that can land on an asteroid, build solar panels, build more equipment, mine the asteroid, and build things including, of course, probes to seed other systems.

Robert Wiblin: So, you just need the seed to get things started?

Anders Sandberg: Yes.

Robert Wiblin: Although I guess it has to be able to grow in a wide variety of different environments, since you don’t know where you’re going to be arriving.

Anders Sandberg: That is an interesting issue, because when we start talking about space colonization, typically we are stuck in this mode of thinking that there has to be the kind of blond guy from the space patrol landing his spacecraft on a weird planet where the sky has some random color. And in that case, you get a lot of very different environments, because we are interested in these very unusual terrestrial planets. But if you look at the solar system, most surfaces are essentially asteroid regolith: dry gravel in microgravity with various levels of water and iron in it.

Robert Wiblin: And solar energy reaching them.

Anders Sandberg: Yeah. They are very, very homogeneous. So, if you can make something that can mine that kind of asteroid, and they seem to be ubiquitous across the universe, then you have an ecological niche which is ridiculously large. Yes, there are gas giants and planetary surfaces that are very different, which these devices would not be directly able to make use of, but this is already enough to build a lot of stuff, including of course maybe habitats where you can then culture cells and have humans grow up, or, if you have artificial intelligence or uploaded minds, live in software.

The point is, the universe is actually very homogeneous, so you could spread across the big universe over ridiculous distances. It’s actually easier to send a space probe over intergalactic distances to another galaxy than across our own galaxy. And the reason is, when you have a space probe that’s moving very fast, if it runs into a piece of gravel: bang. That’s going to do a lot of damage. The closer you are to light speed, the more energy the impact carries, and when you are at a few tens of percent of light speed, impacts start to look like grenades and small nuclear explosions, which is not very good for your probes.

You might want to send more redundancy, but if you’re passing through a dust cloud, you’re pretty likely to run into something. Between the galaxies, though, the coast is very clear. So, if we could send probes that can jump 7.7 light years, then we could hop between stars and reach most of the Milky Way. If they can go only slightly shorter distances, then we’re kind of stranded on a little island. And if you can send a probe 7.7 light years, you can probably send it much further. It turns out that to reach essentially all galaxies within five gigaparsecs, the probes only need to be able to travel a few million light years at a time.

But most likely, if you can travel a few million light years, you can use the same probe and travel a billion light years. So, the main problem in expanding over ridiculously large distances is basically how much dust is in the way, which determines how many probes you need to send or whether you need to route around it, and how fast you can go, which is perhaps the most important thing. Because right now, we have sent a probe that could have reached another solar system: the Voyager probe. It’s not aimed anywhere in particular; it’s just going to float through space. But we could have aimed it so it would eventually get to another solar system in a few tens of thousands of years.

It wouldn’t do anything useful, of course, except bear that plaque and that recording of the music of Earth, but with 1970s technology we could already reach the stars. We might want to go much faster, of course, and that is the interesting part here: speed seems to be what really wins. So, when we’re considering whether we should start colonizing the universe now or wait, it actually turns out that if we wait for a while, as the universe expands, some galaxies become harder to reach, but we presumably have better technology, and once you can move really fast, that’s actually the best way of reaching remote galaxies.

So, you really want to go very close to light speed. Not super close, because then you run into dust. But if you can get up to 90% of light speed, you get most of the reachable universe. There is an upper limit, because there are galaxies we can see, but we can never, ever touch. Even if we went at exactly light speed, we would never catch up with them before they ran away from us because of expansion.
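The trade-off Anders describes, fast enough to beat cosmic expansion but not so fast that dust impacts and energy costs explode, shows up in the relativistic kinetic energy per kilogram of probe. This is a standard special-relativity calculation, not a figure from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy_per_kg(beta):
    """Relativistic kinetic energy (J) of 1 kg moving at beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * C**2

# Cost per kilogram climbs steeply as you push toward light speed:
for beta in (0.5, 0.9, 0.99, 0.999):
    print(f"{beta:5.3f} c -> {kinetic_energy_per_kg(beta):.2e} J/kg")
```

Going from 0.9 c to 0.99 c multiplies the energy bill several times over while adding comparatively little reach, which is one reason “around 90% of light speed” comes out as a sweet spot.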

Robert Wiblin: And how is this probe gonna slow down once it gets to far away galaxies?

Anders Sandberg: So, in our paper, we did models where we imagined that it essentially had a little rocket. So, we assumed the launch was using a coil gun, essentially electromagnetic coils accelerating the probes to a higher and higher speed. You can probably do this much better with a laser, but we didn’t do that calculation in the paper.

But when it arrives, you would have this retro rocket. We analyzed what could happen if you had a nuclear rocket, a fusion rocket, or an antimatter rocket. And that is, of course, pushing the limits of technology; we haven’t built that kind of rocket yet, but we think we understand the physics well enough. However, you might not even need to slow down that way, because the expansion of the universe means that you actually need a lot of velocity just to catch up with remote galaxies.

So, if you’re aiming at the remote galaxy, as you’re traveling, it will move away from you faster and faster, and if you time things right, you will arrive with zero velocity. Your rocket will kind of just drift in and park itself there. So, you might actually not need all of this fancy technology to slow things down. Or you might use things like creating an electromagnetic field around it to slow it down by passing through gas clouds. So, there are quite a lot of options here, but I do find it fun, the fact that the expansion of the universe itself can be used as a braking method.

Robert Wiblin: So, if you were really committed to colonizing as much of the universe as you could, would the best strategy be to spread out to the surface of the sphere that you can reach in every direction, and then once you’ve reached the outer limit, then move back inwards, back towards the core?

Anders Sandberg: Not really, because by the time you reach that outer limit, the farthest galaxy we can reach, the center, the solar system, has become unreachable.

Robert Wiblin: Okay.

Anders Sandberg: Because-

Robert Wiblin: You can’t get back.

Anders Sandberg: Yes. There is actually a distance, roughly halfway out to the horizon that marks the outer limit of what we can reach, which is the furthest point you can go to and still send a message back to Earth. So, if we actually want to inventory everything that exists, see if there are any aliens, and we are committed to staying on Earth ourselves, that is as far as we can go. We can only get about 2.5 gigaparsecs’ worth of data.
So, the best strategy, I think, is you send out probes, a few of them, to every reachable galaxy, and that’s literally millions of galaxies if you only travel at a few percent of light speed, and billions if you’re at around 90% of light speed. And once they have arrived somewhere in that galaxy, at a random star, they pick a random asteroid and start building infrastructure. They build a little Dyson shell, and now they send out 300 billion probes to the stars in that galaxy. So we have two generations from the solar system to every solar system that can be reached.

Now, this of course avoids that Burning the Cosmic Commons problem. We only use a ridiculously small amount of resources, essentially a Mercury-sized planet in every galaxy. Big on our current civilizational scale, of course, but on the cosmological scale this is totally invisible; it’s really tiny. And what do you do with all this? Well, that depends on your values. That depends on the programming you put into the probes. So, it might be that you just gather data. You might set up policing so no aliens or human colonies do anything forbidden. You might colonize it all. You might set it up so that, after aestivating, in the far future you’re gonna fill it with happy human or alien minds. Or something else.

So, this is an estimate of how much power any intelligent species can have over the material universe.
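The two-generation fanout can be tallied with rough numbers. The galaxy count is the “billions” figure from the discussion and the star count is Anders’s 300 billion per galaxy; both are order-of-magnitude illustrations only:

```python
reachable_galaxies = 1e9   # "billions" of galaxies reachable near 90% of light speed
stars_per_galaxy = 3e11    # ~300 billion stars in a Milky Way-like galaxy

generation_1 = reachable_galaxies               # one seed per galaxy from the solar system
generation_2 = generation_1 * stars_per_galaxy  # each seed covers every star in its galaxy

print(f"Generation 1: {generation_1:.0e} galaxies seeded")
print(f"Generation 2: {generation_2:.0e} stellar systems reached")
```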

Robert Wiblin: What’s the name for this strategy of you’re trying to reach everywhere very quickly?

Anders Sandberg: So, the paper, we called that Eternity in Six Hours, but my nickname for the paper is Spamming the Universe.

Robert Wiblin: Why six hours?

Anders Sandberg: Well, it turns out that you only need six hours of sunlight to actually power all the probes to go to all of the reachable galaxies.
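The “six hours” claim can be sanity-checked with a back-of-the-envelope energy budget. The probe mass and perfect launch efficiency below are invented placeholders, not the assumptions of the Armstrong and Sandberg paper:

```python
import math

C = 299_792_458.0   # speed of light, m/s
L_SUN = 3.828e26    # total solar luminosity, W

def launch_energy(mass_kg, beta):
    """Relativistic kinetic energy (J) needed to get mass_kg up to beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

budget = L_SUN * 6 * 3600              # six hours of the Sun's entire output, J
per_probe = launch_energy(500.0, 0.9)  # hypothetical 500 kg probe at 0.9 c

print(f"Six-hour budget: {budget:.2e} J")
print(f"Per probe:       {per_probe:.2e} J")
print(f"Probes launched: {budget / per_probe:.1e}")
```

Even with these generous placeholder numbers, the budget covers on the order of a hundred billion launches, comfortably more than the billions of reachable galaxies.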

Robert Wiblin: Nice. Nice. We’ve done about an hour here on these grand possible futures across the entire universe. Maybe let’s bring it back towards more practical decisions that people might be able to make now.

Anders Sandberg: Well, I think it’s actually worth recognizing why one should want to think about this sort of stuff, because this is really about the value of the very, very far future, and it’s actually the path that led me to think about existential risk and many other, more practical things. As a kid, I read Barrow and Tipler’s The Anthropic Cosmological Principle, which is an amazing work, and the final chapter was talking about intelligence taking over the universe, overcoming the heat death of the universe, and essentially affecting all things affectable. That was kind of my brush with religion as a young nerd.

Over time, their physical theory has had some problems, but the overall idea about what is the value of the far future is important, because that gives us a reason, of course, to try to defend the future. We want to avoid existential risk that could mean that we would never get this grand future. We might want to avoid doing stupid things that limit our future. We might want to avoid doing things that create enormous suffering or disvalue in these futures. So, what I’ve been talking about here is kind of our understanding about how big the future is, and then that leads to questions like, “What do we need to figure out right now to get it?” Some things are obvious, like reducing existential risk. Making sure we survive and thrive. Making sure we have an open future.

Some of it might be more subtle, like how do we coordinate once we start spreading out very far? Right now, we are within one seventh of a second away from each other. All humans are on the same planet or just above it. That’s not going to be true forever. Eventually, we are going to be dispersed so much that you can’t coordinate, and we might want to figure out some things that should be true for all our descendants.

Robert Wiblin: So, your point is that once we’ve spread out across the universe so far, then there’s no way of sending messages to tell everyone what to do. They all have to be working on the original instructions and interpreting them in the right way, otherwise they’ll get out of sync?

Anders Sandberg: Exactly. And for many situations, this is totally fine. You don’t need a central planet to tell you what kind of art or philosophy to work on, perhaps. But we might want to have some ground rules that are always true. If we encounter aliens, well, what are our plans for how to divide the universe with them? Because once you spread over sufficiently long distances, given that it doesn’t look likely that superluminal communication and travel are possible, parts of your civilization will never be in causal contact.

And there might be things for the really far future where we need to coordinate across these causal boundaries. Some really big engineering projects involve actually planning that, “Okay, in a hundred billion years, we want the galaxy clusters to be organized like this.” There might be moral reasons to want to avoid certain moral hazards. And that suggests that we might want to think about how we want to set up a process now, whether that takes years, or decades, or centuries, while we’re all on one planet or in one solar system, before we start doing the grand stuff, so we don’t mess it up too much.

Robert Wiblin: Reminds me slightly of, I think during the British Empire, there were various points when a particular colony could be out of touch with London for months or even years, ’cause it took so long to send messages and people weren’t traveling there very frequently. But it’s a much more extreme example of that where you never get messages back and you can never talk to them anymore. Interesting.

Robert Wiblin: OK that wraps up the first half of my interview with Anders!

In the next section we’ll cover efforts to end ageing, the likelihood of nuclear war and the unilateralist’s curse, among other topics.

In the meantime let your friends know to subscribe!

The 80,000 Hours Podcast is produced by Keiran Harris.

Thanks for joining, talk to you next week.
