Enjoyed the episode? Want to listen later? Subscribe by searching “80,000 Hours” wherever you get your podcasts.

I would be willing to put myself $7 million in debt in order to survive. Yeah, it would be hard to work off, but it would still be worth it … multiply that $7 million by some probability of cryonics working – which I think is better than 5% – and the expected value is big enough that I’m happy to pay the life insurance that funds my cryonics.

Dr Anders Sandberg

Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded?

According to our last guest, Bryan Caplan, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.

Like Stalin, he has designs on his own immortality – including an insurance plan that will cover the cost of cryopreserving himself after he dies – and he thinks the technology to achieve it might be around the corner.

Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University.

The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever – among many other questions.

Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford.

His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:

  • Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done?
  • How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?
  • If biomedical research lets us slow down ageing would culture stagnate under the crushing weight of centenarians?
  • What long-shot drugs can people take in their 70s to stave off death?
  • Can science extend human (waking) life by cutting our need to sleep?
  • How bad would it be if a solar flare took down the electricity grid? Could it happen?
  • If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?
  • Will lifelike robots make us more inclined to dehumanise one another?

The 80,000 Hours Podcast is produced by Keiran Harris.

Key points

When you start thinking about life extension, it has one obvious implication: it’s actually a very good way of saving quality-adjusted life years, because you’re directly trying to save life years. Now, cryonics might be a relatively inefficient way of doing that – it’s probably better from the perspective of preserving your own personal identity.

But given that 100,000 people die each day of age-related conditions, that seems to suggest that quite a lot of value is at stake, so the scope of aging as a threat is tremendously important. It’s also a somewhat neglected area, because for a long time people just assumed that you can’t do anything about aging – it’s a law of nature. Now we’re starting to understand the science behind aging, and it can actually be modified. There are ways of doing it.

70 years of nuclear peace means that, well, maybe the world is really safe. Maybe political decision-makers are very careful and safeguards are really good, or it might just be that we’re one of the few planets where nuclear weapons exist that has been really, really lucky and has some surviving observers. So my paper is about the question, “Can we tell whether we live in a safe world or a risky world?”

If you find something that is highly conserved in evolution, you should suspect it’s important, even if you don’t understand what’s going on. Sleep seems to be one of these things. It seems that being unconscious, even though predators might get you while you’re asleep, is still worth it. We don’t really know why we sleep, and messing with it – removing it – might actually be a very bad idea.

At the same time, there seems to be a high degree of value in improving sleep. At the very least, we should make sure that we can sleep well, because it affects our function and health tremendously. People who sleep too much or too little have much higher mortality.

Articles, books and blog posts discussed in the show

Learn more about relevant career options

Transcript

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, the show about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.

This is the second part of my conversation with Dr Anders Sandberg.

The first episode came out three weeks ago. If you haven’t heard it already there’s no need to listen to it first, as each half covers different topics.

But hopefully after listening to this half you’ll want to go back and hear the other half: it’s episode 29 – Anders Sandberg on 3 new resolutions for the Fermi paradox & how to colonise the universe.

Just as a reminder, Anders is a researcher at Oxford University’s Future of Humanity Institute, where he looks at low-probability, high-impact risks and estimates the capabilities of future technologies. He got his PhD in computational neuroscience for work on neural network modeling of human memory.

If you enjoy the episode be sure to post it on social media and let your friends know about the show.

Without further ado, here’s more of everyone’s favourite Swedish polymath.

Robert Wiblin: I know that you’ve got a cryonics plan, right? So, you’re hoping to live for quite a long time yourself, ideally. Do you think that’s an important priority that we should be focusing on now, ensuring that people like you and me don’t die?

Anders Sandberg: I think individually, it’s a pretty important priority. Is it important as a shared priority? Well, it depends. When you start thinking about life extension, it has one obvious implication: it’s actually a very good way of saving quality-adjusted life years, because you’re directly trying to save life years. Now, cryonics might be a relatively inefficient way of doing that – it’s probably better from the perspective of preserving your own personal identity.

But given that 100,000 people die each day of age-related conditions, that seems to suggest that quite a lot of value is at stake, so the scope of aging as a threat is tremendously important. It’s also a somewhat neglected area, because for a long time people just assumed that you can’t do anything about aging – it’s a law of nature. Now we’re starting to understand the science behind aging, and it can actually be modified. There are ways of doing it.

But a lot of people in biology or gerontology are very loath to talk about life extension or slowing aging, because it sounds so much like snake oil and crazy alchemy. So I think this is an area that is both somewhat tractable and neglected, and the scope is tremendous.

Robert Wiblin: So, you’ve got your cryonics plan, but you’re saying that’s probably not that cost-effective, because it’s reasonably expensive per person – not incredibly expensive, but reasonably expensive – and also probably won’t work. It might work, but it’s likely not to. But there are other approaches that could extend our lives quite a lot – perhaps enough that we can always stick around long enough to benefit from future advances that let us keep stringing our lives along – and those approaches look more like biomedical research into how to slow down aging. And that looks much more cost-effective per person.

Anders Sandberg: I think so, yeah. When I think about cryonics, I do this mental calculation: “Well, what’s the value of my life to me?” Even if I just use some standard statistical value of life, like $7 million, I can totally imagine that I would be willing to put myself $7 million in debt in order to survive. Yeah, it would be hard to work off, but it would still be worth being alive. Multiply that by some probability of cryonics working – and I think it’s better than a five percent chance – and the value of cryonics is big enough that I’m happy to pay the life insurance that pays for my cryonics.
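Anders’ back-of-the-envelope argument can be written out explicitly. A minimal sketch using the figures from the conversation (a $7 million statistical value of life and his 5% lower bound on cryonics working); the comparison to cryonics costs in the comment is illustrative, not a quoted price:

```python
# Anders' rough expected-value argument for cryonics,
# using the figures he gives in the conversation.
value_of_life = 7_000_000    # a standard statistical value of life, in USD
p_cryonics_works = 0.05      # his stated lower bound on the probability it works

expected_value = p_cryonics_works * value_of_life
print(f"Expected value of cryopreservation: ${expected_value:,.0f}")
# 0.05 * $7,000,000 = $350,000 – large enough, on this argument, to justify
# paying for a life insurance policy that funds the procedure.
```

On this framing, the decision turns almost entirely on the probability estimate: at 0.1% rather than 5%, the expected value drops to $7,000 and the argument no longer goes through.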

But that’s of course only good for me. Maybe I’m a nice person and actually adding some value to other people. I’m certainly hoping so, but it still seems to be relatively conceited to argue that, “Oh, I’m worth so much that it’s cost effective keeping me alive.”

Robert Wiblin: Yeah, rather than just funding new education for new people or something like that.

Anders Sandberg: Exactly. So, that is actually an interesting philosophical argument about life extension. Why do we want to have current humans alive long into the future when we could just as easily replace them with other humans? From the point of view of the sheer experienced value of being alive, I think there is not much difference here. However, we lose a lot of things when people die. Life experiences are gone, and all that human capital that has been slowly amassed – in some cases, even wisdom learned through a lifetime. Not always, but often enough. A lot is lost not just when people eventually die, but during the long period of decline, where you might actually have a lot of knowledge and experience but no longer the energy to use it.

So, you have old people who actually know important things, but they can’t work on making them real.

Robert Wiblin: And I suppose even before, people would die or get so sick that they can’t work anymore. People reach kind of a peak, perhaps, of wisdom around 60, and at that point, the extra wisdom that they’re accumulating each year is outweighed by the brain kind of decaying and forgetting all the things that they’ve learned. So, that’s kind of when you’re about the most wise, and then you start going back down again. But we can prevent that and just allow ourselves to become wiser and wiser almost indefinitely.

Anders Sandberg: Yeah. It might be that eventually it levels off. It might be that eventually a human brain cannot hold an arbitrary amount of information, so you would start forgetting something.

Robert Wiblin: But we don’t seem to be at that limit yet.

Anders Sandberg: No, not really. We certainly haven’t seen anybody having the problem of “memory full”. And there are some people who’ve got these tremendous autobiographical memories, which can actually be kind of paralyzing, because they have a hard time generalizing. It’s almost like Borges’s short story “Funes the Memorious”, but these people actually exist for real. Yet even they don’t run out of memory. Our brains have a tremendous capacity.

Robert Wiblin: It’s actually quite amazing, ’cause you would think there’s just only so many images and so many sounds and so on that the brain can store as a memory, but apparently it’s a very large amount.

Anders Sandberg: Yeah. And part of the secret is of course, we actually don’t store it photographically. It’s not like a videotape. But rather we have a representation, and even the people with this super autobiographical memory, they actually don’t exactly remember things as they truly were, but make abstractions.

Robert Wiblin: So, it’s more like we remember some words that describe it and then our brain regenerates it from the description.

Anders Sandberg: Yeah. And in most of us, of course, we do an enormous amount of regeneration, which is also why you shouldn’t trust your own memory very much, because yeah, that’s an interpretation of what happened, but it’s not actually what happened. This is why witness testimony is so problematic in court. It’s very compelling, because it’s a person telling a story, but it also has relatively little to do with what actually transpired.

Robert Wiblin: Yeah. Okay. So, let’s pick up from that. You were saying that there’s benefits from extending people’s lives and preventing aging and you’d have wiser people sticking around longer.

Anders Sandberg: And not just wise. Also productive people, people with skills, and people able to combine different careers. I think it’s quite important to have different experiences, because that enriches the way you can solve problems. Somebody who’s only been working on one kind of task all their life is not going to be very flexible when interesting things happen. You want to have different kinds of experience, and you might not even know what kind of experience will turn out to be useful. But typically, there is a multiplicative effect when you can combine some social skills, some scientific skills, some economic skills over time. And for that, you need a lot of time. It’s not like you can just go to school and learn it all and then make use of it.

Robert Wiblin: Okay. But is this a top priority? I guess, you’re at the Future of Humanity Institute. In your view, I imagine there’s a significant risk that we’re gonna go extinct, that we could have a nuclear war, we could have a pandemic, could invent some new technology that’s very dangerous. You’re particularly worried about artificial intelligence, I guess. How many resources should we be putting into life extension?

Anders Sandberg: Well, to some extent, it’s more a matter of how you allocate resources within already existing budgets. So right now, for example, the World Health Organization is thinking about its priorities for 2019 and onward, and they were arguably making a big mistake by not making aging a big priority [fortunately, the WHO later responded to public comments and included it], because aging is responsible for almost half of the lost life years. Even if you disregard aging per se and just care about cardiovascular disease, Alzheimer’s, diabetes, and cancers – aging is the root cause of many of these. Even if you just say, “We want to fix these diseases, we don’t really want to extend life,” doing something about the aging process is probably the most effective way of doing that.

It’s of course tougher than fixing individual aspects of cancer or diabetes, but it probably has a bigger long-term payoff. Now, in the big picture, would I say, “Let’s take some existential risk money and move it over to life extension, or vice versa”? I think right now existential risk might actually be more important in the large. However, we have quite a lot of misallocation in our health budget. Right now we’re treating quite a lot of symptoms.

The US National Institute on Aging is doing a lot of tremendous research on the diseases caused by aging, and relatively little on the root cause of aging. Intervening against aging is starting to work really well in the lab. There is some very interesting work on the hallmarks of aging – the fundamental biochemical and biophysical processes underlying it – and we even know ways of intervening in most of them.

In many cases, of course, moving that out of theory in the lab and into the clinic is going to take a long while. But it seems to be a worthwhile thing to do, especially given that we have a serious demographic problem. We’re getting an older population, and if we have a large older population that is also suffering from a lot of chronic disease, that’s a tremendous amount of pain, lost opportunity, and cost for society. So I do think this needs to be a much higher priority.

I do think the concerns about existential risk probably trump that, but when we get into issues of health and economic development, we probably want to push much more into slowing down aging.

Robert Wiblin: Okay. So, we shouldn’t fire you and take your salary to fund anti-aging research, but maybe we should divert a few percent, or 10 percent of our healthcare budget towards anti-aging research. In the immediate term, that might be bad for people’s health, because the research is going to take a while to pay dividends and they’ll have a smaller budget, but pretty quickly, you think, that the returns from that research would be so high that within 10 years’ time, even though the budget for directly providing healthcare is lower, we’d still be better off because we would have learned how to prevent aging, and people would have far fewer diseases to treat at that point.

Anders Sandberg: Exactly. It’s very much a long-term investment, and you also get this very nice spin-off effect. If people who are older are healthier, they’re of course more productive citizens: they’re paying more taxes, and they’re not costing as much by being in hospital, so you get what researchers call the longevity dividend. You actually get quite a lot of economic value to society this way. This is a bit like trying to reduce parasite burden and improve schooling in developing countries. Once the kids get better brain development and better information to the brain, they help the local economic growth rate quite a bit, and that has a lot of positive knock-on effects. You want to get these positive feedback loops going.

Robert Wiblin: So what about the possibility that life extension would actually be bad for society as a whole? I’ve heard a bunch of different theories, some that I don’t agree with that much, but some that seem plausible. One thing is in the past very often bad governments, authoritarian governments, have ended when the leader of the country dies because then there’s a reshuffling of who might be powerful, and you potentially get a transition to democracy just by increasing the variability of the outcome. If Kim Jong-un could live basically indefinitely, then it seems like North Korea’s prospects are much worse, because he’ll just be able to remain in power almost indefinitely, and he’s not going to die, and there’s not going to be any period of turbulence during which things could improve. Have you thought about that issue?

Anders Sandberg: Actually I have. So I’ve been playing around with a little statistical model of the role of aging in getting rid of bad political leaders. It turns out that political scientists have built lovely databases of political leaders, and you can get indices – you can even measure which ones are most authoritarian. Then you can fit a statistical model – a Cox proportional hazards model, for those who are interested – to see the role of age in changing the probability of losing power. We can use this to model a world where these leaders don’t age, and on average they stay in power four more years.
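The modelling idea can be illustrated with a toy simulation. To be clear, this is a hedged sketch: the hazard shapes and every number below are invented for illustration, and this is not Anders’ Cox model or his leader data – it only shows why removing the aging term from a mostly political hazard adds relatively few years:

```python
import math

def expected_tenure(aging=True, start_age=50, horizon=80):
    """Expected years in power under a toy discrete-time hazard model.

    Each year, the leader loses power with probability equal to the sum of:
      - a political hazard: high at first (many rivals), decaying as
        competitors are purged, settling at a constant background rate;
      - an aging hazard: growing exponentially with biological age
        (Gompertz-like), switched off for 'ageless' leaders.
    All parameters are invented for illustration, not fitted to data.
    """
    in_power = 1.0   # probability of still being in power
    years = 0.0      # accumulated expected tenure
    for t in range(horizon):
        political = 0.04 + 0.10 * math.exp(-t / 3)
        senescence = 0.005 * math.exp((start_age + t - 50) / 10) if aging else 0.0
        years += in_power
        in_power *= 1 - min(political + senescence, 1.0)
    return years

with_aging = expected_tenure(aging=True)
no_aging = expected_tenure(aging=False)
print(f"Expected tenure, aging leaders:   {with_aging:.1f} years")
print(f"Expected tenure, ageless leaders: {no_aging:.1f} years")
print(f"Extra years without aging:        {no_aging - with_aging:.1f}")
```

The qualitative point survives most parameter choices here: because the bulk of the hazard is political rather than biological, switching off aging adds only a handful of expected years, echoing the “four more years” result from the fitted model.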

Robert Wiblin: Is that all?

Anders Sandberg: That is all.

Robert Wiblin: How long is that average time to begin with?

Anders Sandberg: Well the average time unfortunately tends to be something like 12 years or more, so-

Robert Wiblin: So it increases it by a third on average, but because it’s only 12 years to begin with, it’s not so bad.

Anders Sandberg: So if you’re a young dictator who has just come into power, at first you have a very high risk of losing power very quickly, because you have a lot of enemies around. So typically the hazard is high at the start, and then it tends to decline, because authoritarian rulers get rid of their competitors. Then it stays relatively low, and then slowly goes up over time – and part of that increase is of course due to aging.

Now the interesting thing here is why people lose power. It turns out that being so old that you can’t hold on to power is relatively rare. Relatively few dictators actually die at home in bed. In fact, most of them fall prey to the other scary people they surround themselves with. In that picture of the country’s junta, the other people in sunglasses surrounding El Presidente – they are the ones to look out for, because they are would-be dictators. They have a lot of power, and eventually they might get fed up with waiting. So if nobody were aging, I don’t think dictatorships would change that much.

In fact, if we want to think about negative effects of life extension, I think this might have a much bigger effect, for example, in academia. After all, if the professor is never really getting older, but just getting more and more skilled – in education, and even more in academic intrigue – they’re just going to hold on forever. There is this Planck principle that maybe science advances one funeral at a time. It’s debated. There have been attempts at actually investigating it, and the conclusion is somewhat mixed. Sometimes revolutions do sweep academia without replacing people; in some cases it does seem to be generational. But you can certainly imagine many institutions where it would be relatively easy to hold on to power indefinitely. This is a very foreseeable problem, though, and I think the solution is also rather simple: term limits.

Robert Wiblin: Term limits. Yeah.

Anders Sandberg: Because in my dictator data, I of course also have political leaders from non-dictatorial countries, and they of course generally stay in power for one term or maybe two terms if they are really successful, and then they disappear, and we might want to have term limits for jobs. Maybe you’re not allowed to have the same job for more than a century. Then it’s time for somebody else to try.

Robert Wiblin: Yeah, so looking even more broadly, what if we managed to extend people’s lives but not ensure that they didn’t become say very conservative with age? You could imagine that everyone’s living 300 years, and that means that their beliefs are quite crystallized, and they’re not very open to new ideas as they’ve gotten older. Do you think that’s a risk that society could end up being a bit sclerotic because so many people are very old and have very established views and are unwilling to change them?

Anders Sandberg: Maybe, but I think it’s worth investigating. In fact, you can compare societies with different age distributions and try to see whether they seem to be sclerotic, and I don’t think we see very much. If you look at Northern Europe, where people are certainly living for a long time – yeah, it’s maybe a bit conservative compared to some African nations, but in many cases those African nations seem almost more sclerotic in their outlook on how things could change. So it has more to do with culture, and that, I think, is a reason for hope.

So just a personal anecdote. My grandmother is now 107 years old. When I got married, I was a bit concerned: what do I tell her? After all, I’m marrying a guy. And her reaction was, of course, “Oh, well, it’s modern times.” Having a political discussion with her is really weird because she outlived the Soviet Union – she was born before it and saw it fall – and her views on the Swedish educational system were shaped by the debates of the 50s and 60s. Yet she reads the newspaper, and although she’s conservative by modern standards, she’s quite willing to argue. So the sclerosis here might not be coming from age. In fact, a long-lived population might actually benefit from being a bit cautious and slow-moving. You have time; you might not need to change many things quickly.

It’s really only the areas where you need quick decisions that you might worry that older people are worse at making them, but part of that might be that our current intuitions about old people are based on biologically old people, who actually have slower nerve conduction velocity. They really do react not quite as quickly as young people. In a world with life extension, though, older people are presumably also going to be pretty quick on the uptake, and they’re going to have a lot of experience. So I’m not super worried about this picture of a sclerotic society.

I think it has much more to do with how you build your institutions. We should be aware of this risk, but even with our current lifespans, we want to be aware of it. We want to construct institutions that can update fast enough. We already have trouble with the institutions for regulating technology and economics, which are changing much faster – and that’s without people being extremely long-lived. The real problem is that political decision-makers have no idea about that internet thing and what people are doing in the biotech lab.

Robert Wiblin: Actually, I heard about a recent study of age and political identification in the United Kingdom. They found that people’s political views were in flux when they were young, were stable during middle age, but actually became more flexible again and changed more quickly after people retired. Initially the researchers thought, “Oh, probably this is because those people are suffering cognitive decline – they’re becoming confused, and it’s hard for them even to remember what their stable views have been.” That is, were people in the early stages of dementia more likely to change their political views? But that wasn’t the case at all. So it’s a somewhat surprising, counterintuitive result that people, as they get older, might even become more open to changing ideas. It could be that leaving the workforce allows you to spend more time thinking about these things again. There’s a bunch of different hypotheses you can have about that.

Anders Sandberg: I think part of it might also be vested interest, because if you’re part of the workforce, you have an identity that might be relatively stable. When you’re retired, you get to define your identity in a different way. And it might be that a long-lived population will have to reinvent themselves a number of times – I think you actually need to do that in order to survive as a person over a really long, indefinite time – but the interests you have might also drive you in different directions. So one possible nightmare scenario is of course that you get gridlock: a lot of old people voting for things that are good for them, making sure that their retirement funding is really solid – but who needs those youngsters?

Robert Wiblin: Everyone who’s young and middle-aged just gets taxed at 80% to fund the old people having a great time.

Anders Sandberg: Yeah. That is the nightmare scenario, and it’s also pretty obvious that it’s probably not going to be stable for long, because you might actually get that revolution. But there’s also this interesting question of how to construct a system that encourages young people to do something. Once upon a time, a young person who couldn’t find a job could migrate and go somewhere else, or later on could get into one of those newfangled businesses that nobody knew anything about, whether that was newspapers or, more recently, an internet startup. In a world with life extension, of course, the old-timers are probably going to be just as good at making internet startups – or quantum mechanics startups, or whatever it is in the future – as the young ones. They’re just going to have more experience and more capital. But they might also have more vested interests and more social links in their current circumstances.

So it might make sense for them to fund the youngsters starting up new stuff. There is a very interesting problem here about how you transfer intergenerational wealth and influence, and we need to work on that. But again, life extension is just going to amplify an issue that already exists. Once upon a time, by the point your parents died and you inherited the farm, you were old enough to handle the farm. Today it’s going to happen around your retirement age, which means this gap between generations becomes very unstable.

Robert Wiblin: I imagine that a lot of people who were listening to this conversation for the last 10 minutes about life extension would be very skeptical that, in practice, very much can be done in medicine or through scientific research to extend people’s lives beyond maybe 80 or 90. Do you want to say anything to try to convince them otherwise, that this is actually something that we might be able to do within our lifetimes?

Anders Sandberg: Well, the tricky part here is “within our lifetime”. Most listeners will have several decades ahead of them, at the very least, and decades in biomedicine is an enormously long time. When I got seriously interested in life extension back in the 90s and started reading up, the general scientific view was, “Well, there is aging. We don’t fully understand what’s causing it, and we don’t really know how to modify it, except by a few interventions like calorie restriction and a few interesting lab experiments.” And then that changed within a few years. Cynthia Kenyon demonstrated that, with a few genetic modifications, nematode worms could live several times longer. Then we found a lot of other methods of making mice live longer, and we started to understand the biomedicine of aging itself. Within about 10-15 years, we developed a lot of understanding – which is still very far away from producing a good anti-aging pill or anything like that.

But the sheer amount of change in the scientific understanding is tremendous. Now you can’t find anybody in biogerontology who says that aging is impossible to change, because people are regularly changing it in standard experiments. The real question is: can we change it in a medically useful way? Now you’re getting into translational research, and that is much harder. Turning those early results into something you can actually buy at the pharmacy, or get as a treatment at the hospital – that’s quite often much trickier and much harder to predict.

Robert Wiblin: Is that because you get side effects?

Anders Sandberg: Side effects. You might have issues with delivery – how to actually get it to the right place. There can also be serendipitous discoveries. So right now, one of the hot discussions is about the anti-diabetes drug metformin. This is a standard drug: it’s on the WHO’s list of essential medications and it’s very safe. It’s been used against type 2 diabetes for a long time, both as a treatment and as mild prevention. And then people started noticing that the patients who got it seemed to be living longer – not just compared to the people who didn’t get the drug, but even compared to healthy controls of the same age. That’s kind of weird, but they also got less cancer and cardiovascular disease. So now people are starting a trial to investigate whether metformin could reduce age-related diseases.

This is not an anti-aging drug per se, but it seems to slow down some processes somewhat related to aging. And this is a drug that is well tolerated, safe, and widely used. There might be quite a lot of other things like that. So in the lab, we have a lot of really interesting, promising things. The problem, of course, is that bringing something from the lab into practice is very tricky. We’ve had various promising interventions against cancer that never got anywhere.

When I was young, people were talking about interferon as the great solution. And then it turned out it actually wasn’t very easy to do. Then we found various antibody therapies that became really, really big, but they were total surprises, because back in the ’80s that technology wasn’t even conceivable. And now you can do it fairly easily and make some very expensive medications. The next step might be to turn them from expensive medications into very cheap medications. Which, again, has less to do with science and much more with economics, patents, industrial production and how you set up healthcare systems.

So my argument is that, yeah, over the past 20 years the idea of slowing aging has gone from total crazy stuff that no one in serious mainstream research would touch, to mainstream; that might be phrasing it a bit radically. But if you look at the whole hallmarks of aging paper, you find interventions against the different hallmarks, and it actually starts to look more and more like biomedicine is on track to figuring out ways of affecting aging.

Robert Wiblin: An interesting thing is, if you have a condition that is very likely to cause your death, then health authorities will often allow lower safety standards for drugs. So as a last-ditch effort to save someone from cancer, they’re permitted to take drugs that would be regarded as not safe enough for someone who is healthy to take just as preventative medicine. I guess, in a sense, we’re all dying all the time, and for people who are close to the end of life, or probably close to the end of life because of aging, maybe we just shouldn’t worry too much about the safety of these drugs. Because the alternative is that they’re gonna die pretty soon anyway.

Anders Sandberg: Yeah, yeah. So if you’re young, it’s probably a stupid idea to take an experimental drug. If you’re getting to be middle aged like me, now at least I’m seriously considering, maybe I should be taking metformin. It seems to be safe enough, and I don’t have any of the kidney conditions it would be bad for. If I were in my 60s I might actually say, wait a minute, my mortality rate has started to pick up, maybe I should actually try rapamycin or even the more radical senolytics. It’s still a risk, but if I’m in my 60s I’m already taking a risk every day by being alive.

So yeah, I do think that we might want to do these experimental interventions. Generally, when thinking about human enhancement, my view is that we’re doing too little of it. And we’re also doing it in the wrong way, in the sense that individuals are kind of ignoring rules and doing it privately without sharing the information. And that is a real problem. We could learn so much if we actually got the data from the people who are trying enhancement. We don’t know, for example, whether students who take cognitive enhancement drugs actually get better grades. That would be really useful to know. It wouldn’t surprise me to find out that many of these drugs actually don’t help grades and don’t help learning very well in the way they’re being used. But we can’t even find out, because nobody is there to do the survey.

Similarly for anti-aging. It might actually be a very smart idea to allow trials, to allow people to experiment, but on the condition that they also allow us to monitor them, so we get data about what works and what doesn’t. Overall, I think our society has ended up with the wrong kind of risk minimization. The idea is, “let’s make sure nobody does anything harmful.” But that also means we’re not learning anything, so we can’t actually do the right thing or avoid the harmful things. Because a lot of activities we’re doing are harmful, but we’re not even measuring how much harm we’re getting.

Robert Wiblin: Yeah, I’m planning to start taking metformin in my 40s, and I’ll put a link to an analysis that looks at the literature on the effects that it has, estimates the pros and cons, and suggests that the optimal time to start taking it is in your 40s. Because before that, the irritation of taking it outweighs any benefit, but after that point it seems worth it.

Anders Sandberg: Yeah, and I think we can continue that analysis for many other things. So there might be some things that you should never try until you’re kind of 10 minutes from croaking. Other things, yes, are well tested enough that you might want to do them. But I think we should also pay back to society by sharing our data. And that might mean changing the rules of how we do medical trials. We might need more observational trials.

Robert Wiblin: Yeah. Years ago I did a bit of an analysis of the best supplements to take if you’re in your 20s, which I think is still reasonably good. There’s been some new research which has changed it a bit, but the recommendations seem fairly solid five years on. So I’ll stick up a link to that as well. Yeah, I would be very interested to know what drugs you should try on the day before your death. Because, I suppose with opiates, most people are trying them at that time.

Anders Sandberg: Well, if you just want to maximize the amount of pleasure of the time you got left. Well, if nothing else, yeah, that might be …

Robert Wiblin: Morphine, yeah.

Anders Sandberg: … A good way to go.

Robert Wiblin: There’s a whole bunch of other papers you’ve written. We’ve done quite deep dives into these previous ones; maybe let’s skip through some of them and you can just explain the points you were making. A common argument that I hear when I say that I’m concerned about nuclear war is that it’s been about 70 years since we invented nuclear weapons and there hasn’t been a war yet, so that suggests the risk of a nuclear catastrophe is pretty low. But you’re involved in a paper called “Anthropic Shadow: Observation Selection Effects and Human Extinction Risks” that says maybe the argument’s not quite as strong as it seems.

Anders Sandberg: Hmm.

Robert Wiblin: What’s the argument there?

Anders Sandberg: So, we can start with big asteroid impacts instead. Imagine that a giant asteroid had hit the earth in the past and wiped out all life, so the earth is just a molten mass. What’s the probability of us observing this? The answer, of course, is zero, because we would never have evolved on a planet that was a molten glob of lava. We can only show up on a planet that actually has a surviving biosphere. So if the universe were a really dangerous place where most planets were being hit by giant asteroids every year, then in an infinite universe some planets would be very, very lucky, and they would be the only ones that had observers.

So our existence is actually causing a bias, and this is what I call an anthropic shadow. If you look at asteroid impacts, a big mass-extinction one probably precludes intelligent life from emerging over the few million years after the impact, because the ecosystems are recovering. There is a lot of evolution happening, but it’s probably not likely that you get intelligence in that time. So that means we shouldn’t expect to have had a giant meteor impact very recently: they can only be far back in history. Similarly for supervolcanoes: if a supervolcanic eruption causes a climate disaster and makes populations small, you should not expect yourself to be close to it in time. In fact, you should expect most of them to be fairly far into the past.

So this means that the record of asteroid impacts and supervolcanic eruptions is going to be a bit misleading when you look at it, because your existence is actually influenced by the past situation. You actually get an observer selection effect. This is a bit like billionaires. If you have a club of billionaires and ask them to think about the decisions that led them to become billionaires, they’re going to have a tremendously biased story about how they ended up there, because in many cases it was pure luck, but they will have another story because they are selected for having been successful. The unsuccessful ones, the people who never became billionaires, who lost all their money before joining the club, are not the ones being asked. We see the same thing with hedge funds. Most hedge funds you hear about are doing better than the market, because the ones that don’t do better than the market quietly disappear from the prospectus, and nobody talks about them.

Now, this has some disturbing consequences for nuclear war. 70 years of nuclear peace means that, well, maybe the world is really safe. Maybe political decision makers are very sane and safeguards are really good. Or it might just be that we’re one of the few planets where nuclear weapons exist that has been really, really lucky and has some surviving observers. So my paper is about the question, “Can we tell whether we live in a safe world or a risky world?”

So you can do the calculation just based on the idea that there is some probability per year of a nuclear war, between zero and one. In the first year after getting nuclear weapons, we have a uniform distribution over that probability. Then we don’t have a nuclear war, so we can adjust the distribution, and every year we observe peace we get a new posterior distribution. In that way we can make a rough calculation, which suggests that the risk per year right now is between 0.1 and 1%, which is disturbingly large.
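That back-of-the-envelope update can be sketched in a few lines of Python. This is a hedged illustration, not the paper's actual model: the uniform prior and the round figure of 70 peaceful years are taken from the conversation, and the closed-form Beta summaries are standard textbook results.

```python
# Posterior for the annual probability p of nuclear war, given
# n years of observed peace and a uniform prior on p in [0, 1].
# A uniform prior is Beta(1, 1); after n peaceful years the
# posterior is Beta(1, n + 1), which has closed-form summaries.

def posterior_summaries(peaceful_years):
    n = peaceful_years
    mean = 1.0 / (n + 2)                   # mean of Beta(1, n + 1)
    median = 1.0 - 0.5 ** (1.0 / (n + 1))  # CDF is 1 - (1 - p)^(n + 1)
    return mean, median

mean, median = posterior_summaries(70)
print(f"posterior mean:   {mean:.3%}")
print(f"posterior median: {median:.3%}")
```

On these assumptions the median lands just under 1% per year and the mean a bit above it, which matches the "mean or median" distinction Anders draws next; the 0.1% lower end depends on other modeling choices in the paper.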

Robert Wiblin: How much? Sorry, say that again.

Anders Sandberg: Between 0.1 and 1%, so that’s-

Robert Wiblin: Okay. A year. Wow.

Anders Sandberg: Yeah, yeah. That depends a bit on whether you want the mean or the median of the probability distribution. Obviously it can’t be 50% in the simple model, because it’s ridiculously unlikely that you would get 70 years of peace at that rate, just as it’s ridiculously unlikely to flip 70 heads in a row. But still, this doesn’t take into account the fact that if there had been a nuclear war, we wouldn’t be doing this podcast and I wouldn’t be writing that paper. So the question is, “Can we analyze this bias?”

One way of doing that is to look at near misses. The most famous near miss was in 1983, on September 26th, when Stanislav Petrov performed his heroic non-action of not launching a nuclear war. The Soviet early warning systems were telling him that the Americans had launched a preemptive attack, and he was expected to confirm that and make the call that would most likely trigger a retaliatory strike. He didn’t, and we’re here thanks to that. But when you start looking for these near misses, you get a horrifyingly long list, in some cases very ridiculous.

During the Cuban missile crisis, besides the most well-publicized incidents, there was one where an animal, a bear, was climbing a fence outside a military base near Winnipeg. That fence had an alarm, it was connected to a nearby air force base, and it was misconnected to the general scramble alarm. This was right during the Cuban missile crisis: suddenly the alarm goes off, and the bomber pilots all know what that obviously means, so they’re rushing to their planes and trying to get up in the air, just because an animal tried to climb a fence. Fortunately somebody (and I really wish we knew his name so we could name a room after him around here) realized what was wrong, rushed into a Jeep, and drove out onto the runway to block the planes from taking off.

So you have these near misses: sometimes technical, like satellites reporting things or the computer tapes that made the NORAD computers see a missile launch; sometimes human mistakes. Now, if we live in a very dangerous world, should we expect to see a lot of near misses, or a few? I did a mathematical model, and the details are maybe a bit too boring to bring up in a podcast, but basically it’s a Markov chain model where there is a state such that if you get into it, you’re dead, and you can never observe anything leading up to it. That’s going to give you a bias: essentially it has the weird effect of deflecting observed histories away from the dangerous states.
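A toy version of that Markov chain idea can be simulated directly. Everything here is an illustrative assumption of mine (the states, the transition rule, the numbers), not the model from the paper; the point is only to show the deflection effect Anders describes.

```python
import random

# Toy anthropic-shadow model: a random walk over "tension" levels
# 0..4, where reaching level 5 is extinction, an absorbing state that
# leaves no observers. Comparing all histories against only the
# surviving ones shows the bias: conditioning on survival makes
# visits to the most dangerous state look rarer than they really are.

random.seed(0)
DEATH, STEPS, RUNS = 5, 50, 20000

def run_history():
    """Simulate one history; return (survived, ever_reached_level_4)."""
    state, reached = 0, False
    for _ in range(STEPS):
        state = max(0, state + random.choice((-1, 1)))
        if state >= 4:
            reached = True
        if state == DEATH:
            return False, reached
    return True, reached

histories = [run_history() for _ in range(RUNS)]
survivors = [reached for ok, reached in histories if ok]
frac_all = sum(r for _, r in histories) / len(histories)
frac_surv = sum(survivors) / len(survivors)
print(f"reached the brink, all histories: {frac_all:.1%}")
print(f"reached the brink, survivors:     {frac_surv:.1%}")
```

Conditioned on survival, histories look calmer than the underlying dynamics really are, which is exactly why the raw record of near misses needs correcting before you read risk estimates off it.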

So now we can make a plot of when these near misses happened and how many nuclear missiles could have been launched. Fortunately, it seems that there is not a very strong effect. It seems like the nuclear system is actually relatively stable. That doesn’t necessarily mean it’s safe at all. In fact, it’s kind of disturbing how many problems there have been at different levels and how irrational some of the actors with nuclear weapons are, so it certainly isn’t implying that we’re totally safe. But it does show that this anthropic selection effect is relatively mild, which is maybe a bit sad for me, because I want to have a cool paper to publish. But as my co-author points out, well, it’s kind of good news for mankind.

Robert Wiblin: Yeah. The original estimate was 0.1% to 1% a year. When you adjust for this anthropic shadow effect, what do you think it is? Is it twice as high or … ?

Anders Sandberg: I think it’s probably not even twice as high. I think it’s still on that order. So saying that it’s one chance in 1,000 to one chance in 100 per year sounds like it’s not very big, but of course nobody would get on an airplane if they were told, “Well, there’s one chance in 100 that this flight will crash.” So it’s still disturbingly large. I think most political decision makers would also say, “Yeah, yeah, we wouldn’t allow it to be that large on our watch,” but from an outside perspective it actually is problematic. We have created an infrastructure of destruction which is tremendously dangerous, and we have big trouble dismantling it.

Robert Wiblin: So just to be clear, you’re saying there’s a lot of near misses, but that hasn’t updated you very much in favor of thinking that the risk is very high. That’s the reverse of what I expected.

Anders Sandberg: Yeah.

Robert Wiblin: Explain the reasoning there.

Anders Sandberg: So imagine a world that has a lot of nuclear warheads, so that if there is a nuclear war, it’s guaranteed to wipe out humanity, and compare that to a world with only a few warheads, so that if there’s a nuclear war, the risk is relatively small. Now, in the first, dangerous world, you would have a very strong deflection: even getting close to the state of nuclear war would be strongly disfavored, because most histories close to nuclear war end up with no observers left at all.

In the second one, you get a much weaker effect. Now you can plot over time when the near misses happened against the number of nuclear warheads, and you actually see that they don’t behave as the dangerous-world model would predict. If there were a very strong anthropic effect, you would expect very few near misses during the height of the Cold War, and in fact you see roughly the opposite. So this is weirdly reassuring. In some sense, the Petrov incident implies that we are slightly safer from nuclear war.

Robert Wiblin: Is there any way of making that intuitive? We get a near miss and war doesn’t happen. So is the logic that the final step must be very rare, that in fact even when you get a near miss, people refuse to launch anyway?

Anders Sandberg: Yeah, I think that’s a good way of looking at it. But this is of course one of those weird anthropic arguments, and I think the probability of us making a mistake in this kind of argument is tremendously high. This is a very weak update; it’s not something one should base too much on. I certainly wouldn’t want to bring it over to Geneva for disarmament talks. I don’t think it would convince anybody. The important part, however, is to understand that the dynamics of near misses already tell us quite a lot of interesting things. We do see that mistakes are being made in this complex technical system and propagating frighteningly close to an individual human being able to decide when to press a button. That fact in itself ought to make decision makers aware that, “Hmm, we might want to update this and make it safer.”

Robert Wiblin: Okay. Well, I’ll put up a link to that paper, and I might have to read it again to fully understand it, but our listeners can do that too if they’re interested. Another idea that’s come out of the Future of Humanity Institute is the Unilateralist’s Curse. Do you want to explain the concept there?

Anders Sandberg: So the most common example of the Unilateralist’s Curse is when you have a group that has some shared secret, and the question is, should you reveal this secret? It’s enough that one individual tells the world about it, and then it’s out, for good or ill. Now, this might also apply to technologies. It might be, “Should I release this genetically modified organism?” Maybe there’s a gene drive to wipe out the malaria mosquito, and it’s not entirely clear whether this is a good or a bad thing, so I might do my evaluation and do it if I think it’s a good thing. It’s enough that one individual thinks it’s a good thing for it to happen.

Now if it’s actually a bad idea to reveal the secret or release the mosquito or do geo-engineering, then of course a rational agent will not do it, except of course that sometimes we do make mistakes. So even if all agents are trying to be rational and trying to do the right thing, the more agents you have, the more likely it is that somebody is going to be that guy. So this leads to this problem that in many situations you have a large number of agents that can’t communicate and coordinate with each other but they might do some action that affects the world, and this on average would happen much more often than we would wish it to happen.
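The core arithmetic of the curse is simple, and a minimal sketch makes it concrete. The 5% individual error rate below is an arbitrary assumption of mine, chosen only for illustration.

```python
# The basic arithmetic of the unilateralist's curse: if each of N
# well-meaning agents independently misjudges a harmful action as
# good with probability eps, the chance that at least one of them
# goes ahead unilaterally is 1 - (1 - eps)^N.

def p_someone_acts(n_agents, eps):
    return 1.0 - (1.0 - eps) ** n_agents

for n in (1, 5, 20, 100):
    print(f"{n:3d} agents: {p_someone_acts(n, eps=0.05):.1%}")
```

With a 5% individual error rate, a group of 20 already has roughly two-in-three odds that somebody acts, which is why being "that guy" gets more likely the more independent agents face the same choice.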

Robert Wiblin: So perhaps a case of this that might actually arise for listeners would be imagine that you have an idea, and it’s not that difficult an idea to come up with. It’s the kind of idea that you think many people have probably independently come up with, but then you look and you find that no one’s written it up. Should you write it up yourself?

One possibility here is that many people have thought of it and have all decided not to write it up, because they think it would be harmful if the idea spread. The fact that you’ve had an idea that seems like it shouldn’t be original, and yet no one else has published it, is an indication that, even if publishing seems like a good idea to you, so many other people in a similar situation have decided against it that you should be very cautious about doing so, and think, “Well, maybe I’m wrong in thinking that it’s a good idea to write this up.”

Anders Sandberg: Exactly. In our paper we argue that this situation actually leads to a kind of principle of conformity. You might actually want to be more conservative than you would like to be normally when you realize that you are in this kind of unilateral situation. In many situations other considerations might apply, but if your situation is like this, then you might have a reason to be much more cautious.

This is, of course, tremendously annoying. Perhaps you can’t see any real reason why there could be danger; maybe you are just being irrationally cautious about it. Depending on what the topic is, you might also modify this. When you do geoengineering with the atmosphere, obviously that’s going to have a global effect. Hopefully people will be aware, but the cost of being wrong here is so big that we might want to be much more cautious: not just more cautious than if you were making the decision alone, but more cautious given the knowledge that others might also be considering it. For revealing, say, a spoiler about a book or a movie, the cost might not be that big.

Of course, communicating and coordinating with others is another way of solving this dilemma. But you can’t always do this, especially when it comes to information [inaudible 02:01:42] this could be a real problem. If your idea is dangerous in some way, maybe asking other people about it is the worst thing you could do. Again, you need to understand a bit more about the situation.

In many situations, it might also be that you don’t even know who your peers are. Who are the other people who could do this? You can’t even easily ask them. If you can pool the information, a joint decision can typically be much better. In the paper we explore various more or less silly ways one can improve on this. In general, we want to build institutions that allow us to have some trusted third party, or some way of comparing ideas without necessarily leaking them: some good way of making joint judgments. If you can’t do that, then sometimes you need to be more cautious.

Robert Wiblin: It seems almost exactly analogous to the winner’s curse in auctions. That’s the phenomenon where, if you are auctioning off, say, a house or a company, someone could buy a company whose prospects are hard to estimate. When people began doing auctions of this kind, they would usually find that the winner did terribly: they won the company and then lost money overall, because the company wasn’t worth as much as they had thought.

Then people figured out that what’s going on is that a whole lot of different people have tried to estimate how much they should pay for this company, and if you win, that means you estimated the highest value, which suggests that you were incorrectly optimistic. Everyone else thought it was worth less than you did, so you are probably wrong; you are probably overestimating how good it is. Now when this kind of situation arises, people will produce their estimate of the value of the company and then bid less than that. They have to bid substantially less to avoid the winner’s curse phenomenon. Then there’s a kind of reflective equilibrium where everyone makes their guess and bids a particular amount that’s less, and on average people pay about the right amount. Is that a good analogy?
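The auction effect described above is easy to check with a small Monte Carlo simulation. The noise model, bidder count, and prices here are arbitrary illustrative assumptions.

```python
import random

# Winner's curse: each bidder sees a noisy but unbiased estimate of
# the item's true value. Individually the estimates are fine, but the
# *winning* (highest) estimate is systematically too optimistic,
# because winning selects for the most over-optimistic guess.

random.seed(1)
TRUE_VALUE, BIDDERS, AUCTIONS = 100.0, 10, 20000

winning_estimates = []
for _ in range(AUCTIONS):
    estimates = [random.gauss(TRUE_VALUE, 10.0) for _ in range(BIDDERS)]
    winning_estimates.append(max(estimates))

avg_winner = sum(winning_estimates) / len(winning_estimates)
print(f"true value:            {TRUE_VALUE:.1f}")
print(f"average winning guess: {avg_winner:.1f}")  # well above 100
```

Bidding your raw estimate would therefore lose money on average; shading the bid below the estimate, as Robert describes, is what corrects for the selection effect.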

Anders Sandberg: That’s a perfect analogy. In fact, we named the paper “The Unilateralist’s Curse” to riff on the winner’s curse. In our case it’s more that if you think something has a certain value, that it’s good to release the GMO or do the geoengineering, you should discount that to some extent based on how many others are considering doing the same. If you’re okay with somebody else getting the glory or blame for doing it, and it’s a large group, you might say it’s very likely somebody else is going to be that guy. If you have doubts, you should be more conservative.

Robert Wiblin: Yeah. I guess if you’re part of a community and you’re thinking of launching a project, and you know most other people in the community think that it’s a bad idea, you probably shouldn’t do it. I guess that’s one of the lessons.

Anders Sandberg: Yeah, which is, of course, tremendously annoying to somebody like me, who’s both optimistic and aware that I’m overestimating the value of a lot of things, and who also kind of likes progress. I like doing things; I have this bias towards action. Sometimes we need to recognize that that is a bias, and that it can actually lead to worse outcomes …

Robert Wiblin: Than doing nothing…

Anders Sandberg: Yeah. Doing nothing with the expectation that somebody else will do it sounds rather bad, because in many everyday situations … of course, this is why nobody’s doing the dishes or cleaning up the living room, because everybody’s waiting for everybody else. But for dangerous things that have this unilateralist property, doing nothing is actually the right call.

Robert Wiblin: Sometimes we shouldn’t feel too bad about being conformist and lazy. Let’s move on to another topic that you’ve written about, which is natural risks of human extinction and perhaps threats that aren’t so often talked about. Very often people talk about disease, they talk about artificial intelligence gone wrong, they talk about nuclear war. What else is there? I think solar flares is one that you’ve mentioned?

Anders Sandberg: Yeah. Solar flares are probably not in themselves likely to wipe us out, but they could be the thing that crashes our current civilization, with very bad effects. The basic problem is that the sun occasionally has an eruption of energetic radiation, and also throws off big globs of plasma that hit earth’s magnetic field, which wobbles in response. Now, if you remember your physics, a changing magnetic field induces currents in conductors. We have created this mesh of power lines across the world, which works as an antenna picking up changes in earth’s magnetic field. A big solar storm can actually induce currents that break transformers and crash the power grid. This has happened a few times historically. There was a famous one in the ’80s in Quebec, where a solar storm actually crashed the power grid. And in the 1800s there was an event called the Carrington Event. Back then, of course, there was no power grid, but there were telegraph lines. They got short-circuited, there were sparks flying from them, and there were auroras down to the Caribbean.

Had that happened today, a lot of our power grids would have broken. This is disturbing, because electricity and energy distribution is a single point of failure that could crash our society. We need electricity not just for light, but also to communicate, to pay for things, and to coordinate. If you got widespread blackouts, that could really cause tremendous damage.

Carrington Events are probably not that rare. One guess puts the return time at about 150 years. Given that this happened in 1859 or so, that’s kind of disturbing. We probably need to harden our power grid. It’s telling that after Hurricane Sandy, Wall Street was without power for one or two weeks. This is the richest spot on earth, and they couldn’t get a transformer to replace the one that had been flooded by the hurricane: the lead time for getting transformers is really long. We need to update our systems. The good news is this is not an unknown problem. There are reports pointing out that this is seriously scary. Lloyd’s emerging risk group has written a report telling the insurance industry that this is scary, and hopefully the insurance industry is now telling companies and states that this is very scary and we ought to fix it. It’s still going to take a long time, because we’re talking about big capital investments, but it shows that sometimes natural processes can come and really mess up our day.
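A 150-year return time can be turned into per-period odds with a simple Poisson assumption. This is a hedged back-of-the-envelope: the 150-year figure is just the one guess quoted above, and treating storm arrivals as a memoryless Poisson process is my simplifying assumption.

```python
import math

# If Carrington-class storms arrive as a Poisson process with a mean
# return time of ~150 years, the chance of at least one event in the
# next t years is 1 - exp(-t / 150).

def p_storm_within(years, return_time=150.0):
    return 1.0 - math.exp(-years / return_time)

for t in (1, 10, 50):
    print(f"next {t:2d} years: {p_storm_within(t):.1%}")
```

On this model there is roughly a 6% chance per decade: small in any given year, but large over the multi-decade lifetime of grid infrastructure, which is the case for hardening it now.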

Robert Wiblin: How are they preparing for this? Are they hardening the transformers, or do they have spare transformers lying around at every … so that you can always just replace it after a solar flare?

Anders Sandberg: Unfortunately, spare transformers are fairly expensive, so I don’t think people are going to have that many. It’s more about setting up the power grid so you actually have good breakers. You also want to observe the sun much more: you want solar observing satellites that can detect coronal mass ejections early, so we can actually react quickly. It’s a bit like the safeguards in the Japanese high-speed trains, where you have earthquake sensors. If they detect an earthquake, they send a message to the train, and it starts braking before the shaking actually reaches it.

Robert Wiblin: We could have satellites that would detect a solar flare coming towards us and then we would shut off the power grid, basically.

Anders Sandberg: Well, yeah, not so much shut off the power grid as put it into a safe mode. That might mean, perhaps, temporary blackouts, which is inconvenient, but so much better than getting something that could last for weeks to months.

Robert Wiblin: More than months, I would think, if this was a global phenomenon, if electricity grids went down everywhere. We might not even have the factories functioning to replace the parts that have been broken.

Anders Sandberg: That is an interesting problem. We actually don’t have a very good understanding of this kind of systemic risk that happens in complex supply chains. The insurance industry, they wish they had a good model of supply chain risk, because they’re doing insurance against business interruption. They certainly have data about their payouts, but we can’t actually model the causes very well.

When you think about the risks to our society, it’s not always easy to detect what’s important. For example, in the early ’90s, there was a fire at a chemical company in Japan. They were manufacturing a resin that is used to encapsulate silicon chips when you make memory chips for computers. This resin is rather specialized; the annual production and consumption is on the order of one ton. That factory and its storage were destroyed. Suddenly the price of computer chips went through the roof, because you couldn’t actually mount them into the capsules. Nobody had ever really thought about that kind of resin as a very important product. We probably have a lot of other linchpin products in our society that we don’t even know about. Solar flares are kind of obvious, because they affect something obvious, in this case the power grid, but there might be other things we are very sensitive to that we need to find.

Robert Wiblin: I’ll put up some links to some actually really good statistical analyses of this issue of solar flares and how often they happen and how bad they would be. Are there any other things like this that many people won’t have heard of but actually are global catastrophic risks, even if not existential risks?

Anders Sandberg: If we look at natural disasters, there is a wide range happening on a lot of time scales. On one hand, you have astronomical events, which are, of course, very dramatic. Supernovas and asteroid impacts make glorious pictures and also look like a very neat form of disaster, because there’s a sudden onset and you can kind of think through the consequences. But of course they’re not very likely. It’s believed that the rate of getting a mass extinction because of a nearby supernova is about one in every hundred million years or so, which is not that much. Asteroids are certainly hitting us all the time, but most of them are pretty small.

You have more obscure ones, like gamma ray bursts, which are essentially like supernovas, except that you get more direct irradiation, which means that if one is pointed right at you, you’re in deep trouble even over thousands of light years. Of course, most of the time they will point in some other direction. There are some interesting theories that maybe in the past there were more supernovas and gamma ray bursts, and that’s why we didn’t get intelligent life until now: most planets occasionally got kind of sterilized, up until the past few billion years, when the universe has quieted down a bit. That still means a big spread of possible ages of biospheres.

Getting over to geophysical risks, it’s worth recognizing that we’re living right now in a slightly precarious interglacial. We’re still in the middle of an ice age, except that ice ages do take breaks for a few thousand years, when things actually thaw up. When I was growing up, my dad, who was about as gloomy as I’m optimistic, pointed out, “Anders, you know that on average there is going to be a kilometer of ice on top of this place where we live.” That’s kind of what an ice age is. He pointed out that back in the ’70s, scientists were seriously considering that maybe the interglacial was ending and we were going to see a return of the ice age.

My dad, being very good at being gloomy, also pointed out that there was some other new research suggesting that carbon dioxide was increasing rather rapidly, and it might actually lead to heating, so all the ice melts and then we’re going to have rising water instead, okay? We don’t actually know what our effects on the climate are going to do to the interglacial. That is an interesting thing as a kind of long-term risk, because if we went back to ice age temperatures and precipitation, we would have a dramatic change in where we could live in the world.

Now that's a sort of slow risk. The dramatic fast ones would be hard for us to handle, because we would suddenly find that the ozone layer got wrecked by a supernova. If the ice age really starts going, we probably have decades or centuries to move and find some solution, so that might mean that these are actually not much of a global catastrophic risk to us now. They would have been, maybe a few centuries back, because we couldn't see them coming and we wouldn't actually have had much ability to handle them.

Robert Wiblin: Kind of sounds like we’ll be pretty lucky to make it to that stage, anyway. By that stage, hopefully our technology will be much better and people will have thought about this a lot more.

Anders Sandberg: Oh, yeah. Oh, yeah. Our ability to monitor the Earth system and to see where it's going is getting better all the time. While we might be complaining about the state of our climate models, and the uncertainty about the decisions we need to make from them, and even how rickety some parts of the software code are, we're still doing fairly well. We're understanding some of the big feedback loops. I think if there is any natural source of extinction that will get us, it's going to be something totally unknown. It's worth recognizing that we have been discovering natural disasters that were unthinkable before.

After all, asteroid impacts were regarded as hoaxes up until a few hundred years ago by the scientific community because, obviously, there can't be rocks up in the air. Rocks are on the ground. The idea that something can fall down took a long while to actually permeate. Supervolcanoes have been known for only a few decades, and there are probably a few other risks that we're going to be surprised by. Still, I do think that most of the risk to humanity is coming from anthropogenic sources rather than natural ones.

Robert Wiblin: Let's move on to some of the more amusing things that I've seen you writing over the last couple of years. Two years ago you wrote, or contributed to, a paper, "Should we campaign against sex robots?" Why did you write that?

Anders Sandberg: It was mostly because I was so annoyed about a line of argument the Campaign Against Sex Robots was using. They were mirroring themselves on the campaign against killer robots, which I think is worth taking seriously. Lethal autonomous weapons are a serious issue. I think it's worth trying to figure out a way to slow down development or limit the application of them. This campaign against sex robots was, instead, arguing that sex robots represent the wrong kind of relationship between people and will lead to objectification of women and children.

Strangely not men, and as a gay person, I find that a bit annoying. I certainly wouldn't want a female sex robot. I wrote this blog post, but then I actually turned it into a chapter in a new book about the ethics of sex robots, where I was comparing the arguments. First, of course, we have this basic issue of the overall argument that sex robots lead to objectification of people. Is that right? I think it's wrong, but it's a big complicated debate.

I think there's another important question, and that is: can you stop something useful by stopping sex robots, compared to stopping killer robots? The argument for stopping sex robots is something like: objectification of people is bad, and if sex robots lead to more objectification, we should stop the sex robots. This, of course, doesn't get at what you actually want to avoid. There are other ways of making people respect each other as people better. That seems to be where you actually get the biggest bang for the buck. You want to really go and fix that. Stopping a technology that might make things worse is a relatively weak way of doing it.

Now, if you think about killer robots, being killed is a bad thing. We want people not to be killed and if you stop killer robots, you at least remove that source of the people getting killed. You might certainly want to stop wars in other ways too, but killer robots are in themselves doing something that is harmful. I think there’s a fundamental difference in that. That leads to this overall argument that if you think sex robots are wrong, you should give good arguments for that.

You can imagine conservative arguments: that it's non-reproductive sex, or that you're having sex with something that's not of your own species, or that sex is immoral. Of course, the Campaign Against Sex Robots doesn't want to make those kinds of conservative arguments, but they are much stronger as moral arguments, even though I think they're totally wrong, than this indirect objectification argument.

Robert Wiblin: Just to make sure that I've got this right. You're saying with the killer robots, it's the fact that they're killing in itself that's bad. Whereas with sex robots, people having sex with sex robots isn't something you or these campaigners think is wrong in itself. They only think it's wrong because of secondary cultural effects that it might have, where people become less empathetic towards other real humans because they're able to mistreat robots.

Perhaps, something like Westworld, that phenomenon, where people become callous towards others. You’re saying we should focus on that second step, or you should find other ways to make people kind to one another and it’s not actually necessary to intervene with the sex robots directly because they’re only one factor among many that has indirect effects on this.

Anders Sandberg: Exactly, that's beautifully put. One can try to see that there are a lot of technologies that affect how we treat each other, and if you say that we should be banning all the technologies that make us objectify each other, that's going to be a very, very long list of technologies.

Robert Wiblin: TV.

Anders Sandberg: Yeah, TV. A lot of social media. And the problem is, of course, many of these also have positive effects. It's getting very, very complicated to judge. Now, in the case of killer robots, even in a just war it's bad for a soldier to be killed. Even if he was on the wrong side of the war, it's still bad for him that somebody shoots him, regardless of whether it's a human or a robot. But if we remove the killer robots, we'll have removed at least some of the killing. Now, there might be arguments, again, for why killer robots might actually be a nice thing.

There are some people saying, "Yes, they can be programmed to be more ethical," which I'm doubtful of. You might also say, "Yeah, but they might not have the human emotions that actually lead to soldiers behaving really badly." It's a more complicated discussion. Generally, of course, sex robots are a fun topic. I was asked to give a talk at a boys' school two years ago about this topic and everybody wanted to listen to it. Which, of course, gave me a great excuse not just to talk about ethics, but also things like consent and what we mean by that.

How can we even tell when something is worth having emotions about and can a machine give consent? It was interesting to see how the kids got interested in rather deep philosophy that way.

Robert Wiblin: Yeah, I’m pretty sure that this topic is going to make it into the hook for this podcast, even though it’s a fairly brief part at the end of the show.

Anders Sandberg: Well, it is a little bit like a flypaper topic, because it has this eternally interesting topic, sex, inside it. Which is interesting on its own, because we are the kind of creatures that really care about sex. Evolution has given us these properties; even if we then become philosophers and think about the more important abstract things, this meaty stuff really gets us going.

Robert Wiblin: Maybe we need to reframe some of our other issues. Like, what effect would the nuclear apocalypse have on sex? What effect would a pandemic have on sex? Then people will actually read about this stuff.

Anders Sandberg: The Fermi Paradox in the bedroom.

Robert Wiblin: So just to jump back a moment to the argument, I’m not sure that I would be completely convinced if I was one of these campaigners. So imagine that you thought the effect that sex robots would have on culture and on kindness was large, larger than, you know, most of the other things that we’re thinking about like, you know, violent television shows. So you think it has a particularly large effect. You also think it’s maybe possible to ban these preemptively in a way that it’s not really possible to ban television shows now.

And perhaps you also just don’t value the enjoyment that people would get from sex robots directly all that highly. Perhaps, you know, you just don’t value pleasure or you don’t think sexual pleasure is something that we should be pursuing. Or you just think, in fact, people won’t enjoy it very much, perhaps because it will damage their other relationships which they’re getting a deeper fulfillment from.

Anders Sandberg: Yeah. So I think one issue is that people are probably underestimating the value of a really functioning sex robot. Now, I’m also thinking people underestimate how hard it is to build the darned thing. When you start thinking about the amount of mass and how fast it has to move close to an unprotected human, it turns into a horrific engineering problem and an engineering safety problem. I’m not certain it can be resolved, actually.

The real question is, of course, could the sheer existence of a sex robot actually be bad for our society? And you can certainly imagine other kinds of robots that would be bad for society to have. I have argued that it’s a bad thing to have killer robots in our society and you could imagine theft robots, robots programmed to go around stealing stuff. We would obviously be better off not having those.

In that case, it’s also because they’re directly doing something illegal. But you can, of course, imagine other legal activities that we still don’t want to automate in society.

Robert Wiblin: Persuasion robots that are too persuasive or something?

Anders Sandberg: Yes. The problem is, of course, we're working very hard as a civilization to develop the persuasion robot. There is big money invested in it. Fake news is a big deal. If you could automate fake news, or automate the detection of fake news, or automate escaping the detection of fake news, you could make a fortune. So, actually, we are probably well on the way to getting persuasion robots.

That is, of course, also very scary because this might undermine a lot of important aspects of our culture. So maybe, actually, the sex is beside the point. Maybe even the killer robots are beside the point compared to the persuasion robots. But it’s also much harder to deal with because persuasion is a subtle activity. Maybe I’m persuading everybody listening now about some subtle things using subliminal messages, or my pronunciation or my philosophical views.

Robert Wiblin: We’ll cut that out in post-production for the episode.

Anders Sandberg: Perfect, perfect. Yes, just ignore the subliminals. However, it's a very subtle activity. On the other hand, with a sex robot or a killer robot, it's kind of obvious what you're doing. You can't really be discreet about those activities. So they are also easier to ring-fence. It's easier to discuss them, because with persuasion you could argue that, well, this is not a persuasion bot. This is an information bot which is just doing really compelling education for our children. And you might even mean it, because you are persuaded that you are right, and your ideology, which you subtly put into the system, is, of course, what children need to have.

So we have a lot of powerful influences of automation and we really need to discuss and analyze them.

Robert Wiblin: You’re also skeptical that sex robots even would have a negative cultural impact, right?

Anders Sandberg: Mm-hmm (affirmative).

Robert Wiblin: Do you want to make the case for that briefly, and then we’ll move on?

Anders Sandberg: Yeah. I generally think that people are pretty good at nuancing their views, because we are adapting new technologies for sex all the time. With basically any new technology, people will first think about the original use, and second, how can I use this for something sexual? So printing very quickly gave rise to pornographic novels, and the internet, of course, as we know, is for cats and pornography.

And of course people will be making sex robots too. Now, how useful are they to the average person? And I think that is complicated because love and sex are a fairly complicated thing for mammals like us. I quite often argue that we can divide them into roughly three subsystems. So one is the lust system about mating with whoever you want to mate with. And to some people, this is the primary thing. That’s really what we’re going for.

For most people the other two are equally important. And one is, of course, attraction. You need to find somebody you actually want to be together with. That is the falling in love part. And then you have the pair bonding system, staying together. After that first infatuation, that really rushing, romantic, Disney song part is over, you actually need to stay together for decades perhaps to rear the kids and have a family.

And these sub-systems can be differently active and different people put different emphasis. Some people just have this companionable, friendly relationship and it goes on for decades. Others are going in and out of passionate love affairs. Sex robots would just automate fulfillment of part of the lust system. And I think many people think that, well, that’s not the point because to me love is more about having a relationship to a person. And presumably a sex robot wouldn’t be that.

Once you can make a sex robot that actually is a person, now you have a really complicated moral dilemma. So now you haven’t solved anything. You’ve actually added a lot of complication. However, others would say, “Yeah, that’s great. I want to be together with my wife or husband but sometimes they are not around and then it might be very nice to have sex with somebody.” And given that we are very different, other people just stare at the first person and say, “What? You can’t do that.” We have a lot of social norms about it and a lot of religious norms. A lot of status and ideas.

But these ones tend to be updated. So right now we’re living in a kind of post-Tinder era where dating has become very much technologically mediated in a lot of subtle ways and probably transformed in ways we’re not even going to notice until a few decades down the road when people are starting to notice what kind of families you build when everybody met each other on the internet rather than meeting because of friends or meeting because of other arranged marriages.

So I do think the technology here has given us a lot more freedom. It’s just that we also get these weird, unforeseen effects which we’re also sometimes better off having.

Robert Wiblin: I guess I’m not inclined to believe that violent television or pornography has had massive cultural effects, or at least not massive harmful ones. ‘Cause it seems like the macro trends don’t line up with that. But it could have had small harmful effects and maybe human-like robots, the effects could be larger ’cause like it’s a better simulacrum of actually being a person and it could make us then, you know, it’s harder for the brain to distinguish between what’s real and what’s fantasy. So I guess it does seem like there could be reason for caution here at least.

Anders Sandberg: Oh, yeah, Oh, yeah.

Robert Wiblin: As with any technology, that, you know, it could be harmful in ways that don’t currently seem that likely.

Anders Sandberg: The thing I would be watching out for is if we could get supernormal stimuli that are really, really effective. So in ethology, the behavior of animals, it’s known that there are some stimuli that animals are looking for, for example, in looking for partners or looking for their parents, that are very simple. And if you just amplify them tremendously you get a much stronger response.

The classic example is the marking on the beak of a seagull that the chicks are using to detect their parent. If you just take a wooden stick and add three extra markings, they’re going to totally go for that instead of the actual parent. And you can, of course, imagine this happening with pornography and sex bots. In fact, if you think about the average pornographic actor or actress, they’ve kind of got rather exaggerated features.

And it might be exaggerated primary sexual features, but also, in the romance novels, it's kind of exaggerated social status. These are all various things we respond to. We're also developing this art of detecting what people respond well to, and providing it to them.

One risk might be that it's not so much the sex robots themselves that are dangerous, but that we get very good at finding things that people really want, even if it's not tremendously good for them. This might produce various interesting new forms of addiction. After all, we're trying to do that with food.

Once we've gotten beyond the point of getting enough food to survive, we want to make it taste good. Now we're making it taste good, look good, be ethical, so you really have a compelling reason to have it. If we take this to the limit for food, sex, and everything, we might end up in a world that is actually driving us more strongly than we might want. Because we would be tremendously distracted.

Robert Wiblin: Really good tasting food has made us overweight. And perhaps sex robots would make us overweight in some sexual sense?

Anders Sandberg: Well, maybe a really good sex robot would mean that we’re not going to leave the bedroom, which might have long-term effects for example on actual reproduction.

Robert Wiblin: Yeah.

Anders Sandberg: There are those people worried that exposure to pornography gives people unrealistic beauty standards. You don't need to invoke pornography, because, well, most beautiful people you see on television, et cetera, are already selected among the people who really look and behave well. And then we improve on that with a bit of computer graphics and makeup. The end result is, of course, that we have a world of unrealistic expectations.

Robert Wiblin: It’s hard to be attracted to the people you actually meet in real life.

Anders Sandberg: Yeah. That also suggests an interesting opening. We might want to look at the virtues of actually functioning in this kind of world. What does it take to live in a media-saturated world? What are the virtues you need if you're constantly surrounded by social media? How do you actually handle getting all this super tempting stuff? I quite often have this problem with books. I find a lot of books that are really interesting, but I only have 24 hours per day to read. What do I do about it? I can't read all the delightful books, and this feels tremendously frustrating. My response, of course, is to think, yeah, but it's so much better than the alternative.

A few centuries ago, it would be a big event in Oxford, that a new book had arrived. Most of the time, you only had to re-read the same 50 books that were in the library. Now, we have an overload of books. That’s actually a good thing. We need better ways of prioritizing. This is another reason I want life extension. My pile of unread books is growing. Eventually, I will get to each of the books, even if more books are arriving.

Robert Wiblin: Is that true in finite time, Anders? What if you keep living and each year you get more books than you can read?

Anders Sandberg: Well, if I only have a finite amount of time, I will eventually run out of time to read the books. This is why proton decay is the ultimate limit of my reading ambitions. Mathematically, however, if you had infinite time, then any book would be somewhere in the queue and I would eventually get to it. Even if others were adding books at a faster and faster rate.
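[Anders's queue argument can be sketched numerically. The reading and arrival rates below are invented for illustration, not figures from the conversation: even when books arrive faster than they are read, a book at any fixed position in the queue is reached in finite time.]

```python
# Sketch of the infinite-reading-queue argument: a book at a fixed
# position in the queue is reached in finite time, even when new books
# arrive faster than they are read. All rates here are invented.
read_per_year = 50
arrive_per_year = 200      # arrivals outpace reading, so the queue grows
target_position = 1000     # queue position of the book we care about

queue_length = target_position
years = 0
books_read = 0
while books_read < target_position:
    books_read += read_per_year
    queue_length += arrive_per_year - read_per_year
    years += 1

print(years)         # 20: the target book is reached in finite time
print(queue_length)  # 4000: even though the queue has kept growing
```

The queue only defeats you if you have finitely many years; with unbounded time, every fixed position is eventually reached.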

Robert Wiblin: With the discussion of sex robots, we’ve got the hook for the episode. Let’s talk about some other things you’ve written. You’ve talked about sleep minimization as a potentially important priority, but also something that might have unexpected downsides. What have you written about that?

Anders Sandberg: This began one dreary Monday morning when I was waking up. I had to get to a meeting. I was bleary, and I was thinking, why do we ever stay awake? What if we could just sleep all the time? And then, after five cups of espresso, as I was jumping to the office, I was thinking, "Why do we ever sleep? Why do we waste time with that, when we could be up and about doing useful things?" That led to thinking a bit about sleep and enhancement.

It's interesting that we haven't been thinking that much about sleep. We spend about a third of our lives asleep. If we could sleep better, that would presumably have a tremendous effect on human life. Indeed, when you think about how bad insomnia is for life quality, it seems that, yeah, we should probably be pursuing improvements in sleep quite a bit.

Robert Wiblin: Yeah, the other thing is, if you can shorten someone’s sleep needs by about an hour, then you extend their life, or their waking life by about three years, which is enormous.
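[Rob's figure can be checked with back-of-the-envelope arithmetic. The 70-year horizon is an assumed round number, not one given in the conversation:]

```python
# Back-of-the-envelope check of the claim that saving one hour of sleep
# per day adds roughly three years of waking life. The 70-year horizon
# is an assumption for illustration only.
hours_saved_per_day = 1
years_of_life = 70

extra_waking_hours = hours_saved_per_day * 365 * years_of_life
extra_waking_years = extra_waking_hours / (24 * 365)
print(round(extra_waking_years, 1))  # 2.9, i.e. about three years
```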

Anders Sandberg: Yeah. It seems like there's a tremendous amount of value here. We also don't understand sleep very well. Part of that is because it seems to be a really deep problem. There are loads of theories about why we sleep. Many of them seem to be not enough. For example, a very popular theory is that you're doing memory consolidation during sleep. The short-term information you learn during the day is stored temporarily in the hippocampus, and then moved into permanent storage in the cortex during sleep.

There are other versions with other memory management stories. But this doesn't explain why lab animals that are sleep deprived actually die. They would maybe have a memory crash, but why do they die? It seems that sleep is also important for homeostasis. It seems to be important for getting rid of waste products from the brain. There are a lot of other reasons, but we don't fully understand sleep per se.

I have an old paper with Nick Bostrom about the wisdom of nature, which looks at proposed enhancements and ways of choosing between them. One way of thinking is to say, "If this is such a good idea to change, why hasn't nature already done it?" Quite often, the answers fall into three categories. One reason might be that the trade-offs have changed. In our ancestral environment, nutrients were relatively scarce. We couldn't have brains using more energy, simply because we would starve to death.

Now, nutrients are not scarce. We have too much of them instead. Actually, that enhancement might be a good idea. There might be other conditions that have not changed. In that case we should be careful about changing. There are other things of course that evolution might not be able to do. We can’t evolve it, because of the biological limitations. You can’t evolve a radio transmitter, or a method of making diamond bones.

In the case of diamond, the molecular bonds are too strong to be made using the kind of enzymes we have. And then, of course, sometimes human values diverge from evolutionary values. Now, if you find something that is very preserved in evolution, you should suspect it's important, even if you don't understand why it's going on. And sleep seems to be one of these things. It seems that actually being unconscious, even though predators might get you while you are asleep, is still worth having. So we don't know why, really, we sleep. But it seems that messing with it, removing it, might actually be a very bad idea.

At the same time, there seems to be a high degree of value in improving sleep. At the very least, we should make sure that we can sleep well, because it affects our function and health tremendously. People who sleep too much or too little have much higher mortality.

Robert Wiblin: Though it's unclear if that's causative.

Anders Sandberg: Yeah. It's a complicated issue. Depressed people sleep a lot. You can sometimes make them less depressed by forcing them to sleep less. They're not happy about it, but they're less depressed. They're probably more angry.

Robert Wiblin: They’re angry, but animated, I guess.

Anders Sandberg: Yes. Similarly, of course, sometimes if you have a lot of pain, you can't sleep. Even if you try to control for this, there still seems to be this remaining effect. It's also hard to judge, because different people have different sleep needs. Some people are like me and need exactly eight hours per night. That's kind of my optimum. Others are very variable.

Some people take pride in how little they sleep, quite often to build a personal myth about themselves. Of course, there is a phase a lot of young transhumanists go through when they get into university, of trying polyphasic sleep. A lot of my friends, me included, have been trying this idea.

Yeah, you're awake for four hours, then you take a nap for about an hour, and then you're awake for four hours. That way you're active throughout the day. This typically lasts for about one or two weeks, before you realize, I actually need to have a functioning social life. This is not compatible with being around other people who are running on the normal rhythm.

Now, the interesting part here, to me, is rather: could we modify sleep? Yeah, we probably can. We have stimulants that can do that. We probably should find better ways of controlling sleep, because insomnia is actually a really bad thing. Shift work disorder is also causing a lot of accidents and trouble. We should probably expect that it is going to be tough to optimize sleep, but I think there is a tremendous amount of value. This is a very neglected area.

A third of our lives is not being investigated properly. I think there's plenty of room for enhancement. Yes, think about the pleasure of going to bed when you're really tired. That's really great. Waking up rested is also really great. The state in between … Actually, most of our dreams are fairly negative. If you wake somebody up at random and ask them about their state, you find that most of their [inaudible 02:38:51] is actually fairly negative. We don't remember it much. Maybe we want to improve the state in between too.

Robert Wiblin: Yeah, I’ve heard this. I guess, I don’t feel like it’s true of me. I feel like usually when I wake up, my dreams are close to neutral. At least, I don’t know, I don’t feel like sleeping is unpleasant.

Anders Sandberg: Yeah.

Robert Wiblin: Perhaps it’s just that whenever I wake up in the morning, I wanna continue sleeping. Makes me perceive that I am enjoying sleeping, even if I’m actually not.

Anders Sandberg: Yeah, I think that's true. Our ability to perceive the sleep state itself is actually quite limited. Many people also get surprised when they start comparing dreams, because they have beliefs about what's possible and not possible in dreams which are utterly dissimilar. Some people say, "Yeah, there are no shadows in dreams," at which others say, "Wait a minute. I'm totally seeing a load of shadows." Others explain, "Oh no, there is no real color in dreams."

“Oh, I see loads of colors.”

“Yeah. There’s no smell in dreams.”

“Oh, I experience smells.”

Our brains are working differently. Dreams in general, and even the deep sleep states, which are more a rehearsal of what we did during the day. They are there, but how they function for each individual is going to be tremendously different. Some people are lucid dreamers and can kind of take over during sleep. I find it amusing to do. It's not terribly useful. Others claim, "Oh, this way I can work and train and think even when I'm asleep," which sounds very effective, but also somewhat boring.

Robert Wiblin: Let’s do one last thing that you’ve written about. You’re in the habit of making predictions each year, is that right? At the start of the year you make predictions for the year ahead?

Anders Sandberg: I’m trying to do it.

Robert Wiblin: Yeah.

Anders Sandberg: I also find that it is surprisingly hard to come up with good predictions that last for a year. This year I actually failed at doing it in the first two weeks of the year, so I haven't got predictions, because after two weeks I realized I already knew more; we were getting more information. Now it would be a bit unfair.

Robert Wiblin: Interesting. Okay. You did this last year, right?

Anders Sandberg: Yeah.

Robert Wiblin: How did you do? Were you overconfident? Underconfident? Did you find any patterns?

Anders Sandberg: Generally, I found that I was a bit overconfident about the really likely stuff. A lot of things that I should have been able to nail really well, I was a bit overconfident about. I was decently well calibrated on the kind of mid-level questions, which is interesting.

Robert Wiblin: Just for listeners, you're putting probability estimates on all kinds of things happening by the end of the year.

Anders Sandberg: Yeah.

Robert Wiblin: And then you can see, you know … if you said that something was 50% likely, did it actually happen 50% of the time?

Anders Sandberg: I think it’s very useful to try to do this, and keep doing it, so you can get a sense of how much you can trust your own predictions.
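[The calibration exercise described here can be sketched in a few lines: group predictions by stated probability and compare each bucket's forecast with the observed frequency. The prediction data below is invented purely for illustration.]

```python
# Minimal calibration check: bucket forecasts by stated probability and
# compare with how often the predicted events actually happened.
# The prediction data is invented purely for illustration.
from collections import defaultdict

predictions = [  # (stated probability, did it happen?)
    (0.9, True), (0.9, True), (0.9, False),
    (0.5, True), (0.5, False),
    (0.1, False), (0.1, False), (0.1, True),
]

buckets = defaultdict(list)
for p, happened in predictions:
    buckets[p].append(happened)

for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: happened {freq:.0%} of the time "
          f"({len(outcomes)} predictions)")
```

A well-calibrated forecaster's observed frequencies track the stated probabilities across buckets; a year's worth of predictions gives each bucket enough outcomes for the comparison to mean something.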

Robert Wiblin: Do you think it’s changed your behavior, or made you better at forecasting things?

Anders Sandberg: A little bit. I think, in general, it made me realize that asking the right questions might actually be much harder. Finding a good generic list of questions that I should estimate, that will come true or not within one year, and that are also stated clearly enough, is surprisingly hard. It's actually worthwhile learning to ask questions better. This is a bit like evaluating futurists. If you read Ray Kurzweil's predictions, how exact is he?

Well, it turns out it's quite often something that sounds like he's making a very firm prediction, but when you read it and then try to check whether it's actually true, it's problematic. For example, in The Singularity Is Near, he's actually predicting autonomous cars for this time. Okay. Is it right?

Well, we have autonomous cars. I've seen them running around California, and the streets of Oxford. They're not commercial. Is that a hit or a miss? Well, it's a matter of judgment. And quite often something that sounds like a very specific prediction actually turns out to be less specific. Besides getting calibration right, I think learning how to make statements is actually a surprising challenge.

Robert Wiblin: That are unambiguous, I guess.

Anders Sandberg: Yeah.

Robert Wiblin: Everyone would agree whether they’ve been accomplished or not.

Anders Sandberg: Yeah. When you try doing this, you will probably realize that that's super hard. Actually getting people to agree on what the state of the world is, is surprisingly hard.

Robert Wiblin: Yeah, I've often tried making actual monetary bets with people, you know, where we disagree about what's gonna happen. Mostly just because it's fun. It is often very hard to, yeah, come up with an agreed outcome that you both wanna forecast on.

Sometimes you're forced to rely on a third party both of you trust to just kind of say, overall, who was right in this case. That makes it kind of difficult to make these bets, cause now you gotta find someone to do it, and tell them what you think is gonna happen in some kind of thick way, so they can appreciate your perspective. Yeah, it's a sort of shame.

Anders Sandberg: I do think it's important. I personally don't like betting, just because I don't like betting per se, but I'm very much in favor of people making more bets on the future. I'm very fond of scientific bets, where various researchers make bets about whether you'll find supersymmetry, or the Higgs boson, or something else by a certain date. Some of them, of course, are kind of symbolic and almost epic.

You have, for example, the bet between Paul R. Ehrlich and Julian Simon about the cost of raw materials, which almost symbolized the battle between the ecologists and the economic cornucopians. That one, at least as the standard story goes, of course ends with the conclusion: ah yes, [inaudible 02:44:05] the price of raw materials going down. Except, of course, had we waited 10 years more, Ehrlich would have won the bet.

Robert Wiblin: Yeah.

Anders Sandberg: The stories we tell can be quite different from reality.

Robert Wiblin: Yeah, I think if you look at a broader time period, you find that in about two-thirds of the time periods the prices went down, and in about one-third they went up. There kind of is a somewhat longer-term trend downwards, but it's very volatile. You can certainly have extended periods where the prices of raw materials go up. That's the kind of complicated story that doesn't tend to get reported when you have these kind of mythical events.

Anders Sandberg: Sometimes the mythical events are also useful as a starting point for further discussion. I do think we actually want to create more of these mythical events. Actually making bold predictions is good. I typically like to tell journalists who want a timeline, when will we get human-level AI, or when will we find aliens, that this is of course an absurd question. Actually, we can't put a good number on it. I can give arguments about it, but any number is just going to be silly. Yet, sometimes it's useful to make big bold predictions, but I think you want to make them more about mechanisms …

Robert Wiblin: Okay.

Anders Sandberg: Rather than an exact number of when something will happen.

Robert Wiblin: Well, we haven't spoken that much about careers advice, or how people could work at the Future of Humanity Institute. I'll speak to one of your colleagues, I think, in the coming months, so people who have really enjoyed this conversation and want to research these kinds of topics can get some advice on what to study and how they can potentially end up being one of your colleagues.

Anders Sandberg: Yeah. I think my own main advice … I'm very much a generalist. I'm interested in everything. That is both a blessing and a curse. It's also very useful to know a little bit of everything, because that allows you to know: hmm, I know where the knowledge about this problem is. I can go and ask, or find some textbooks and dig it out, and apply it to the main thing I'm working on. It allows me to tie things together. That is part of what we need at the Future of Humanity Institute.

The future is, by its nature, a mix of a lot of factors, which makes it naturally interdisciplinary. You need to have somebody around who knows a little bit about things, whether it's solid state mechanics, a little bit of economics, or a little bit of philosophy, even if you're not perfect at any of them. My career advice would be: make sure you read the introductory textbooks on random topics you're not supposed to be studying. Read random things, even when you don't know what's going to be good for you. Later on, and it might be much later, it could actually turn out to be useful. Typically, the cost of reading at least a few introductions is not that high.

Robert Wiblin: My guest today has been Anders Sandberg. Thanks so much for coming on The 80,000 Hours Podcast, Anders.

Anders Sandberg: Thank you. It’s been delightful.

Robert Wiblin: I hope you had fun listening to me and Anders!

If you did you can do me a big favour by telling your friends about the show and leaving us a review on iTunes.

The 80,000 Hours Podcast is produced by Keiran Harris.

Thanks for joining – we’ll be back with a new guest next week.

About the show

The 80,000 Hours Podcast features unusually in-depth interviews with people working to solve the world's most pressing problems. We invite guests pursuing a wide range of career paths - from academics and activists to entrepreneurs - to share their wisdom, so that you can better understand the world and have a greater impact with your career.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing keiran at 80000 hours dot org.


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.