Rob’s intro [0:00:00]
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Welcome back to the new year! We’ve got lots of great episodes working their way through the pipeline for you.
Will MacAskill’s first appearance on the show was a crowd pleaser, and I expect this one to be equally or more popular. Among other things, we talk about why apparently commonsense moral commitments may imply you should sit at home motionless, and how to fix this problem. What an altruist should do if they’re risk-averse. Whether we’re living at the most important time in history. And Will’s new views on the likelihood of extinction in the next 100 years.
While we’ve covered some of these issues before, Will has a lot of new and sometimes unexpected opinions to share. And we’ve got chapters in this episode to help you skip to the section you want to listen to.
Before that a handful of quick notices.
First, you might be pleased to know that people who started listening to our previous episode with David Chalmers, our longest release yet, on average finished 59% of it. The whole episode is 4h42m long, and on average people are making it through 2h45m of it. That’s fantastic commitment, and I think vindicates our in-depth approach to interviews.
Second, in my last interview with Will we discussed the book he was writing at the time, called Moral Uncertainty. That book is now coming out in late April and you can pre-order it on Amazon. We’ll include a link to it in the blog post associated with the show. Of course, keep in mind this is a pretty academic book, so it’s most suitable for people who are really into the topic.
Third, as we discuss in the episode Will works at the Global Priorities Institute at Oxford. GPI is currently hiring Predoctoral Research Fellows in Economics. This is a great opportunity for anyone who might do a PhD in economics, or is already doing one.
Unfortunately the closing date for expressions of interest is only 12 hours after I expect this episode to come out — midday UK-time on Friday the 24th of January. We’ll link to that job ad in the show notes in case you might have a shot at getting an application in really quickly.
But more generally, I know many economists and aspiring economists listen to this show, and the Global Priorities Institute is very interested in getting more economists on its research team, or having more future economists visit in some form. So if you find this conversation interesting, you should go fill out the expression of interest form at globalprioritiesinstitute.org/opportunities/ . It only takes a few minutes to do that, and I know they’ll be interested to hear from you.
You can also, as always, find dozens of other opportunities on our job board at 80000hours.org/jobs
Finally, before we get to the interview, in this episode we bring up a wider range of global problems than we usually have a chance to cover.
For practical reasons we as an organisation have to focus on knowing a serious amount about a few particular problem areas first. We’ve mostly worked on AI risk, biorisk, and growing capacity for people to do good (e.g., through doing global priorities research) because we think these issues are really important and focusing on them is how we can move the needle the most. However, this has meant our research is on a narrower range of things than the full portfolio of challenges we think our readers would ideally work on solving.
So we’re excited when we see people exploring other ways to improve the long-term future, especially ones that they personally have uniquely good opportunities in.
We’ve now got a decent list of some ideas for how people can do that at 80000hours.org/problem-profiles/. If you scroll down to the subheading ‘Potentially pressing issues we haven’t thoroughly investigated’ there’s a few dozen listed with brief analysis, which might help you generate ideas.
Alright, without further ado, here’s Prof Will MacAskill.
The interview begins [0:04:03]
Robert Wiblin: Today, I’m speaking with Will MacAskill. Will will be well known to many people as a co-founder of the effective altruism community. He’s an associate professor of philosophy at Oxford University, currently working at the Global Priorities Institute, or GPI for short, a research group led by Hilary Greaves, who was interviewed back in episode 46. Will has published in philosophy journals such as Mind, Ethics and the Journal of Philosophy. He co-founded Giving What We Can, the Centre for Effective Altruism and our very own 80,000 Hours, and he remains a trustee on those organizations’ various boards. He’s the author of Doing Good Better and another forthcoming book on moral uncertainty, and is in the process of brewing a new book on longtermism. So thanks for coming on the podcast, Will.
Will MacAskill: Thanks for having me on again, Rob.
Robert Wiblin: Yeah. So I hope to get to talking about whether we are in fact living at the most important time in history. But first, what are you working on at the moment and why do you think it’s a good use of your time?
Will MacAskill: Well, the latter question is tough. Currently, I’m splitting my time about threefold. About a quarter of my time is ongoing work with the Centre for Effective Altruism and issues that generally come up as a result of being a well-known figure in the EA movement. Another quarter of my time is spent on the Global Priorities Institute: helping ensure that goes well, helping with hiring, helping with strategy, and some academic research. And then the bulk of my time now, which I’m planning to scale up even more, is work on a forthcoming book. Tentatively, the working title is “What We Owe the Future”, which is aiming to be both readable by a general audience and, hopefully, something that could be cited academically. It’s really making the case for concern about future generations, which, when combined with the premise that there are so many people in the future, implies the overwhelming importance of future generations, and then exploring, well, what follows if you believe that. And for that purpose, I’ve been on a speaking tour for the last few weeks, which has been super interesting.
Robert Wiblin: Talking about longtermism?
Will MacAskill: Yeah, I’ve tried to create a presentation that’s the core idea and core argument of the book, in order that I can get tons of really granular feedback on how people respond to different ideas and what things I’m comfortable saying on stage. Sometimes I think I might believe something, but then do I believe it enough to tell a room full of people? Sometimes I feel like my mouth starts making the motions, but actually I don’t really believe it in my heart, and that’s pretty interesting. So I’ve been quite scientific about it. Everyone in the audience gets a feedback form: they get to score how much they knew about the ideas beforehand, and how convincing they found the talk. My key metric is: of people who put four or less for the first question, how much they knew, what proportion of them put six or seven out of seven for how convincing they found the talk? And I’ve given a few different variants of the talks.
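For concreteness, here is a minimal sketch of the scoring rule Will describes. The survey format and the sample numbers below are hypothetical; only the rule itself (prior knowledge of four or less, convincingness of six or seven) comes from the conversation.

```python
def key_metric(responses):
    """responses: list of (prior_knowledge, convincingness) pairs, each 1-7."""
    # Restrict to people who knew little about the ideas beforehand (four or less)...
    newcomers = [conv for prior, conv in responses if prior <= 4]
    if not newcomers:
        return 0.0
    # ...and ask what proportion found the talk highly convincing (six or seven).
    convinced = sum(1 for conv in newcomers if conv >= 6)
    return convinced / len(newcomers)

# Hypothetical feedback forms from one talk:
forms = [(2, 7), (4, 6), (3, 5), (1, 6), (5, 7), (4, 3)]
print(key_metric(forms))  # 0.6
```

Note that conditioning on low prior knowledge is what makes this a measure of persuasion rather than of preaching to the converted.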
Robert Wiblin: Yeah. What fraction of people are you convincing? Roughly?
Will MacAskill: About 50%, yeah. Though the data’s quite noisy, so it’s not actually a scientific enterprise. Maybe I’ve got that up to 60% or something over the course of the period.
Robert Wiblin: In terms of the granular feedback, have you learned kind of any arguments that don’t go down well or some that particularly do?
Will MacAskill: Yeah, I feel like I’ve learned a ton actually, and I’m still processing it. One big thing for sure is that many more people are willing to say future people just don’t matter than I would have expected. So in moral philosophy, the idea that when you’re born or when your interests are affected is morally irrelevant, that’s just absolutely bread and butter; no one would deny that. And so I just assumed that would extend to the wider world, especially if I’m talking on campuses, to kind of lefty audiences in general. But no, actually, probably the most common objection is–
Robert Wiblin: That future generations don’t matter.
Will MacAskill: Yeah. Or just why should I care? Yeah. And then the second thing that was most interesting is I was expecting a lot of pushback from the environmentalist side of things. I do talk about the importance of climate change, and I talk about the fact that species loss is another way of impacting the long-run future, but they’re not the focus of the talk. The focus is on other kind of pivotal events that could happen in the coming 50 years. And I was expecting to get more pushback from people who feel that any moment you’re not talking about climate change is attention taken away from the key issue of the time. And certainly some people said that, but I think the proportion of people on campuses who are, let’s say, kind of deep environmentalists is just lower than I would’ve thought.
Robert Wiblin: Interesting. Yeah. That’s maybe one of the most common pieces of feedback on the podcast in terms of content or substance, is that we don’t talk enough about climate change or don’t think enough about environmentalism or deep ecology and things like that. So yeah, I’m kind of surprised that maybe that’s not the case. Was it one of the more common kind of substantive critiques that would be–
Will MacAskill: Yeah, it still was, because the way people objected was very spread out; that definitely came up. I think people were just more happy with the fact that I was saying, “Yes, climate change is a super important issue, it’s not the focus of this talk”, and people were actually quite open to other issues. Another little piece of evidence there: Bill McKibben, who’s an academic at Middlebury College, I think, and has for many years been an environmental activist.
Will MacAskill: He’s got a recent book where, funnily enough, it’s making almost exactly the same argument that I wanted to make, which I’m very happy about: that we should be really concerned about genetic engineering of humans and artificial intelligence. He’s saying, “Back 30 years ago, when climate change was just nascent, you could really change the policy landscape. Things hadn’t gotten entrenched or sclerotic”, as you love to say, Rob. On the podcast I’ve never heard the word ‘sclerotic’ as much as–
Robert Wiblin: Personal favorite, yeah.
Will MacAskill: But now it’s the case that for advances in biosciences and AI, we’re in that situation at the moment. And so it was really great actually seeing someone who’s this, you know, long-time climate activist coming up with the same framing of things that I was thinking.
Robert Wiblin: Are there any arguments that have gone down better than you’ve expected?
Will MacAskill: I talk about the fact that future generations are disenfranchised in the world today. They don’t have a voice, they don’t have political representation, they can’t trade or bargain with us. That goes down pretty well.
Robert Wiblin: This kind of reminds me of, I guess, people trying to write books by posting chapter by chapter on blogs, or posting their ideas on Twitter or Facebook and getting feedback that way. So it’s just an in-person way of doing that.
Will MacAskill: Yeah, exactly. The thing that I most thought of was the Y-Combinator advice of ‘talk to your users’. Anne and I ran a study on Positly, you know, trying to get some general sense of how people react, but the sorts of people I want to talk to are undergraduate and graduate students on college campuses, who’re pretty close to the target demographic. So nothing really substitutes for being able to interact directly with those people and see which things go down well and which things don’t.
Robert Wiblin: Do you worry that this could lead you to kind of pander to the median undergraduate student, in a way that’s maybe not as intellectually honest, or as appealing to you? Maybe you just want to talk to the people who are most keen on this view, rather than try to get the person who’s kind of resistant to be okay with it.
Will MacAskill: Oh, great. Well, it’s interesting. I thought you were going to go a different way, which is that there’s definitely a tension I’m feeling between the pull towards the general audience and the pull towards kind of academic seriousness. And so I’m trying to get people at GPI somewhat invested in the book too, so that I have them pulling me on one side and then probably the publishers and so on pulling me on the other side. Then there’s a question of, well, are you trying to make it just okay for everybody, versus something some people really love? I think I’m aware of that. So my key metric was the proportion of people who love it.
Will MacAskill: I would have actually gone for the purer metric of just the proportion of people who give it seven, but that was just too noisy. So six or seven. And it turns out people at Cambridge just don’t give extreme scores. I didn’t get a single score that was above six or below four.
Robert Wiblin: Interesting.
Will MacAskill: Yeah, something like that. From like a hundred people.
Robert Wiblin: Is that something with the grading? Cause I know at Cambridge, right, it’s very rare to get a score above 85% or something.
Will MacAskill: Maybe–
Robert Wiblin: Or just, “We just don’t use sevens–”
Will MacAskill: I just thought maybe it’s British people versus Americans who’re like more likely to–
Robert Wiblin: So you did it in America as well?
Will MacAskill: Oh yeah, I did Stanford, Berkeley, Harvard, MIT, Yale, NYU, Princeton.
Robert Wiblin: Any big differences other than the scoring?
Will MacAskill: Yeah, I think the differences were more driven by the nature of the local groups than by the universities.
Robert Wiblin: I was thinking maybe Americans are systematically different than Brits.
Will MacAskill: Oh, okay. Yeah, I mean, obviously they are. Especially on college campuses, Brits are more academic on average. Obviously you get super-academic people in US universities, but, you know, Oxford and Cambridge select only on the basis of academic potential, whereas in the US there’s more of a sense of wanting well-rounded individuals. Don’t get them in Britain!
Will MacAskill: There’s legacy students, there’s athletes… there’s people who are just future politicians. And that varies by school. But then on the other hand, people in the US are more entrepreneurial. They’re more kind of go get it and I think the cultural difference is quite notable actually.
Robert Wiblin: So it seems like your life has changed a fair bit over the last few years. You used to be doing more organizational work, and now you’re more on the academic track.
Will MacAskill: That’s right, yeah. There have been two waves I’ve vacillated between. I obviously started very much on the academic track; then there was a period of several years of setting up Giving What We Can, 80,000 Hours and the Centre for Effective Altruism, where I was clearly much more on the, you know, operational side of setting up these nonprofits; then writing Doing Good Better and finishing up my PhD, which was a long chunk when I was back in research mode. And then at that point I moved back to working closely with 80,000 Hours and then running the Centre for Effective Altruism. And now, finally, the outside view doesn’t believe this, but I’ve finally settled.
Will MacAskill: Yeah. I mean, I think I extended… The period when I was a graduate student, I think it made sense to be kind of riding two horses at once. I think I probably should have focused earlier than I did, whereas now I’m feeling much more confident that I’m kind of making my bets, and that makes sense. EA is big now. It’s going to last a long time. We no longer need people trying to spin plates all over the place. We need more people who are willing to commit for longer time periods, and that applies to me as much as it does to anyone else.
Robert Wiblin: Yeah, are you enjoying this phase more? You seem kind of happier these days.
Will MacAskill: Yeah, I’m enjoying it a lot more actually. You know, that’s not a coincidence. There’s a whole set of things that are correlated with each other, where the causation goes both ways, of what I enjoy most, what I think I’m best at, and where I think I’m having the most impact. And it’s definitely, yeah, clear to me that if I think about running an organization and so on, I can do it, but I don’t think I’m 99th-percentile good at that. Whereas the thing that I’m best at, and then end up enjoying most, and then also think I get the most impact out of, is this kind of in-between position, between academia and the wider world. Taking ideas, not necessarily being, again, the 99th-percentile academic, not being Derek Parfit or something, but instead being able to crystallize those ideas, get to the core of them and then transmit them more widely.
The paralysis argument [0:15:42]
Robert Wiblin: Let’s turn now to this paper that you’ve been working on, which I really enjoyed because it takes a very cheeky angle, kind of cornering people who I am inclined to disagree with already anyway. It’s called the paralysis argument and you’ve been writing it with your colleague Andreas Mogensen. It has this pretty fun conclusion… how would you describe it?
Will MacAskill: Okay, well I’ll start off with, it’s not even a thought experiment, just a question. So I know you don’t drive, but suppose you have a car. Suppose you have a day off and you’re not sure whether to stay home and watch Netflix or to do some shopping, and you think, well, suppose you go shopping: you drive to various different places in London, buy various things and come back over the course of the day. And my question is, “How many people did you kill in the course of doing that?”
Robert Wiblin: Well, I know the answer, but I guess the naive answer is no one.
Will MacAskill: The naive answer’s nobody. And I’m not talking about, well, you could have spent your time making money that you could have donated. I’m not talking about your carbon emissions. Instead what I’m talking about is the fact that over the course of that day, you have impacted traffic. You’ve slightly changed the schedules of thousands, maybe tens of thousands of people. And on average, over the course of someone’s life, which is about 70 years, or 25,000 days, a person will have about a child. So if you’ve impacted tens of thousands of person-days, then statistically speaking, you’ve probably influenced the exact timing of a conception event. And what does that mean? Well, it means you’ve almost certainly changed who got born in that conception event. In a typical ejaculation there are around 200 million sperm.
Robert Wiblin: Things you learn on this show!
Will MacAskill: I know, it’s a little factoid that everyone can–
Robert Wiblin: Dine out on?
Will MacAskill: Yeah, exactly. So if two people are having sex and going to have a child, and the timing of that event changes ever so slightly, even by 0.1 of a second, almost certainly it’s going to be a different sperm that fertilizes the egg, and a different child is born. But now that a different child is born, they’re going to impact all sorts of stuff, including loads of other reproductive events, and so that impact will filter out over time. At some point, it’s hard to assess exactly when, but let’s say in a hundred years’ time, basically everybody’s a different person. But if you’re having such a massive impact on the course of the future by driving to the shops, well, one thing that you’re going to have done is change when very many people die. Even just looking at automobile accidents, I think 1-2% of people in the world die in car crashes, and it’s obviously very contingent when someone dies. So over the course of this next hundred years, when all these identities of people are becoming different as a result of your action to drive to the shops, that means that loads of people who would have existed either way will die young; they will die in a car crash that they wouldn’t have otherwise died in.
Will MacAskill: And now, in expectation, exactly the same number of people will have been saved from car crashes and will die later than they would’ve otherwise done. And that’s where the distinction between consequentialism and nonconsequentialism comes in. From a consequentialist perspective, if you’ve caused the early deaths of a million people and extended the lives of a million other people by just the same amount, well, that’s just the same; it washes out. So the consequentialist doesn’t find anything troubling here. But the nonconsequentialist endorses the following two claims. And I’m not saying all nonconsequentialists do, but a paradigm nonconsequentialist view endorses an acts/omissions distinction, such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms, where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of benefit.
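The expectation argument above can be sketched numerically. The number of identity-changed future people below is purely an assumption for illustration; only the 1-2% car-crash figure is from the conversation:

```python
future_people_with_changed_identity = 1_000_000
p_die_in_car_crash = 0.015   # midpoint of Will's "1-2% of people"

# People who now die in a different (often earlier) crash than they
# otherwise would have, because your trip rescheduled everything downstream:
deaths_caused = future_people_with_changed_identity * p_die_in_car_crash

# In expectation, the identity changes also avert crashes for just as many:
deaths_averted = future_people_with_changed_identity * p_die_in_car_crash

print(deaths_caused, deaths_averted)  # 15000.0 15000.0
```

For the consequentialist these two numbers cancel; for the nonconsequentialist with a harm-benefit asymmetry, the 15,000 harms caused weigh more than the 15,000 benefits conferred, which is what generates paralysis.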
Will MacAskill: And if you have those two claims, then you’ve got to conclude that, for these perhaps millions, certainly hundreds of thousands of people who have died young as a result of what you’ve done, the fact that you’ve caused those deaths is worse than the corresponding amount of benefit from the fact that you’ve saved the same number of lives. And so, if you want to avoid causing huge amounts of harm that are not offset by the corresponding benefit, then in every instance where you might affect these reproductive events, you ought to just do nothing: do whatever is the omission. And in the paper we don’t take a stand on what counts as omitting. It could be the Jain practice of sallekhana, where you sit motionless until you slowly starve to death; the Jains defended that as the best way to live on the grounds of doing no harm. It could also be that you just act on every impulse and go with the flow. But it’s whatever this nonconsequentialist view decides is an omission.
Robert Wiblin: It doesn’t sound like a normal life?
Will MacAskill: It’s not going to be a normal life. Yeah. You’re going to be extremely restricted in what you can do.
Robert Wiblin: Yeah. Okay. So the basic reason that you’re getting this result is that we’re drawing an asymmetry, a distinction between creating benefits and causing harms. And so inasmuch as every action creates lots of benefits and lots of harms, it seems like everything’s going to be forbidden, unless you can find some neutral thing that you can do that doesn’t cause benefits or harms in the relevant sense.
Will MacAskill: Yeah, exactly. And for clarification, for this audience, who are probably very familiar with utilitarian and consequentialist moral reasoning: there’s a kind of wider project at the Global Priorities Institute of thinking, well, how does longtermism look if you’re taking alternative moral views? And so this is one example of that: if we’re reasoning seriously about the future and we’re nonconsequentialist in this paradigmatic way, what follows?
Robert Wiblin: Yeah, so that’s kind of intuitive, but I guess I found the paper as I was trying to read it a little bit confusing. It’s very philosophical.
Will MacAskill: It’s kind of dense. It’s definitely a philosopher’s paper.
Robert Wiblin: Yeah. I guess there’s also this distinction… So there’s this distinction between harming people and benefiting them, and then there’s also this distinction between harming people and allowing people to be harmed, which seems to be relevant. Do you want to explain why you end up talking quite a bit about that?
Will MacAskill: So the key question, and if I were going to guess at the most promising strand for nonconsequentialists to try and respond to this, is to say, “Well, yes, we think that in general there’s a distinction between actions and omissions”. So for example, most people intuitively would say that if I saw you, Rob, drowning in a shallow pond and then walked on by, that would be very wrong, but it wouldn’t be as wrong as if I strangled you right now. It’s quite intuitive that there’s a difference there.
Will MacAskill: Yeah. But there’s a question of, well, okay, what if I kill you via driving to the shops and then causing different reproductive events that then have this kind of long causal chain that results in your death? Is that still an action or is it an omission? There’s definitely a sense in which it’s an act. Intuitively it kind of seems like an action. Like I’ve moved. I did this like positive thing of driving to the shops. I didn’t know it was going to kill you. That’s a distinction. But, it still seems like a positive action. But perhaps the nonconsequentialist can come up with some way of carving that distinction such that for these kind of very long run causally complex effects, they just count as omissions or something.
Robert Wiblin: Okay. So they would try to get out of it by saying, “Oh, you’re not actually harming them. You’re merely allowing them to be harmed, because what you’re doing wasn’t an action in the relevant sense of actively causing someone harm”.
Will MacAskill: That’s right.
Robert Wiblin: And so that’s going to get us out of this idea that harming people is not merely bad, but prohibited.
Will MacAskill: That’s right, but then the question is, well, can you have an account of acts and omissions that gives us that answer? And that’s where it starts to get very in the weeds and more technical, because the existing accounts of acts and omissions get quite complicated. There is one account that is independently very influential, which is Bennett’s account. On this account, suppose I make something happen or cause some event to happen. That counts as an action if the way you would explain that happening involves some bodily movement of mine that is a very small part of the overall space of all the bodily movements I could have taken.
Robert Wiblin: Very intuitive; I think that’s what everyone meant all along.
Will MacAskill: I mean I think it’s actually kind of good to say.
Robert Wiblin: I guess at first blush it sounds–
Will MacAskill: First part is really complicated. I always forget exactly how to state it too.
Robert Wiblin: So, okay. We started out with this intuitive thing that if your actions cause harm, it’s worse than if your actions cause benefit, and indeed actively harming people through your actions is probably prohibited. And then we’ve ended up with this kind of absurd conclusion that any actions you take are probably forbidden ethically. I guess one has to suspect that something’s gone wrong here, right? Because it’s so counterintuitive. So I suppose, as much as I’d love to skewer deontologists and find ways that their views are incoherent, you’d have to hope that there’s probably some solution here, some way that they could patch the view that saves them. Do you want to discuss the various different attempts that one could make?
Will MacAskill: Yeah, I mean, it’s not totally obvious to me. I do treat it as a reductio: if I were a nonconsequentialist I’d want to give up one of my starting premises rather than endorse that conclusion. But it’s not totally obvious. From my perspective, it seems to follow quite naturally from the underlying intuitions undergirding this style of nonconsequentialism, which is, well, it’s worse to harm than to benefit, and we happen to be in this world which is so incredibly complicated that your actions inflict huge harms. But I agree, and, you know, from the feedback we’ve gotten from nonconsequentialists… Actually, at one journal we got to the last stage, and it was a vote among the editors, and they all decided they didn’t like the paper, but for different reasons. One of them was like, why is this a reductio? She just endorsed the conclusion.
Robert Wiblin: A Jain, perhaps.
Will MacAskill: People vary a lot.
Robert Wiblin: So someone who’s sympathetic to consequentialism just looks at this and says, “Oh, this just demonstrates the problem with the asymmetry between harm and benefit”. For a consequentialist who doesn’t feel the appeal of that asymmetry, it’s just very easy to say, “Well, I never thought there was an asymmetry to begin with, so this is no problem now”.
Will MacAskill: Yeah, exactly. That’s what I think the rational thing to do is. I think it’s a way of demonstrating that we shouldn’t have had that asymmetry. But then that’s really important, because even if you might think, well, I’m worried about consequentialism in other contexts or something, it means that when it comes to thinking about the long-run future, we can’t have a harm-benefit asymmetry. And that’s important. You know, consider a carbon tax or something. What level of carbon emissions should we try to get to? Well, the economist says, “There’s some social optimum, and if we were to tax carbon beyond that, then the harm to ourselves would outweigh the benefits to others”. But if you’ve got this harm-benefit asymmetry, you need to go further than that, because I’m just benefiting myself by burning fossil fuels, but I’m harming someone else. And if I’ve got this harm-benefit asymmetry, I need to get the amount of carbon we emit as a society not just down to some low level that would be guaranteed by a significant carbon tax, but actually down to zero. So it really does make a difference, I think, for how we think about the long run.
Robert Wiblin: Yeah, interesting. Though people who think that it’s not okay to harm people usually do have all kinds of exceptions. So if I say something mean to you that is true, and I think it’s actually good, good for the world that you know this bad thing that you’ve done, but it’s going to make you feel sad, most people don’t think that’s impermissible. Most people don’t think it’s impermissible to drive just cause they’re polluting. Though maybe it’s more questionable in the second case.
Will MacAskill: Yeah, you at least need to have some sort of explanation of why, why it’s not wrong to harm. Perhaps there’s some implicit social contract that we can all drive and everyone benefits. Perhaps you’re assuming the other person consents. If you’re a surgeon–
Robert Wiblin: Yeah, but in that case you’re not harming them overall. If we really thought that you could just never take actions that left others worse off… but that’s maybe where we’re going here. But even just on a normal level, you’d be like, “Well, you can’t give your colleague negative feedback; you can’t end a relationship that you really hate”. All kinds of actions would obviously just be prohibited, even though they’re kind of good for the world.
Will MacAskill: Yeah, I mean, I think in those cases there’s a couple of things to say. One is almost all nonconsequentialists would say it’s still about weighing the benefits and harms. So if the benefits are great enough, then it’s okay to inflict some harms, especially if the harms are small. The second would be that, yeah, not all harms count. Perhaps just negative feedback or the harm of, you know, having your heart broken is just not the sort of harm that counts, morally speaking. For this argument, you know, you’re killing people. That’s certainly the sort of harm that counts, morally speaking. And then the third thing I think would just be, I think there might be lots of things that one implicitly signs up for. So if you take a job, I think you know, you’re implicitly agreeing to get negative feedback if you’re not doing well enough. If you enter a relationship, as part of that, you’re understanding that you may be broken up with. And that of course would be okay because consent can make harms permissible.
Robert Wiblin: Okay. Interesting. Although I guess in the pollution case, you might think, “Well you just can’t drive cars cause it’s causing harm to total strangers far away who’ve never consented”.
Will MacAskill: I mean, yeah, I’m actually–
Robert Wiblin: I guess that doesn’t sound so crazy to me to be honest.
Will MacAskill: Yeah. It doesn’t sound so crazy. I mean, perhaps they say the harms are small enough, perhaps if the pollution is just within your own country, then there’s a kind of implicit social contract.
Robert Wiblin: Or what about the person who voted against the contract?
Will MacAskill: It gets complicated.
Robert Wiblin: Yeah, this is all a bit of a big diversion. So what kinds of moves are available to someone who wants to say, “Well, I do believe in the harm-benefit asymmetry, but nonetheless I don’t want to buy this paralysis thing”? How can they find some way of escaping the conclusion of paralysis?
Will MacAskill: Great. I think there are some ways that don’t work and some that might. I’ll start with the ones that don’t work. One thing you might be inclined to say is that when the consequence goes via the act of another person, it doesn’t count. So in these cases, when I do some action, it’s via other people’s actions that the harm is ultimately committed.
Robert Wiblin: So it’s mediated now by someone else’s choices.
Will MacAskill: Yeah, exactly. You might think, “Oh, that absolves me of any kind of responsibility”. But just intuitively, imagine you’re selling arms to some dictatorial regime, knowing that the regime is going to use them to kill minorities in the country. The fact that the harm is mediated by the dictator and their armies doesn’t seem to absolve you of the guilt of selling the arms to that dictator. So I think that kind of initial response just doesn’t generalize. It doesn’t seem like something we would actually want to endorse.
Robert Wiblin: Yeah. Do you think that when we dive down and think about concrete cases, where you take an action and very foreseeably it’s going to cause or allow someone else to do a lot of harm, in general we would reject that as absolving you?
Will MacAskill: I think so. Yeah. I think so.
Robert Wiblin: So what else might someone try?
Will MacAskill: Well, one thing someone might try is saying that when the consequences are sufficiently causally distant from you, they don’t matter.
Robert Wiblin: Causally distant in what sense? Like lots of steps?
Will MacAskill: I mean there could be various ways you could unpack it.
Robert Wiblin: Lots of different actors in the meantime.
Will MacAskill: Yeah, perhaps various steps. But again, I think this just isn’t intuitive. Imagine you see that someone has built this incredibly complex machine, a Rube Goldberg machine (that’s the word I was looking for). It has all sorts of levers. In fact, it’s just this box; you don’t even know how complex it is inside, and it could be indefinitely complex. But the input is that someone pushes a button, and the output is that someone else dies. Well then, it just seems completely irrelevant how causally complex the interior workings are. It also seems irrelevant if it’s 20 years delayed or something. If someone knows that that person’s going to die, then pressing the button is clearly wrong.
Robert Wiblin: Yeah, or if you leave a bomb somewhere that’s not going to go off for a hundred years, but you know there’ll be someone in the house and it’ll blow up the house and people will die: the time doesn’t matter. And with the box thing, if you said, “Oh, but I feel bad pressing the button and then someone dies”, they could say, “Well, I’ll just add more levers and even more causal steps”, to make you feel better now.
Will MacAskill: Yeah, it’s really unconvincing.
Robert Wiblin: What about foreseeability? Is that another thing that people might say?
Will MacAskill: Well, the issue is that, at least once I’ve told you this argument, the harms are foreseeable. They’re just not foreseeable for any particular person. But that doesn’t seem to matter either. Imagine I’ve left a bomb in the forest: there’s no particular person that I think has been made worse off, but it’s foreseeably going to harm somebody. Well, that seems wrong. Another intuitive case, which we call the dice of Fortuna: the goddess Fortuna gives you a box with a set of dice in it. If you shake the box and the dice come up above their average value, then someone’s life is saved; if they come up below the average value, then someone is killed. And by shaking this box you get a dollar. Ought you to do it? The consequentialist will be like, “Sure! A free dollar!” But the nonconsequentialist I think should say no, and I think the nonconsequentialist would also find it intuitive that you should say no.
Robert Wiblin: To be honest, as a consequentialist-leaning person, I also find it a bit horrifying, but I don’t feel like I would.
Will MacAskill: Okay. You don’t feel like… Interesting.
Robert Wiblin: Well, I mean, I guess even just a little bit of moral uncertainty is going to trump it, ’cause the dollar is such a small amount. So yeah.
Will MacAskill: But that’s very close now to the situation that we’re actually in. Because again, you don’t know who these people–
Robert Wiblin: You’ve only got a tiny benefit, and then hundreds die and hundreds live.
Will MacAskill: Exactly. Which seems very similar to this dice of Fortuna case.
Robert Wiblin: Yeah. So I guess, why does it then seem so counterintuitive to us? We’ve thrown together the long causal chain, the mediation by others, and the unforeseeability, and maybe all of these things together are just weakening the intuition that these actions are wrong.
Robert Wiblin: I’m curious to know: do many people encounter this and say, “I actually find the paralysis conclusion kind of intuitive”? ’Cause I guess there’s some sense in which it’s intuitive: your actions are harming all of these people, so maybe you just can’t do all these things that you thought were totally normal.
Will MacAskill: Well, yeah. One of the reviewers’ comments. Some people, one person–
Robert Wiblin: You have an existence proof.
Will MacAskill: One out of eight of the editorial board has thought that. But unfortunately, I think the reviewers we’ve had have not really engaged with the thought experiment. Do they say, “No, there is some difference between the dice of Fortuna case and the case of driving to get some milk”? Or do they say that in both cases it’s actually permissible? And I really think that they’ll have to say it’s permissible in both cases.
Robert Wiblin: Is there any difference if, when you shake the Fortuna box, you can then go to work and do your normal day-to-day business? If it’s not that you’re getting a dollar, but that you’re now permitted to go and live a normal human life, then maybe it seems more permissible. And whenever you stop shaking the box, you just have to stop moving, which doesn’t sound so good. But then we’ve made it exactly the same as the other case.
Will MacAskill: Yeah, if it’s extreme enough. But it’s not exactly the same, ’cause it’s not one person; it’s hundreds of thousands of people every time. Perhaps you might think there are some things that morality can’t require of you. So you might have the view, again as a nonconsequentialist, that morality just can never require you to sacrifice your life, no matter how great the stakes. Okay, fine. But we were just talking about you doing the groceries. If you’re now at the stage where you’re going to starve to death if you don’t do the groceries, okay, at that point it’s permissible. But the vast majority of our actions are, you know, driving to the cinema or something.
Robert Wiblin: So the Jains just sit there until they die. I suppose another option would be to try to totally causally cut yourself off from any other humans: go and live as a hermit in the forest, or go to Siberia, and try to make sure you have no interactions with other humans.
Will MacAskill: So you could try to do that, but the process of doing that, buying all the canned goods and so on, would itself involve huge amounts of harm.
Robert Wiblin: This is the least bad option, perhaps, because once you managed to get far enough away… Anyway. Okay. So I think there’s at least one more attempted solution here, which is maybe the one I found the most intuitively appealing: that your going to the shops is Pareto acceptable to everyone ahead of time. If you could survey everyone on Earth and ask, “Do you personally prefer that this person not go to the shops?”, then, because it’s unforeseeable who’s going to be benefited and who’s going to be harmed, they would consent to it; they don’t see themselves as being made worse off by it. Do you want to explain this one a bit more?
Will MacAskill: Yes, I thought you explained it quite well. So in economics and philosophy, a Pareto improvement is where some people are made better off and no one is made worse off. An ex ante Pareto improvement is where there’s some gamble. So perhaps everyone enters a lottery where it costs $1 but you’ve got a 50-50 chance of getting $10. Now 10 people enter that lottery. It’s ex ante a Pareto improvement, because everyone prefers that lottery to no lottery, even though ex post it makes some people better off and some worse off. And when I drive to the shops, absolutely everybody I affect, if I could ask them, “Do you care if I drive to the shops or not?” before they know what’s going to happen, would say, “No, I’m indifferent, because it’s as likely that you will bring forward my death as that you will postpone it”.
Will MacAskill: So it’s indifferent for most people, and then you get a small benefit: the benefit of whatever you bought at the shops. So it’s better for some people and worse for none. And I think this could be a way out. But the key thing is that it takes you quite a long step towards consequentialism.
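To make the ex ante/ex post contrast concrete, the lottery Will describes can be sketched in a few lines of Python. The $1 cost, $10 prize, 50-50 odds, and 10 entrants are from the conversation; the code itself is just an illustration.

```python
import random

def lottery_outcome(rng):
    """One ticket: pay $1, win $10 with probability 0.5."""
    return -1 + (10 if rng.random() < 0.5 else 0)

# Ex ante: every entrant faces the same gamble with positive expected
# value, so each prefers entering to not entering, which makes the
# lottery an ex ante Pareto improvement.
expected_value = 0.5 * 10 - 1  # $4 per ticket

# Ex post: once the tickets are drawn, the losers are each down $1,
# so the realized outcome is not a Pareto improvement.
rng = random.Random(0)
outcomes = [lottery_outcome(rng) for _ in range(10)]
winners = [x for x in outcomes if x > 0]  # net +$9 each
losers = [x for x in outcomes if x < 0]   # net -$1 each
```

Every entrant says yes beforehand, yet once the tickets have been drawn, some entrants would have been better off with no lottery at all; that gap between the ex ante and ex post perspectives is what the rest of the discussion turns on.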
Robert Wiblin: Because now we’ve said that if you cause harm, but it wasn’t foreseeable ahead of time that a specific person was going to be harmed and would have told you not to do it, then it’s okay.
Will MacAskill: Yeah. Well, let’s just consider some thought experiments. So suppose the government is deciding: “Well, there’s this organ shortage. People are dying because they don’t have kidneys. So what we’re going to do is hold a lottery: people will be selected at random, they’ll be killed, and their organs will be transplanted to save the lives of five other people”. That’s ex ante better from everyone’s perspective: it increases everyone’s life expectancy.
Robert Wiblin: I’m with you!
Will MacAskill: Rob’s signed up already. But intuitively, from a nonconsequentialist perspective, that’s wrong; it’s impermissible. And in fact, going a little bit deeper, there’s a theorem by John Harsanyi, his aggregation theorem: if everyone’s wellbeing is structured in a way that satisfies some pretty uncontroversial axioms, and the way you aggregate satisfies ex ante Pareto, then you end up with utilitarianism. So if you endorse ex ante Pareto as a nonconsequentialist, you’re not going to go all the way to consequentialism or utilitarianism, but you’re going to get much, much closer, I think. So yes, this is a way out, but it undermines a significant other commitment that the paradigm nonconsequentialist won’t want to give up.
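For readers who want it slightly more formally, the standard (1955) statement of Harsanyi’s aggregation theorem runs roughly as follows (this gloss is the textbook formulation, not wording from the conversation): if each individual’s preferences over gambles satisfy the von Neumann–Morgenstern axioms, the social ranking satisfies them too, and the social ranking respects ex ante Pareto, then social welfare must be a weighted sum of individual utilities,

```latex
W(x) = \sum_{i=1}^{n} a_i \, U_i(x), \qquad a_i \ge 0,
```

which is utilitarian in form: once you accept ex ante Pareto plus the expected-utility axioms, aggregation by summing utilities is forced on you.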
Robert Wiblin: Yeah, so this just reminded me: I think a couple of years ago we were laughing at economists, because economists, at least in some strands of thought, are obsessed with Pareto improvements. Things aren’t good unless they’re Pareto improvements, so everyone has to be either indifferent to or better off from some policy change or some piece of behaviour.
Robert Wiblin: But this is actually really stupid. It sounds good at first blush, but in fact it means that basically no policy change is acceptable. Every policy change, whether it’s raising interest rates, decreasing them, or keeping them the same, raising taxes or lowering them, will leave some person worse off who’d say it’s no good. So basically every policy change you could make, and indeed keeping the policy the same, is impermissible. And even going out and buying things is no good, because the other people who would have wanted to buy the thing, whose price you drove up, are predictably worse off ex ante. So there are no economic actions you can take, or very few, that would actually be Pareto improvements. There’s this funny obsession with the concept, and then a total forgetting about it in the actual application.
Will MacAskill: Yeah, absolutely. I never really understood the economists’ fixation with this. I mean, definitely, if you’ve got some distribution of wellbeing and there’s a Pareto improvement available, take it. That’s always great. But it just almost never applies. It is one of these things, though, where it means you don’t need to make comparisons of wellbeing between people, ’cause if it’s good for some people and not worse for anyone else, then you know it’s a good thing to do.
Robert Wiblin: An extra funny thing here is that typically they seemed concerned with ex ante Pareto improvements, so you’d have to ask ahead of time: is anyone foreseeably worse off? But if you’re not willing to make interpersonal comparisons of utility, what actually matters is ex post Pareto improvement. Even if everyone expects to be better off, one person might happen to be worse off after the fact. If you can’t do interpersonal comparisons of welfare, and welfare is what you care about, that could invalidate the whole thing. Say we’re playing some kind of lottery where each person’s welfare goes up 10 with 90% probability or goes down one with 10% probability. Everyone’s in on this gamble ahead of time because it raises their expected welfare. But if one person’s welfare does go down one, and we’re not willing to say whether that loss for that person is more or less than the points gained by the other people, because we’re just not willing to compare between people, then it seems like we just can’t say whether the outcome was an improvement or a worsening.
Will MacAskill: Yeah. So it is the case that, whatever happens, we know the outcome is not going to be better. In fact, it’s neither better nor worse, because we can’t make the comparison: it’s incomparable. So it’s strange to say, “Yes, we should do action A over action B, even though I know for certain that whatever happens, action A is not going to be better than action B, not going to be worse, and not going to be equally as good”. So it really does seem like they’re blurring together the argument for ex post Pareto, where I do have an ordering, where I can say that some outcome is better than some other outcome, with a different view, which might be something like presumed consent: if it’s an ex ante Pareto improvement, you can appeal to the fact that everyone would want it. And then you’ve got some extra principle saying that if everyone would want this thing, if I could ask them, then it’s okay. That’s actually quite a different sort of justification from a purely welfarist one.
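The gamble Rob describes has the same shape and can be sketched the same way. The +10/-1 payoffs and the 90%/10% probabilities are his numbers; everything else here is illustrative.

```python
import random

# Each person faces the same gamble: welfare +10 with probability 0.9,
# welfare -1 with probability 0.1.
P_WIN, GAIN, LOSS = 0.9, 10, -1

# Ex ante: each individual's expected welfare change is +8.9,
# so everyone consents to the gamble.
expected_change = P_WIN * GAIN + (1 - P_WIN) * LOSS

# Ex post: with 100 people, almost surely someone ends up worse off.
rng = random.Random(42)
changes = [GAIN if rng.random() < P_WIN else LOSS for _ in range(100)]
someone_worse_off = any(c < 0 for c in changes)

# Without interpersonal welfare comparisons, we cannot say whether the
# losers' -1s are outweighed by the winners' +10s, so the realized
# outcome is neither better nor worse than the status quo: incomparable.
```

This is Rob’s point: a purely ex ante justification (“everyone would consent”) and a purely welfarist ex post comparison come apart as soon as one person draws the short straw.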
Robert Wiblin: Okay. Let’s come back to the paralysis argument. That was a big diversion. So you think the Pareto argument is the most promising. What were you saying the biggest weaknesses are?
Will MacAskill: Well, the biggest weakness is that you have to give up on other parts of the nonconsequentialist’s commitments, like what to say about policies that are ex ante Pareto improvements but involve killing one person to save five, and so on. They involve you doing all sorts of horrible things.
Robert Wiblin: And that stuff might be okay.
Will MacAskill: Yeah, that stuff would be okay too. So it’d be a big move towards kind of more utilitarian consequentialist thought.
Robert Wiblin: So it seems like another angle people might take would be to play with the act/omission distinction here: to claim that these things you’re doing are, in fact, not actions. Is that right?
Will MacAskill: Yeah. So you could try to develop an account of the act/omission distinction (there are many different accounts) on which all of these long-run consequences of yours are omissions rather than actions.
Will MacAskill: One account that would put me sitting motionless and me going to the shops on a par is Jonathan Bennett’s account. But its conclusion is actually that all of these consequences are actions. So rather than there being some omission available to me, like sitting at home not doing anything, that wouldn’t actively kill all these people in the future, it says that sitting at home is itself an action.
Robert Wiblin: Well, that seems right. It’s tempting to say that just staying at home is an action too, one that has benefited and harmed people, and the mere fact that you were still doesn’t really get you off the hook. So in fact everything is prohibited: there’s no one privileged thing that counts as an inaction, and so maybe it all just cancels out.
Will MacAskill: That’s right. So on that kind of Bennett-esque view, the idea is that something’s an action if, out of the space of all possible ways in which you could have moved, it occupies quite a small portion of the space.
Robert Wiblin: It’s kind of specific.
Will MacAskill: It’s very specific. At least when you’re explaining whether some event happened.
Robert Wiblin: Isn’t any action going to be also like a very narrow range out of all of the things that you could do, like including sitting still?
Will MacAskill: Well that’s why you need this explanation of an event.
Robert Wiblin: The simplest explanation of the event is one that says you did this specific thing, rather than that you didn’t do some other things.
Will MacAskill: Yeah, the simplest way of explaining why the event happened. So if you’re drowning in the shallow pond, what are all the things I could do that would still result in you drowning? Well, I could dance, I could give you the finger, I could just walk away. There are tons of actions that would still result in that consequence. Whereas if I’m strangling you, there’s a very narrow range of actions that would result in that consequence.
Robert Wiblin: Suddenly getting up and dancing would stop the strangling; almost any other action would prevent me from being strangled.
Will MacAskill: Yeah. And so on this view, it is the case that all your actions are causing huge amounts of harm, so you’re actually in a kind of moral dilemma: everything you do is in some sense very wrong. But perhaps the nonconsequentialist can say, “Well, that still doesn’t mean you should engage in paralysis. Perhaps you should just do the best thing”, because they’ve still got the same ranking between actions, even if all of those actions inflict huge amounts of harm. I do think the more natural thing to say, though, is that everything’s wrong then: you’re in a moral dilemma.
Robert Wiblin: Would any of them say that everything’s prohibited, but some things still have better consequences? But then I suppose they’re just back at consequentialism.
Will MacAskill: Well, you could be a nonconsequentialist and just deny the possibility of moral dilemmas. So you could just say, “It’s never the case that all your actions are wrong. There has to be at least one action that’s the best you could do in the situation you were given”. So, you know, it’s like Sophie’s Choice. You can either kill one child or both children.
Robert Wiblin: So you’re saying there’s degrees of prohibition, perhaps, and then like some things will be less prohibited than others and so that’s the thing that you should do.
Will MacAskill: Yeah, or at least that in any situation, the least wrong thing you can do is permissible.
Robert Wiblin: I see, okay.
Will MacAskill: So that could be a way out.
Robert Wiblin: Although that might leave you with a very narrow range of things that are permissible. If only the least prohibited thing is permissible, it removes this nice, appealing aspect of deontology in the first place: that it gives you a greater freedom of action, that you’re not obliged to do one single thing.
Will MacAskill: Yeah. Perhaps you have to be an altruist or something. But then the second thing is that this account, Bennett’s account, is normally criticized precisely because it makes inaction or being motionless too much like an action.
Robert Wiblin: Yeah, you give this nice example where it loses its appeal.
Will MacAskill: Yeah, exactly. So imagine someone is just lying on a bed, daydreaming disinterestedly, and if a little bit of dust falls into this electrical circuit, it will set off some gadget that will kill somebody. But they just keep lying there. Bennett’s account would say, “Oh, that person killed the person who was killed by the gadget”.
Robert Wiblin: Because the lying there was such a narrow range.
Will MacAskill: Yeah, because if they’d performed any action, it would’ve changed the air currents, the dust wouldn’t have landed on the electrical circuit, and the person wouldn’t have been killed. But most people intuitively think, “Oh no, that is inaction; that’s an omission”. And intuitively, if I’m just staying home all day, there is something to the thought that I’m acting less than if I’m going out into the world and making all these changes.
Robert Wiblin: Yeah, I gotta say, in that case I do feel like the person who’s just lying still and allowing the dust to fall is equally culpable, as if they’d killed them. But maybe I’m just the kind of person who’s inclined to say that, and that’s not how most people would react.
Will MacAskill: Yeah. I think it depends a bunch on whether they’re straining to… Are they just doing it because they’re not thinking about it very much, or are they really trying? So intention, what someone’s intending to do, I think affects our judgements in these cases too.
Robert Wiblin: So I remember on the episode with Ofir Reich we were both saying that we didn’t really see any intuitive appeal in the act/omission distinction; we weren’t really sure there was a meaningful distinction there. There’s this whole cottage industry of trying to make sense of acts and omissions. How can something not be an act? It’s so weird. Even sitting still, isn’t that an act? So when you have so many papers trying to rescue this concept, you do have to wonder whether the concept actually makes any sense.
Will MacAskill: Well, I mean, it’s an argument that’s been made: try to analyse this, and at best you get something that’s a really kludgy-looking, really complex principle, in a way that might make you skeptical that it’s a fundamental principle of morality.
Robert Wiblin: So we’ve got this idea that acting to harm people is bad, and then we have to create this big structure of what an action is, which, I guess, is that the explanation of the event picks out some very narrow set of all the movements you could have made. Does that carry the intuition for many people? Is that what they meant by an action? Do they really think that whether something occupied a narrow part of your space of options was the key to whether it was a harm?
Will MacAskill: Yeah, so I’m worried this is incorrect, but I think it’s the case that Bennett, who created this distinction (and I think it’s reasonably good as an analysis), then has the view, “Well, if that’s what it is, then obviously it’s not morally important”.
Robert Wiblin: Okay. Oh, right.
Will MacAskill: So he actually does have the conclusion, “Oh, now we’ve analysed it, we see that this just doesn’t make much sense. There are other things that are important”. Whether you intended to kill someone, for example, matters for punishment, because we want to punish people who intend to kill others, but not those who kill by accident. And good evidence for whether someone intended to kill is whether they took a particular course of action: a very narrow set out of the space of all possible behaviors they could have engaged in.
Robert Wiblin: It seems like we should be able to contrive an example where half of all your possible actions would cause someone to die, so it’s not that narrow a set. In that case the account would say it wasn’t an action, even though the death is very foreseeable and you just shouldn’t allow it to happen.
Will MacAskill: Well, there’s a famous case of an uncle who wants to kill his baby nephew, because he’ll get an inheritance by doing so, and two variants of the case. In the first, the uncle comes in and drowns the baby. In the second, the uncle comes in and sees that the child has already slipped and is drowning, and just waits over the child, hand at the ready in case the child stops drowning. But in fact he doesn’t need to intervene: the child drowns. And most people tend to think, intuitively, that there’s just no difference there. That’s another way of putting pressure on the idea that the act/omission distinction is actually the important thing here.
Robert Wiblin: Yeah. Interesting.
Will MacAskill: I think there’s one final way out for the nonconsequentialist, which is that if your actions are doing enough good, then plausibly they’re permissible. And that might well be the case if you’re aiming to benefit the very long-run future. So it might be that your options boil down to sitting at home doing as little as possible, or instead going out and trying to make the long-run future go as well as possible.
Robert Wiblin: Because then you’re doing so much good as to offset the prohibition.
Will MacAskill: That’s right. So on the one side of the ledger, I’m now not driving just to get some milk; I’m driving to do some important altruistic thing. The negative is that you’ve killed hundreds of thousands of people; the benefit is that you’ve also saved hundreds of thousands of people. And you’ve not intended to kill those people, so it’s not a classic case of harm, like literally killing one person to save five others, or murdering someone you don’t like. So there are all the offsetting people you’ve saved, and also potentially this astronomical amount of good that you’re doing by engaging in longtermist activities.
Robert Wiblin: How very convenient. It’s almost as if you were trying to aim to convince people of this all along.
Will MacAskill: Rob, I don’t know what you’re talking about… I actually think the nonconsequentialist should probably either take this as a challenge, where they need to alter their account of acts and omissions, or perhaps be willing to go one step in the direction of consequentialism and accept ex ante Pareto.
Robert Wiblin: Okay. Yeah, makes sense. It seems like whenever, in moral theories, you try to create asymmetries or nonlinearities, you’re at risk of someone pointing out some odd case where they produce super counterintuitive conclusions. Do you think this is a general thing?
Will MacAskill: Yeah, absolutely. I mean, it’s quite striking in moral philosophy how many of the people who are consequentialists are classical utilitarians, which in a sense is a very narrow position within consequentialism. And I think the underlying explanation is that people who, as a matter of methodology, are sympathetic to the idea that theories should be simple tend to prefer linearities over asymmetries, and continuities over discontinuities. That same principle, applied over and over again on a variety of issues, ends up leading you to classical utilitarianism.
Robert Wiblin: Yeah. Interesting. Why classical? Why focused on hedons rather than something else?
Will MacAskill: Well, I think that in the case of hedonistic utilitarianism, you have a clear boundary between the things that are of value and the things that aren’t: namely, the things that are conscious. And independently you would think that’s a pretty important dividing line in nature, between the conscious things and the non-conscious things. If you’re a preference utilitarian, though, well, does a thermostat have a preference for being above a certain temperature? What about a worm, or a beetle? Where do you draw the line? It’s very unclear. Similarly if you’re an objective list theorist, so you think flourishing and knowledge matter… I mean, does a plant have knowledge? It can flourish; it has health. Why does that not count? And normally you’re inclined to say, “Oh, well, only for those entities that are conscious should you count whatever satisfies their preferences, or this thicker set of goods”.
Robert Wiblin: But then we’re back at a hedonistic account. Why don’t we just say the whole thing was hedons all along?
Will MacAskill: Yeah, exactly. Why is it this kind of weird disjunctive thing?
Robert Wiblin: Saying “if you have consciousness, then a bunch of these non-conscious facts matter” is less intuitive than saying “if you have consciousness, then the consciousness matters”.
Will MacAskill: Yeah, exactly.
Robert Wiblin: Yeah, interesting. Okay.
The case for strong longtermism [0:55:21]
Robert Wiblin: So let’s just talk quickly about this other paper you’ve been working on with Hilary Greaves, now called “The Case for Strong Longtermism”. We’ve talked about longtermism a lot on the show, and no doubt it will come up again in future, so we probably don’t want to rehearse all these arguments again or our listeners will start falling asleep. Is there anything new in this paper that people should consider reading it for?
Will MacAskill: Yeah. So in the paper we distinguish longtermism, in the sense of just being particularly concerned with ensuring the long-term future goes well (analogous to environmentalism being particular concern for the environment, or liberalism being particular concern for liberty), from strong longtermism, the stronger claim that the most important feature of our actions is their long-run consequences. The core aim of the paper is just to be very rigorous in the statement of that claim and in its defense. So for people who are already very sympathetic to this idea, I don’t think there’s going to be anything novel or striking in it. The key target is the various ways in which you could depart from a standard utilitarian or consequentialist view that you might think would cause you to reject strong longtermism; we go through various objections one might have and argue that they’re not successful.
Robert Wiblin: Are there any kind of new counterarguments in there to longtermism?
Will MacAskill: I think there’s an important distinction between what philosophers would call axiological longtermism and deontic longtermism. Where, is longtermism a claim about goodness, about what the best thing to do is, or is it a claim about what you ought to do? What’s right and wrong? So if you’re a consequentialist, those two things are the same. The definition of consequentialism is that what’s best is what’s like–
Robert Wiblin: Yeah. No wonder this distinction has never seemed that interesting.
Will MacAskill: Yeah. But you know, you might think–
Robert Wiblin: Something could be good but not required or–
Will MacAskill: Yeah. So perhaps it’s wrong for me to kill you to save five, but I might still hope that you get hit by an asteroid and five are saved, because that would be better for five people to live than one person to live, but it’s still wrong to kill one person to save five.
Robert Wiblin: So axiology is about what things are good and the deontology thing is about like the rightness of actions?
Will MacAskill: Yeah. Normative theory or deontic theory.
Robert Wiblin: Okay. So what’s the two different longtermist things here?
Will MacAskill: So just axiological strong longtermism and deontic strong longtermism.
Robert Wiblin: Again, it’s just about like consequences versus like what actions are right.
Will MacAskill: Exactly.
Robert Wiblin: Alright. So we’ll stick up a link to that paper for anyone who wants to read it.
Longtermism for risk-averse altruists [0:58:01]
Robert Wiblin: Let’s move on to another feistier paper that you’ve been working on with, in this case, both the previous coauthors, Hilary Greaves and Andreas Mogensen, called “Longtermism for risk averse altruists”, which I guess is in this like long list of papers you’re writing which just makes me feel like I was right all along. These incredibly elaborate things that you’re going through to just make me feel very smug about how I was on the ball early on.
Will MacAskill: I mean, we’ve really not been aiming to just write a lot of papers defending longtermism. We all know the total utilitarian case for longtermism; the question is what happens if you modify some of these premises. And in some cases you might think, “Oh, this really does make a difference”. So risk aversion is one case. I think it’s quite intuitive, or certainly something people say. Let’s say you’re deciding between funding some existential risk reduction intervention or funding some global health program, and you might think, “Well, I know that I can do some amount of good funding the global health program, whereas this existential risk thing seems very uncertain”. You might acknowledge that if you are successful, the consequences would be really, really big and it’d be really great, but it’s just so unlikely that you want a safe bet, because you’re risk averse.
Robert Wiblin: Yeah. I got this exact objection in an interview a couple of months ago and it was like, I mean I tried to explain why it was wrong, but it’s like it’s not super easy in words.
Will MacAskill: No, I mean the paper itself, even though the core of the paper came from me, I got asked about it in Cambridge and I was trying to explain it and I had to say, “Yeah, sorry, you’re just going to have to look it up cause I’ve forgotten myself”. So it actually does get quite tricky quite quickly.
Robert Wiblin: So the setup here is that there’s some people doing something that seems very reliable and safe, like distributing malaria nets, and you’ve got these other people trying to prevent nuclear war. And there’s this sense in which isn’t it such a risky thing to do with your career to try to prevent nuclear war, cause you’re almost guaranteed not to succeed, whereas you will distribute the bed nets and some lives will be saved. On the other hand, you can flip it around and say, well, there’s this other intuition that if you’re risk averse, shouldn’t you be reducing big risks? So we’ve got this bit of tension between intuitions, and I guess you try to more rigorously define risk averse about what, and then see what that actually leads to?
Will MacAskill: Terrific. So the first thing to distinguish is: is what I care about myself making a difference, or is what I care about that good things happen? If what I care about is myself making a difference, then absolutely, a standard account of risk aversion would say that you prefer the guarantee of saving one life over, let’s say, a 1% chance of saving 110 lives in the nuclear war example. Obviously it’s a smaller probability but a larger amount of good. However, as an altruist, should you care about whether you make the difference? And no, I think–
Robert Wiblin: It’s like, “Yeah, hundreds must die so that I can know that I made a difference”. It’s kind of the classic donor-focused altruism.
Will MacAskill: Yeah, exactly. I mean, it’s quite antithetical to what effective altruism is about. I think in “Doing Good Better”, I mentioned the example where a paramedic is coming to save someone’s life. They’re choking or something, they need CPR, and you push the paramedic out of the way and start making the difference yourself. So in order to make this clearer, just imagine you’re going to learn about one of two scenarios. In the first scenario, some existential catastrophe happens, but you saved dozens of lives yourself. In the second scenario, no existential catastrophe happens and you don’t do any good yourself. And you’re just going to find out which of those two things is true. Which should you hope is the case? Well, obviously you should hope that no existential catastrophe and you not doing anything is what happens. But if those are your preferences, then your preferences aren’t about you yourself making the difference. In fact, what you actually care about is good stuff happening.
Robert Wiblin: Yeah. Although you might say there’s some trade off, where it’s like people would be willing to accept a somewhat worse world in order to have had a bigger impact themselves. I mean maybe people might concede that they have some like selfish desire to like be able to reflect on their own life and feel proud about what they did, even if the world is worse.
Will MacAskill: Okay, great. Yeah, so it could be that you’ve got what we might call impure altruism. So if in part what’s driving me is a meaningful life or something and maybe that’s tied to how much good I actually do and there’s diminishing returns to the meaning I get from doing more good. So a life where I saved 10 lives is perhaps just as meaningful as a life where I have saved a hundred lives or very almost as much.
Robert Wiblin: We’re thinking of good done like income or something like that and it’s like, I want to make sure that I make a decent amount of income and I want to make sure that I do some amount of good.
Will MacAskill: Yeah, and again, if people are going out and actually using their money for good things and they’re like, “Yeah, well, it’s a mix of motivations”, that’s fine. It’s the case for all of us. But if we’re trying to defend this as, “No, this is actually the altruistically justified thing”, if we’re doing the moral philosophy thing, that’s a different argument. And then it seems very hard to justify the idea that, altruistically, what you should care about is how much good you personally do. Instead, what you should be caring about is just how much good gets done.
Robert Wiblin: Yeah. Is there anything that could be said in defense of that view from a philosophical stance? I suppose non-realism or nihilism or just like giving up.
Will MacAskill: I mean, well then if it was just nihilism then there’s no case where you ought to do something. Yeah, I think it’s pretty hard. I don’t know anyone who’s defended it for example.
Robert Wiblin: I see. Okay. So we’re going to say if what they were saying is that they’re risk averse about their personal tangible impact themselves, we’re going to say, “Well, that’s all well and good, but it’s not actually something that’s defensible in moral philosophy”. What about if they have a different understanding of risk aversion? So they’re thinking something more about like risk aversion about the state of the world perhaps.
Will MacAskill: Yeah. So now, instead, you take this impartial perspective, which is the perspective I think you should take as a philanthropist. You’re just looking at different ways the whole world could go, and you’re risk averse with respect to that. So for the whole world, I would prefer a guarantee of that world getting to, let’s say, a hundred units of value, whatever that unit is, rather than a 50% chance of 210 units of value and a 50% chance of zero units of value. And that again is a perfectly coherent view. It’s not a utilitarian view, but it’s perfectly coherent. And there’s a couple of different ways you could cash it out. So you could say that, well, that’s because value has diminishing returns. Just in the same way as money has diminishing returns for you, somehow the total amount of value also has diminishing returns. People often don’t like to say that.
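One way to see the diminishing-returns reading of Will’s 100-versus-(210-or-0) example is with a toy concave function. A minimal sketch, where the square-root “value of value” function is my own illustrative assumption, not anything from the paper:

```python
from math import sqrt

# Toy sketch of the 100-vs-(210 or 0) example above. The square-root
# function is an illustrative assumption, not from the paper.
u = sqrt

p = 0.5
ev_safe = 100                            # guaranteed 100 units of value
ev_gamble = p * 210 + (1 - p) * 0        # 105 units in expectation

eu_safe = u(ev_safe)                     # u(100) = 10.0
eu_gamble = p * u(210) + (1 - p) * u(0)  # ~7.25

print(ev_gamble > ev_safe)   # True: the gamble has higher expected value
print(eu_safe > eu_gamble)   # True: yet diminishing returns prefer the guarantee
```

So a guaranteed 100 beats a 50-50 shot at 210, exactly the preference Will describes, once total value itself has diminishing returns.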
Robert Wiblin: I mean it has some odd consequences because then if there’s a flourishing alien civilization somewhere far away, I guess the world matters less because it’s like they’ve added all this welfare to the universe and so now all of our actions have just become less morally significant. I don’t think that that’s very intuitive.
Will MacAskill: That’s right, yeah. So the technical term for this is that it’s non-separable. So in order to decide what I ought to do, I need to know not just about the thing that’s right in front of me, but also just how many aliens are there, how many people in the past?
Robert Wiblin: Yeah, you might find that there were more people in the past and this then makes you more risk averse or something.
Will MacAskill: Yeah, it would make you take actions that are more risk averse seeming.
Robert Wiblin: Yeah. I guess it depends on the shape of the risk averse curve.
Will MacAskill: So that’s one thing you can do. But more promising is a different decision theory which just cares about risk. So, and I’m going to hope that I cite this correctly, I think it’s “Rank-dependent utility theory” by Quiggin, and then a view that is formally the same but with a different philosophical interpretation, which is Lara Buchak’s “Risk-weighted expected utility theory”. The idea is just that you care about risk, and so each little increment of probability onto a possible outcome doesn’t count the same. So perhaps you take the square of the probability when you multiply it by an outcome’s value. But here again, you don’t necessarily end up with the conclusion that you should prefer the bed nets over preventing nuclear war. And that’s because there are two sources of uncertainty that go into whether you’re going to do good by trying to prevent a nuclear war.
Will MacAskill: Like there’s two ways you can fail to do that, to have an impact there. One is if there was never going to be a nuclear war; in fact we were going to achieve this glorious future whatever you did. A second way in which you could fail is if there was going to be a nuclear war, but your actions are ineffective towards it. And if you’re just a standard expected utility maximizer, that difference doesn’t matter at all. But it does, surprisingly, if you’re risk averse. And the way to see that is because supposing we get to this glorious future, the future’s really good, and then it also has this extra benefit which is that someone’s life in a poor country was saved.
Will MacAskill: Well, you’re adding a bit of good onto what’s already an extremely good outcome and so that doesn’t contribute very much.
Robert Wiblin: Ah, okay. Nice! I haven’t read this paper listeners if you couldn’t tell, perhaps.
Will MacAskill: I thought Rob was faking it. You looked so interested.
Robert Wiblin: I was just like the penny slightly dropped.
Will MacAskill: And I thought is Rob acting or not?
Robert Wiblin: You’d never be able to tell, Will.
Will MacAskill: Yeah. Whereas if instead what’s happening is that we’re almost certainly doomed and it’s just that you could make the difference, then when you add that little bit of benefit you’re actually adding on to a bad world where we have very little total value and that contributes a lot.
Robert Wiblin: So it’s more valuable to save the life in the case where extinction is highly probable because the world as a whole is worse and so saving that person’s life is adding more moral value in some sense.
Will MacAskill: Yeah, exactly.
Robert Wiblin: Interesting. Okay. Then how does that play out? I guess this is seeming like it’s going to be a bit of complicated math here to see exactly what this pans out to.
Will MacAskill: Yeah. In any realistic situation, if I’m unsure about whether I’m going to have an impact by trying to prevent a nuclear war, I’m going to be unsure for both reasons: both because maybe there’s just not going to be a nuclear war, but also because maybe anything I do is going to be ineffective. And we do a little bit of maths, throwing in some plausible numbers. So suppose you have quite extreme risk aversion, where rather than multiplying the probability of an outcome by its value to contribute to expected value, you take the square of the probability and multiply that by the value. That’s actually quite an extreme risk averse view.
Will MacAskill: Then if you think it’s more than 50% likely that we’ll get some really good future. If you’re risk averse that ends up favoring extinction risk reduction. But yeah, it isn’t striking that risk aversion can make you favor the nuclear war. And then one thing I should say on this is that this has all been premised on the idea that the future is just either neutral in value if we continue to exist going into the future, it’s either neutral or positive.
Robert Wiblin: It seems like it should be stronger–
Will MacAskill: But if we include the possible negative outcomes, the case for risk aversion caring about longtermism gets stronger.
Robert Wiblin: But inasmuch as there’s a possibility that the world as a whole is just really bad, then doesn’t that make the incremental benefit of saving one extra life more valuable? Though I suppose it depends on whether the distance between the best and worst cases that you can try to bring about in the long term is very large.
Will MacAskill: Yes. So if it was the case that you were just choosing between reducing extinction risk and saving someone from malaria, then you’re right that that increment of benefit would count for more within this terrible future world, and so that probably would favor bed nets over reducing extinction risk, for sure. But there’s something else you could probably do, which is try to reduce the chance of terrible futures, which is still the longtermist thing to do. And that’s going to be overwhelmingly important. Any tiny decrease in the variance of the value of the future, or improvement in the worst case scenario, is going to be overwhelmingly important, and more important the more risk averse you are.
Robert Wiblin: I see. Okay. So this actually wasn’t the argument that I, for example, made when I was asked this question a couple of months ago. The reason that I gave for saying that risk aversion doesn’t favor the bed net distribution was just that this person is ignoring almost all of the effects of the actions. They’re thinking, “Oh, it’s safe cause I just saved a few lives”. But if they think about all of the ripple effects, all of the long-term indirect effects of that action, then in fact what they’ve done is just incredibly unpredictable, and maybe it has very good effects or very bad effects. In fact, it’s a very risky action in a sense, just as trying to prevent nuclear war is very risky, because there’s such a high chance that you’ll fail and then this small chance of a big benefit. Do you see where I’m going with that?
Will MacAskill: So when I heard you say that, I remember thinking it was the response I would also make. But I think it’s a different issue, because trying to prevent nuclear war also has all of these long-run unpredictable effects. So they both are risky in that sense.
Robert Wiblin: It’s like whether they create good or bad outcomes for both of them is extremely hard to foresee and there’s a huge distribution and so they’re both very risky and it’s not the case that one is less risky in that sense than the other, and so that kind of cancels out. So just go for the highest expected value.
Will MacAskill: Well I was saying they’re both very risky, but I think you can still say that. So let’s say there’s the foreseeable effects and the unforeseeable effects. The foreseeable effects are one life saved vs. one in a million chance of huge value and then you can say, “Well there’s a certain amount of riskiness that comes from the unforeseeable effects, the same from the nuclear war or from the bed net distribution, but then we can just isolate the foreseeable effects which are less risky for the–”
Robert Wiblin: Yeah. Isn’t there a sense in which the work on the nuclear war thing is less risky cause there’s a 999,999 out of a million chance that you’re going to accomplish nothing and that nothing will happen.
Will MacAskill: But you wouldn’t, for any given action, you won’t accomplish nothing. You’ll have all of these unpredictable effects.
Robert Wiblin: Yeah. Maybe I’m just not thinking about this right. But it seems that kind of cancels out to me, but I guess you’re saying that the unpredictable effects are even larger in the nuclear case.
Will MacAskill: Well no, I’m saying that for both the bed net case and the nuclear case, for anything we do and no matter what pans out, we’ve got this stream of unpredictable effects. And let’s say we don’t have any reason for thinking the unpredictable effects from bed nets will be bigger or higher variance than those from nuclear work. So in all cases we have this stream of unpredictable effects, and then no matter what way the world is, we get plus one benefit of a life saved from bed net distribution across all the different possible states. In the nuclear war case, we get plus 1,000,000 in the one-in-a-million case and zero in all the others, and that is just strictly adding.
Robert Wiblin: I see. Yeah. Maybe there’s a sense in which with so much uncertainty or such a wide distribution to begin with, the incremental riskiness or the incremental variance that’s added by the foreseeable effects doesn’t feel as large. Maybe that’s something that’s going on?
Will MacAskill: Yeah. I’m now trying to remember exactly what the question was, because if the argument is, “Oh, well, I want to do what I’m confident does good”, I’ve been cashing out the underlying intuition in one way, in terms of risk aversion. But if instead it’s, “No, I want to do something where I know I’m going to do good”, well then that argument doesn’t work–
Robert Wiblin: Because there’s this high chance that they would do harm accidentally.
Will MacAskill: Yeah, exactly. There’s just so many effects and picking out one aspect does not reduce the uncertainty very much at all and I think that’s intuitively quite important. Like perhaps also from a nonconsequentialist perspective as well where we are responsible for all of those effects and actually if we’re just saying, “Well yes, we care about long-run future, we hope it goes well, and yes also these activities that have short-run benefits will have very long-run effects, but no, we’re not going to worry about that”. That seems wrong.
Are we living in the most influential time in history? [1:14:37]
Robert Wiblin: Alright. Let’s push on to this very interesting and kind of provocative blog post you wrote on the Effective Altruism Forum back in September called, “Are we living in the most influential time in history?”. In the post, you laid out a bunch of arguments both in favor and against the idea that we’re living at a particularly important time in history, and I guess kind of ended up concluding that people, at least in the effective altruism community, might be really overestimating the chance that this particular century is especially important in the scheme of things.
Robert Wiblin: So I thought this post was one of the best posts that’s been put on the effective altruism forum. Not to flatter you too much, Will. But in addition to that, I thought the comments section was amazing. There were just half a dozen comments where I’m like, these could be posts in their own right with new insights and then great responses. And people were also being extremely polite as well, even though they were disagreeing quite strongly.
Will MacAskill: Yeah, I loved it. I was really happy I managed to get Toby out of the woodwork and Carl too.
Robert Wiblin: Yeah. So I guess this is a pretty convoluted issue. We might get a little bit tangled up, but I guess it would be good to talk about what you present in the post, and then maybe work through some of the top comments that I thought were also very insightful, and how you respond to them and where things stand now.
Will MacAskill: That sounds great.
Robert Wiblin: Maybe first, what do you mean by ‘the most influential time in history’, and why does this question matter?
Will MacAskill: I do think I’m running together two slightly different ideas that are worth picking apart, and if I wrote the post again, I probably would. So one is just some intuitive sense of importance; we don’t even really need to define it. On certain views that are popular in the effective altruism community, like the Bostrom-Yudkowsky scenario that’s closely associated with them (I don’t want to claim that they think it’s very likely), there’s a period where we develop artificial general intelligence that moves very quickly to superintelligence, and basically everything that ever happens is determined at that point. Either it’s determined by the values of the superintelligence, which can then do whatever it wants with the rest of the universe, or by the values of the people who manage to control it, which might be democratic, might be everyone in the world, might be a single dictator.
Will MacAskill: And so I think, just very intuitively, that would be the most important moment ever. And in fact there are two claims. One is that there is a moment where almost everything happens, where most of the variance in how the future could go actually gets determined by this one very small period of time; and secondly, that that time is now. So one line of argument is just to say, “Well, it seems like that’s a very extraordinary claim”. We could try and justify that. There’s a question of spelling out what extraordinary means, but insofar as it’d be a really extraordinary claim, we should have low credence in it unless we’ve got very strong arguments in its favor. Then there’s a second understanding of influential that is very similar, but again different enough that maybe it’s worth keeping separate, which is: the point at which it’s best to directly use our resources, if we’re longtermists. The question there is just how the marginal cost-effectiveness of longtermist resources varies over time.
Will MacAskill: And here again, the thought is, well, we should expect that to go up and down over time. Perhaps there are some systematic reasons for it going down. Perhaps there’s some systematic reasons for it going up. Either way it would seem surprising if now was the time where most longtermist resources are most impactful and what that question is relevant to is that it’s one part of, but not the whole of, an answer to the question of should we be planning to spend our money now doing direct work or should we instead be trying to save for a later time period, whether that’s financial savings or movement building.
Robert Wiblin: So I guess we can imagine centuries that are very intuitively important, but not important in the second sense, because, say, there was nothing that could be done. So a lot of uncertainty gets resolved, but an extra person couldn’t have made any difference.
Robert Wiblin: Say maybe we’re like definitely going use a random number generator to determine the future. There’s nothing you can do to stop that from happening. So a lot of uncertainty is resolved when the random numbers are generated.
Will MacAskill: Yeah, exactly. Or the second thing as well is just that maybe no one could do anything about it. On the second definition, I also build in that maybe we just don’t know what to do; we’re just not reliable enough. So perhaps at the turn of the agricultural revolution there were certain things you could have done that would, in principle, have had a very long-run influence, but no one at the time would have been able to figure that out, so the argument would go. In that second sense, then, I would not count that as being influential either.
Robert Wiblin: Okay. In the post, you use the term hinginess I guess to describe this second sense of importance where it’s like a person can make a big difference if they act at that time, but you haven’t used that word yet. Is that cause you’re reluctant to kind of pin people onto this terminology that maybe we want to get rid of?
Will MacAskill: Yeah, I think we haven’t settled on terminology yet. In one of the comments, Carl Shulman objected to the term hinginess because perhaps it just sounds a bit goofy, I guess. And so instead perhaps we should say leverage or something.
Robert Wiblin: Pivotalness or pivotality? Yeah.
Will MacAskill: Pivotality doesn’t really get across the idea that maybe we are really at a pivotal time, we just don’t know we are, we just aren’t able to capitalize on that.
Robert Wiblin: Okay. Alright. So to avoid entrenching some language, maybe we’ll just use “importance” in this case to describe the time when one extra person can make the biggest difference for this conversation.
Will MacAskill: Okay.
Robert Wiblin: Cause I think we need some term.
Will MacAskill: Okay.
Robert Wiblin: Cool. Quite a lot of different people have thought that this century could well be one of the most important in that sense. I guess Toby Ord is about to publish a book kind of making that claim. I guess Derek Parfit has kind of suggested that might be the case. It’s not only people in effective altruism; lots of other people make the argument that this century could determine everything. There’s climate change, war between the US and China, we have nukes now. There’s some sense in which it’s kind of intuitive. Do you want to lay out the arguments both in favor and against the hypothesis that the next hundred years is going to be especially important, or an especially good time to act to change the long term?
Will MacAskill: Yeah. Terrific. One clarification I’ll make as well is that this is not an argument for saying, for example, that existential risk is low. So the argument from Parfit and Sagan and others is, “Well, we’ve developed nuclear weapons; that’s ushered in a new time of perils where existential risk is much higher”. I could actually take objection to the argument from nuclear weapons, but I’ll put that to the side.
Robert Wiblin: We’ll come back to that.
Will MacAskill: But this would not be a particularly exceptional time if existential risk goes up and then stays up for a very long time, for many, many millennia or something. Or if it went up and then kept going up even further.
Robert Wiblin: Or it goes up and there’s nothing anyone can do.
Will MacAskill: Or if it goes up and there’s nothing anyone could do, that would also be the case. So for this to be a particularly influential time (Parfit calls it the “hinge of history”, which is why hinginess has come up), it needs to be the case that reducing existential risk is particularly high leverage now compared to other times. And I give a couple of arguments against this. One is that just on priors, especially if we assume that if we’re successful there will be a very large future, with extremely large numbers of people in it, it would be really remarkable. A priori, it would seem extremely unlikely that we happen to be the people out of all this time who are so influential, who are in fact the most influential people ever. Secondly, given that low starting prior, how good is the quality of the evidence that’s meant to move us from it? And there, there are two related issues.
Will MacAskill: One is just we might intuitively understand as the quality of the arguments, where it’s not like we have empirical observations for this. It’s not like we have deep understanding of some physical mechanism that should really allow us to update very greatly. Instead it’s kind of generally going to be informal arguments and informal models of how the future will go. Then combined with the fact that I think we shouldn’t really expect ourselves to be particularly good at or particularly reliable at reasoning about things like this. We certainly don’t have positive evidence for thinking that people can reason well about something like this. And that then makes it hard if you’ve got this very low prior to have some correspondingly large Bayes factor that is a kind of update on the basis of argument that would move us to having say a 10% credence or more that we’re living at the most influential time ever.
Robert Wiblin: Yeah. Okay. So to recap that: you’re saying the future could be very long, so there are lots of potential future people, lots of potential future centuries. What are the odds that this one happens to be the most important out of all of them? We should guess that it’s low unless we have really good reasons to think otherwise. And then you’re like, well, do we have really strong evidence? We do have a bunch of arguments for this, but they’re not so watertight that we couldn’t explain their seeming compelling to us as just being an error on our part. So maybe it seems like this century is especially important, but that’s just an illusion because we’re not very good at reasoning about this. Whenever it seems like we’ve made a good argument that this century is really important, we always have this very plausible alternative explanation: that we just don’t know what we’re talking about and we’re just mistaken.
Will MacAskill: That’s right, and if you’ve got a starting very low prior and then even somewhat unreliable mechanism to move you from that prior, well you don’t end up moving very much because rather than you being at this extremely unlikely time, it’s much more plausible, this kind of mundane explanation that actually just, we’ve made some mistake along the way. So you know, I give an example of suppose I deal a pack of cards out and you see a particular sequence. If it’s just a kind of random seeming sequence, you should update all the way from one in 52 factorial, which I think is like one in 10 to the 68 or something all the way up to high credence in that and that’s amazing.
Will MacAskill: So you really can make huge updates from some very low priors. But if I deal a set of cards in perfect order, you should conclude that well probably the pack of cards was not well shuffled to begin with. Probably it wasn’t shuffled, in fact. So you kind of question the underlying starting assumptions.
Robert Wiblin: Yeah. You've slightly jumped the gun here though. What are the object-level arguments people make that this century is especially important, so we can maybe assess: do we have a really good grip on them? Are they compelling evidence?
Will MacAskill: Great. So I think there are two different sets. I distinguish between inside view arguments and outside view arguments. The inside view arguments are, for example, the view associated most prominently with Bostrom and Yudkowsky, but also more widely promoted, that we will develop AGI at some point this century and that's the most pivotal event ever, perhaps because AGI very quickly goes to superintelligence and whoever controls superintelligence controls the future. A second way in which the present time might be particularly influential is if we're at this time of perils: we're now at the point in time where there's sufficient destructive power that we could kill ourselves, as in render humanity extinct, but before the time where we get our act together as a species and are able to coordinate and reduce those risks down to zero. So those are two… I call them inside views. The distinction isn't necessarily very tight. Then there are a bunch of outside view arguments too. So again, let's just assume that the future is very large, or at least that if we're successful, the future is very large, in the sense that there are vast numbers of people in the future.
Will MacAskill: Well, we do seem to be distinctive in lots of ways then. We're very early on. We're in a world with very low population compared to future populations. We're still on one planet. We're at a period in time where some people are aware of longtermism, but not everybody: a kind of Goldilocks state, you might think, for having an influence. So here are a whole bunch of reasons why, even without considering any particular arguments, your prior shouldn't be extremely low; it should be considerably higher. And on that latter side, one bit of confusion I think in the discussion was what exactly we were using the word "prior" to refer to, where I'm referring to your ur-prior, your fundamental prior.
Robert Wiblin: That’s like before you’ve opened your eyes. Before you’ve seen kind of anything?
Will MacAskill: Yeah, before I'm even aware that I'm on Earth. The way I was thinking about it, it's a function: if I believe that there are going to be a billion people ever, then I believe there's a one in a billion chance of being the most influential person. Similarly, if I believe there are going to be a hundred trillion people, I believe there's a one in a hundred trillion chance that I'm the most influential person. But then, the way I would understand it, I open my eyes, I see that I'm on Earth and so on, and absolutely I should update a lot in favor of being at a particularly influential time. Oh, another key thing is being at a period of unusually high economic growth, where I think there are very good arguments why we can't have–
Robert Wiblin: Can’t sustain it forever because eventually we’ll run out of atoms.
Will MacAskill: Yeah. You end up having like a civilization’s worth of value for every atom and maybe it’s the case that our imagination is failing us there. But I find it pretty compelling that you can’t grow at 2% per annum…
Robert Wiblin: The environmentalists are right.
Will MacAskill: I mean they are in the sense that it’s an S curve at some level of scale.
Robert Wiblin: Yeah, it has to stop at some point.
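The growth-limit point in this exchange is easy to check numerically. A sketch, using the standard rough estimate of about 10^80 atoms in the observable universe (the 10,000-year horizon is just an illustration):

```python
import math

# 2% annual growth sustained for 10,000 years compounds to an
# enormous factor:
growth_factor = 1.02 ** 10_000
print(math.log10(growth_factor))  # about 86, i.e. a factor of ~10^86

# ...which already exceeds the rough count of atoms in the
# observable universe, so at some point you'd need more than a
# civilization's worth of value per atom.
atoms_in_universe = 1e80
print(growth_factor > atoms_in_universe)  # True
```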
Will MacAskill: Then some people object to what I had, which is the uniform prior: that is, for any total population over the universe, I should think there's a one over N chance, where N is the number of people, that I'm the most influential, but also the funniest, the most beautiful, et cetera. They instead say, "Well, no, you should have some prior where earlier people are more likely to be influential", or other kinds of models that would give a different result. I agree that we should have degrees of belief in those models. The language I used was such that that would count as an argument for moving from your prior. But I don't actually think much hangs on what word we use there.
Robert Wiblin: Yeah. Okay. So there's a lot to unpack here, but first, just to give a general objection to the style of argument being made here. We look around, we see all of these reasons why it seems like this century would be really important, and you said, "No, but we should have this prior, this preexisting belief that it's exceedingly improbable that this century is the most important. Maybe there's only a one in a million chance, cause there could easily be a million centuries to come, and you're just going to have this uniform prior across them". So there's something that's kind of arrogant about saying that this century is going to be the most important, that I'm living at the most important time. It's kind of suspicious. But there's also something that's kind of suspicious and arrogant about saying, before I've even looked at anything, "I just know, from my prior, that I'm almost certain we're not at the most important time, and none of these things you can say to me will change that. Even if I'm in the room, we're about to start a nuclear war, they're about to press the button, I'm just like, 'No, because my prior is so low, I'm going to think this is probably a hallucination'". It kind of reminds me of the presumptuous philosopher mentioned by Nick Bostrom.
Will MacAskill: But having an extremely low prior doesn’t mean that you can’t update to high credence.
Robert Wiblin: Isn't it quite hard, potentially? Especially if you're going to say, "Well, there's this alternative explanation: maybe I'm schizophrenic, maybe I'm just misperceiving things and have delusions of grandeur". That's going to make it always very hard.
Will MacAskill: I think we regularly update from astronomically low priors to high credence kind of all the time. So I gave you one example which was dealing a pack of cards where you know your prior is one in 52 factorial.
Robert Wiblin: So you can get a very big update. Yeah. But in this actual case, in this scenario of saying like is this the most important century?
Will MacAskill: But then it's quite unsurprising, if you're sampling out of, let's say, a million centuries, and you only get to see a thousand of them, or even fewer, out of this whole huge distribution. Then it's not very surprising, I think, if we just can't ever get an extremely high credence. Cause you've got all these things in the future that we just have no clue about.
Robert Wiblin: Yeah, I guess. So, I think that it is very suspicious to have so much work being done just by the prior that we've chosen to use, especially given how unsure we are about how we ought to select these priors in general. And then when we actually dive in, it seems like this suspicion, about whether so much work can really be done by the prior, is kind of vindicated, because I think we see a bunch of reasons offered by Toby and Carl, maybe we should talk about them now, that give us good reason to think that actually we should use a different prior, one which makes it seem much more realistic that this century is especially important.
Will MacAskill: Yeah, just to clarify: you know, suppose you find yourself president of the United States and it's like, "Okay, we built the AGI, Rob, and now we code it in". It depends on your antecedent credence that you're hallucinating, or that you're in a simulation; but let's assume those are very low. Then that's extremely strong evidence. But that's not the sort of evidence that we have.
Robert Wiblin: The evidence we have is way weaker than that. I suppose maybe some people would disagree, they’d say “No, the evidence that we see is almost as compelling as that”.
Will MacAskill: Okay. But this is why I was really happy about the post, cause I thought that was the case. I thought that’s where the disagreement between me and some others lies. But actually it seems like it wasn’t.
Robert Wiblin: It’s about the prior?
Will MacAskill: Yeah. Toby seemed to think the prior he suggested would give, I don't know, a 5% or 10% kind of prior that this was just the most important time ever, and that was striking for me, whereas I thought other people would say, "No, it's just this amazing evidence".
Robert Wiblin: Okay.
Will MacAskill: So, yeah, but maybe let’s move onto what I characterize as the outside view arguments.
Robert Wiblin: Yeah. Alright, let’s go for that.
Will MacAskill: Yeah, I mean, insofar as this caused a bit of confusion, maybe another way would be to figure out some way of rewriting the article so that people focus a bit less on the choice of uniform prior. Because I do think that–
Robert Wiblin: That’s a separate question.
Will MacAskill: Well yeah, so we're in this weird position. Assume the future's very big. We're definitely in this weird position in a variety of ways. And then the key question really is how exactly you update on the basis of that. And I really think that if your prior is one in a million or one in a billion or one in a trillion, it doesn't actually make that much of a difference, cause you get a correspondingly large update in virtue of being very early on, and then there are all these things that are correlated: being early on, being in a small population, being on a single planet. Perhaps being at a period of high economic growth is a separate argument. And so then we got into some meaty discussion of how one should be setting priors in that context. And Toby's suggestion was to use the Laplacian law of succession, or actually the Jeffreys prior.
Will MacAskill: But the idea here is, in the classic law of succession, there's the question: what's the chance that the sun doesn't rise tomorrow? This is an event you've never seen happen. Let's say you've seen the sun rise a thousand times; what should you believe the chance is that it fails to rise tomorrow? You might think, "Oh well, it's never happened in a thousand days, so it's zero out of a thousand: zero chance". But that doesn't seem right. So the way it works for Laplace is basically to assume that before we ever started this period of observation, there were two observations, one of which came out positive. This sounds weird, cause in this case the positive thing is the sun not rising. So as we set our prior, we act as if there were two extra days: it rose on one day and it didn't rise on the other. The Jeffreys prior is just slightly different: it's half a day each.
Will MacAskill: So it's a little less conceptually clean, but mathematically fine. And then Toby says, "Well, that's how we should think about whether we're at the most important time ever. We should think of 'living at the hinge of history' as this event that could happen at any time, and we start with this Laplacian prior". And then that, when you do the maths, ends up giving you the conclusion that, given that we haven't seen it in the thousand centuries so far, it's something like a 5% chance, I think that's about the figure, that we're in the most important century now.
Robert Wiblin: So it's like, if you find yourself in the first century, you're like, "Oh, it's one in two", and then in the next century it's one in three, then in the next century it's one in four, and it just keeps on going. But if there's been a hundred centuries, wouldn't that get you to much less?
Will MacAskill: So if it’s just done in time then it would be one in 2000 that it’s now. If it’s done by population then it’s much higher.
Robert Wiblin: Because we think something like 5% of all people who’ve ever lived are alive now.
Will MacAskill: Yeah. In fact, it’s a little more than that.
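The arithmetic in this exchange can be sketched quickly. Under Laplace's rule of succession, after n trials with no occurrence of the event, the estimated chance of it occurring on the next trial is 1/(n+2). The century count and population figures below are rough illustrations, not exact numbers from the conversation:

```python
def laplace_next(occurrences, trials):
    """Laplace's rule of succession: estimated chance the event
    occurs on the next trial, given past occurrences out of trials."""
    return (occurrences + 1) / (trials + 2)

# Counting by time: roughly 2,000 centuries of human history with
# no hinge of history observed yet.
print(laplace_next(0, 2000))  # about 1 in 2,000

# Counting by population instead (rough standard figures: ~8 billion
# alive today of ~100 billion humans ever), the living make up
# several percent of everyone so far, which puts far more weight on
# the present.
print(8e9 / 100e9)  # 0.08
```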
Robert Wiblin: So I'm not sure whether I want to go down this track, but one of many things which I'm nervous about in this whole area is that we're talking about sampling out of the set of all humans, and I'm just like, how do you define humans? Cause species are constantly evolving. We've taken what might be a very pragmatic biological category, and now we're trying to make real inferences about the future of the world based on species differentiation over time. There's something very suspicious about that.
Will MacAskill: I think that is one argument that I think favors my prior over Toby’s prior–
Robert Wiblin: Cause then it seems more arbitrary maybe–
Will MacAskill: Well, it's just like: what's the chance I'm the most influential person? What's the chance I'm the most influential mammal? Or let's use a different term that's less loaded: what's the chance I'm the most beautiful human? Well, one in however many humans there are. What's the chance that I'm the most beautiful mammal? Well, again from priors, one out of all the mammals alive. It just kind of seems right to me as the way of setting priors in this context. But if you're thinking of the start of humanity as this event that determines how long humanity will last, then with the Laplacian prior it really matters when the start date is, because your prior is 50-50 that that's when everything happens.
Robert Wiblin: Yeah, that’s interesting.
Will MacAskill: The question was just, "Is this the most important century in all human history?". If instead the question had been, "Is this the most important century in the history of life?", then maybe you could make more of an argument that the move from prokaryotes to eukaryotes was as important.
Robert Wiblin: Well, just the first self-replicating RNA. Surely that's the most important: the fact that it happened at all. But why are you assuming that the most important century for humans hasn't already happened? It seems very possible that it has.
Will MacAskill: Alright, yeah. When you said ‘you’, you meant Toby.
Robert Wiblin: Oh okay, right.
Will MacAskill: Cause I'm not a fan of this way of prior setting. So Toby would have to make the claim that, well, we've observed over time that it hasn't happened yet; but actually, from his prior, he should think it's extremely likely that it's in the past.
Robert Wiblin: I think there is a good chance that it is. It seems like we can point to various times that were like the most important.
Will MacAskill: I guess the key question there is whether people were in a position to judge. And a bit of a tricky thing about this whole discussion is: what exactly is the counterfactual that you're asking about? I suggested: well, imagine a longtermist altruist back then, allowing them to miraculously have that set of values while nothing else changes. What could they have done?
Robert Wiblin: Yeah, interesting. So what is the intuition Toby has for using that prior in particular?
Will MacAskill: Well, I'm probably going to do a very poor job of defending it. I see him really as just making a judgment about a different claim, which is: what's the probability that some arbitrary event that has never happened before, like the extinction of the human race, happens? Then at least I can start to get some reason for thinking–
Robert Wiblin: Right. So that actually does make sense. Okay, so somehow we just know that we're in the first century of humans, and we also know nothing about the general rate of species extinction, and we ask: what are the odds of extinction this century? Well, given that we know nothing, I guess it's 50-50. That's just the uniform prior; it's like an ignorance prior. And then in the next century it's like, okay, well, we didn't see it last century, so what are the odds now? I guess they've got to go down. And yeah, this Laplacian thing is, I think, what you get if you have a uniform prior on the underlying rate between zero and one. That's the intuition.
Will MacAskill: Exactly, and the Jeffreys prior is kind of bathtub shaped: think of the uniform prior, then reduce the middle and increase the outside. And the argument for the Jeffreys prior is that it's invariant under reparameterization.
Robert Wiblin: Alright. So Toby wanted to use this Jeffreys prior, which I guess would make it much easier to believe that this century is especially important. Carl had this other great comment which pointed out specific reasons why we might think that the most important century, out of all human history or all history of agent-like life, would come much earlier, which I found very persuasive. And I think maybe it should cause us to think that Toby is right: maybe not exactly that prior, but a prior that gives much more weight to early centuries and makes us much more willing to believe that this century is especially important. I guess I should note that, technically, we only need to care about whether this is the most important century out of all of the centuries to come. It doesn't actually matter about the past, because that's kind of water under the bridge at this point, at least for deciding our actions.
Will MacAskill: At least for deciding our actions. I think it can be important for–
Robert Wiblin: Trying to figure out what is the generating process and just generally understanding what’s going on in human history and so on.
Robert Wiblin: Do you want to explain what Carl had to say and how persuasive you found it?
Will MacAskill: Yeah, so again, I saw Carl as adding to my list of outside view arguments. I'd mentioned that we're very early on, and he points out that if, over time, there's some chance at each point of an early lock-in of values or of going extinct, then if you're born at a later stage, all of those events are already behind you, and that reduces the probability of a later stage being the influential one. So that's one line of argument. A second is just that there are going to be vastly more people in the future. A third is being at this Goldilocks time, when there are some people with longtermist values but not everyone, and the claim being that in the future, well, I can't remember if he makes this claim, but one could, that in the future one should expect everyone to have longtermist values.
Robert Wiblin: I guess I don't expect that, but it seems like there could well be more of them than now.
Will MacAskill: Yeah. I mean, there's this argument that I'm somewhat skeptical of, but that has some plausibility: that people with a zero rate of pure time preference will win out eventually. If we've got thousands or millions of years of cultural evolution, then those who are most patient will ultimately accumulate more resources.
Robert Wiblin: Okay. So maybe my understanding of Carl's objection was… there's a bunch of different things. One is he's saying, "Let's imagine the model is that in any given century, from the beginning, there's a one in a thousand or one in 10,000 chance of that century locking everything in, because that century can affect all future centuries". So let's say there's a one in 10,000 chance just in the first century that we go extinct. That obviously locks in everyone else, cause they don't exist. If you carry that forward, the distribution of the most important century starts out very high and then goes down pretty quickly, because once you're at the millionth century, there's a very high chance that everything has already been determined by something in the past.
Robert Wiblin: Then if we have that kind of prior over which centuries are most important, that gives us a much higher credence in this century than in a thousand or a million centuries from now, because it's overwhelmingly likely that, if our future hasn't been determined yet, it will have been by the time we get to them. Okay, and then we ask, "Why is it that our current generation can affect the very long-term future?" There are a couple of different reasons. One is that now we have weapons that can destroy things: we've got nuclear war, and potentially we'll develop even more destructive technologies in the future. We've also got the fact that we're getting much better at reproducing values over time. In the past, every generation, a whole lot of random variation was thrown up from one to the next: genes were remixed, and even if one generation had a particular set of views, they could pass it on culturally to the next generation, but not extremely faithfully, because we see constant cultural evolution as well as biological evolution. Whereas it seems like we're getting closer to the point… writing helped, for example, to have views persist over time, and over time we're getting closer to the point where you could have AI agents that reproduce themselves, or their values, exactly from one generation to the next. And that's a new thing that creates the potential for lock-in in a way that simply wasn't possible given the reproductive technology we had in the past.
Robert Wiblin: And then we've also got another reason to think it's particularly useful to act earlier: population is rising, and on this model where the future gets very big, population continues to rise, and it seems like your impact at any given point in time might be like one over the number of people who exist at that time. So for example, say in the past there was some kind of bottleneck where there were only 10,000 humans around. It seems like you might have had much better opportunities to influence the trajectory of civilization at that point, as one person, than you can now as one person out of 8 billion, cause your influence is very much watered down on average. So there are all of these reasons to think that over time importance should be expected to go down quite a bit, which gives a prior of, say, one in a thousand that this is the most important century.
Robert Wiblin: And then when we look around and see that it actually does seem to be really important, we update to it being, like, 10% likely or something like that.
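Rob's recap of Carl's model can be sketched as a simple hazard process. Assuming (as an illustration, not Carl's exact numbers) a constant per-century chance q of a lock-in or extinction event, the chance that century t is the first such century follows a geometric distribution, which concentrates the prior mass on early centuries:

```python
def p_first_lockin(q, t):
    """Chance that century t (1-indexed) is the first century in
    which a lock-in event happens, given per-century hazard q."""
    return (1 - q) ** (t - 1) * q

q = 1 / 10_000  # illustrative one-in-10,000 per-century hazard

print(p_first_lockin(q, 1))          # first century: 1e-4
print(p_first_lockin(q, 1_000_000))  # millionth century: astronomically smaller
```

By the millionth century the surviving probability mass has been multiplied by (1 - q) a million times, which is why, on this kind of prior, early centuries dominate.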
Will MacAskill: Okay, great. Yeah. So this is definitely why, for some of these at least, I prefer to characterize these as outside view arguments rather than priors. So the idea that, "Oh, we've woken up and the population is seven and a half billion, not a hundred trillion; that should be an update". Similarly, "I'm very early on, or somewhat early on, but not super early"; that's an update too. And then quantitatively, in a footnote, I say that these do actually move my credence to somewhere in the 0.1% to 1% interval.
Robert Wiblin: You're saying you've seen all of these things that I've just said, and that's what's brought you up to 0.1% to 1%.
Will MacAskill: That's right, yeah. So there's one thing that you said which is more like an inside view argument, and again, it's not like there's a clean distinction there I think, which is, "Oh, we're coming up to these technologies where we can get even better lock-in". And you were saying, "Oh well, it seems like our importance has been increasing over time", and that relates to the other argument I make. If again we're defining influentialness as who you'd want to pass resources to, then, and I have changed my view a bit on this, but my view still would be that at basically any time in the past I would rather have passed resources to now. And if that's the case, then probably I should pass them still further into the future. And so while it is the case that we're getting better technology that might give us the power to lock things in, that's a reason why influence is increasing over time.
Will MacAskill: But then insofar as there's an argument like, "Well, you want to be before any lock-in events", that's a reason for acting earlier. But we're also getting better knowledge over time, better understanding, and that reason pushes later. And at that point, well, you could either try to have an a priori argument, a kind of battle between those two, or just look at how these two factors have been playing out over time. And it seems to me that over time influence has actually been increasing, and so perhaps we should expect it to keep going like that into the future. I've always been wanting to pass resources into the future, mainly because the gains in knowledge and understanding have been so great that they've outweighed these other things.
Robert Wiblin: So just to recap that. You're saying, "Yes Rob, all the things you said. But on the other hand there's this other factor which pushes in favor of doing things later, which is that we become wiser over time and therefore more capable of figuring out what actions to take to influence the long term". And then if we just look historically at which of these things seems to be more important, it seems like in the past people would have done better to save their money and give it to us today to try to have an impact, than to try to have influence in their own time. And so shouldn't that suggest that maybe the same is true of us today, and that we should leave money to people in 50 or a hundred years, and then they can decide whether to pass it on to the next generation as well?
Will MacAskill: Yeah.
Robert Wiblin: Okay. So I think that's kind of compelling. On the other hand, I guess some people did push back and say, "No, actually I think people in the past should have taken direct actions". So, for example, people during the early stage of the Cold War should have been working on nuclear security and bioweapon prevention and things like that. I think you're inclined to say that people in the past couldn't have anticipated what things would have been useful, and then other people in the comments section were saying, "Oh no, actually some of them did".
Will MacAskill: Yeah, I think there was one thing that I made a mistake about, which is that I didn't appreciate that concerns about biorisk and AI were accessible, in the relevant sense, in like the fifties. Again, it depends what exactly the counterfactual is you're asking about, but given a kind of wide search, I think they were. But it still would be the case that, on the standard views, the risks from AI and bio are way larger than the risks from nuclear war or from totalitarianism, let's say. So if in the fifties you would have been focused on nuclear security, and you also believe, if we go with Toby's numbers, that existential risk is like a hundred times more likely to come from AI than from nuclear war, then it seems hard to believe you wouldn't prefer to have the money sent into the future.
Will MacAskill: And over time you might also, perhaps in the 80s, have started focusing on nanotech or something, which people now don't think of as… you might well have also engaged in a number of other activities that now seem wasteful from the perspective of today. So I think I still endorse the view of pushing resources into the future. The biggest caveat I'd have is actually about the rise of fascism and Stalinism as the thing to push back on. This is a kind of totally different worldview, which is just that you've got a battle between different ideologies over time, and even though you might not think that a particular ideology will last forever, well, if it lasts until you get some eternal lock-in event, then it lasts forever. So maybe that's a different conversation we can have, but I kind of think the rise of fascism and Stalinism was a bigger deal in the 20th century than the invention of nuclear weapons.
Robert Wiblin: Interesting, okay.
Will MacAskill: Sorry, that's a bit of a digression. So I updated towards thinking we would have had more knowledge in the past, and I agree that the best things to do could have been building a longtermist movement and so on. But to be clear, I'm counting that as an investment activity. It does still seem to me that it would have been better to pass resources into the future. I'd be happy to have my mind changed on that, though.
Robert Wiblin: Maybe a different angle on a similar concern to ones that've been raised. I'll just read this quote from Will Kiely in the comments section: "The hingiest century of the future must occur before civilization goes extinct. Therefore, one's prior that the current century is the hingiest century of the future must be at least as high as one's credence that civilization will go extinct in the current century, and I think that this is already significantly greater than one in 100,000 or something like that". What do you make of that argument? Does it seem like, if you're going to take the prior or the view that you're taking, one has to deny that extinction is at all plausible, just cause it seems so improbable for this century to be so important?
Will MacAskill: Yeah, so I will acknowledge that I don't feel super comfortable about the issue of factoring extinction into my priors. The thing that I've been suggesting as a prior, and again I wish I'd been more explicit on this, is kind of two steps. One is a function from the size of a population to the chance of you being the most influential person. And then secondly, for the purposes of action, what we care about are the worlds in which we're in a really big universe, where we're successful in our actions to reduce extinction and so on. Without that second step, if we're just asking "are you the hingiest person ever?", well then overwhelmingly likely you're going to be the hingiest person ever in some world in which we go extinct quite soon. But those aren't the worlds we're interested in.
Will MacAskill: So I've been wanting to focus on the worlds that are most important from a longtermist perspective, where it really is the case that we're able to bring about a very long future. And so the person who thinks we're at the hinge of history would have to say, "Yeah, in the most action-relevant world, the one where we are successful, I am the most influential person out of a trillion trillion trillion or whatever". And so yes, the fact that we may well be the most influential person in some small future where we go extinct pretty soon, that's way more likely to be true, but not that interesting, so not what I'm going to focus on.
Robert Wiblin: A different line of criticism in the comments, I think led by Greg Lewis, was that yes, this might all be right, but if right now there's only a 0.1% to 1% probability of this being the most important century, we're kind of never going to get above that in the future. We'd just live in a world where there are going to be really important moments, but because we're so skeptical to start with that we'll ever live through one, the most confident we're ever going to get is about 1%. And because we don't want people to always ignore whatever red flags they see that this is a particularly important time, we should be willing to act on the tentative hypothesis that this is a particularly important time if we see evidence as strong as what we see now, rather than always punting to the future, cause that guarantees that people will never act. They'll just save a whole bunch of resources and then at some point they will go extinct.
Will MacAskill: Yeah. So I definitely agree that you shouldn’t need high credence in this to act. Instead, the action-relevant question is just, “What’s my credence in this? How high do I think it could be in 20 years’ time or 50 years’ time?”. So I agree with that, and I think that’s important. The thing he said in the comment that I didn’t agree with was that that could justify, even if you’re at 1%, a kind of monomaniacal focus on the present–
Robert Wiblin: It seems like you might want a mixed strategy.
Will MacAskill: Yeah, and this is how, in the back-and-forth in the comments, they definitely get to a point where you’ve made the qualitative considerations and now there’s just some quantitative argument, which is, “Okay, well, we get diminishing returns from spending now. There’s some chance this is the most confident we’re going to get, especially the most confident we’re going to get given sufficient time away from some important event. But we shouldn’t be very confident”. Then you need to start having some actual quantitative model in order to be able to say–
Robert Wiblin: How much to spend and how much to save?
Will MacAskill: Yeah, but it still seems quite unlikely to me that we’d want a monomaniacal focus now, because if you think, “Oh, I’m now at this really confident 1% chance”, surely you must think there’s a good chance we’ll think that again in 50 or a hundred years.
Robert Wiblin: Yeah. A whole line of argument for delaying your impact or passing resources to the future that we haven’t talked about yet is that you earn the real interest rate. So even if you think it’s fairly likely that this is the most important century, if you think that the next century is going to be somewhat less important, then you could have 10 times as many resources if you just put it in the stock market and then took it out.
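Rob’s point about earning the real interest rate is easy to make concrete. The sketch below is my own illustration, not a calculation from the episode: the ~2.35% real rate and 100-year horizon are assumptions, chosen only because they happen to produce roughly the tenfold multiplier he mentions.

```python
# Illustrative sketch of compounding at the real interest rate.
# The 2.35% rate and 100-year horizon are assumed for illustration;
# the episode only claims "10 times as many resources" is achievable.

def future_value(principal: float, real_rate: float, years: int) -> float:
    """Resources available after saving at a constant real rate of return."""
    return principal * (1 + real_rate) ** years

# $1 of philanthropic resources saved for a century at ~2.35% real return:
print(future_value(1.0, 0.0235, 100))  # ~10x
```

So even a modest real rate, compounded over a century, delivers the order-of-magnitude multiplier that makes “spend now versus later” a live question.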
Will MacAskill: So this is why this notion of influentialness is only one part of the consideration about giving now versus later.
Robert Wiblin: You’ve also got this trading of resources between times, and what’s the exchange rate, basically: the longer you leave it, potentially, the more resources you have to spend from having saved it. And we’ll have another episode with Philip Trammell where we discuss this in forensic detail for many hours.
Will MacAskill: Can’t wait to listen to it.
Robert Wiblin: We’ll pass over it for now. I think it’s probably going to come out after this episode; that’s kind of a more natural ordering.
Robert Wiblin: Another really interesting comment, and I wasn’t sure exactly where it was coming from, but I thought it raised some really important points, was Paul Christiano writing about why he thinks there’s quite a high probability that we’ll see a relatively sudden increase in economic growth in the next century. And I guess the connection there is that if you’re going to have some sudden phase shift in how quickly things are growing, then that seems like an unusually important time in a commonsense way, and it also might make the current century hingey. Do you want to just explain that, and whether you found it convincing?
Will MacAskill: Great, yeah. Economic growth now is way higher than it was historically. During the hunter-gatherer era, growth was very low indeed. Then after the neolithic revolution, well, actually several thousand years after the invention of agriculture, around about when the first cities started to form, growth increased quite rapidly in fact. And if you look at a semi-logarithmic graph, at least of population (because we were in a Malthusian condition at that point, so increased growth turns into greater population), it looks quite clearly like just a change in the gradient of the graph. So we moved from one rate to another rate, and then that happened again in the industrial revolution. Again, even just looking at population growth, it changes from what by present-day standards is a very low rate to a much higher rate.
Will MacAskill: And so world economic product grows now at a few percent per year, and there are various ways you can interpret what’s happened there. So Robin Hanson has argued that we can see this as a series of new growth modes: there’s an exponential prior to the neolithic period that transitions into a new exponential with a higher growth rate. Paul himself has argued it’s better understood as hyperbolic growth over time: it’s not that there are any special modes, it’s just that we keep accelerating, and the higher the level, the faster the growth rate. Paul, in his comment, does not lean on the idea of hyperbolic growth, but he says that even if you think there have just been two growth shifts, we should think it’s pretty likely it will happen again. And the reason he suggests is that if you look at how long it took, measured in economic doublings, to go from one growth mode to another, then assuming the next transition takes that many doublings again, it would actually be quite likely to occur in the next 80 or a hundred years.
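Paul’s move of measuring time in economic doublings can be sketched numerically. This is my own toy version, with a made-up doubling count, just to show why a transition measured in doublings lands within decades at modern growth rates.

```python
import math

# Toy version of the doublings argument: if past growth-mode transitions took
# some number of economic doublings, ask how many calendar years that many
# doublings takes at today's growth rate. The doubling count here is made up.

def doubling_time(growth_rate: float) -> float:
    """Years for the economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

def years_for_doublings(growth_rate: float, n_doublings: float) -> float:
    """Calendar years needed for the given number of economic doublings."""
    return n_doublings * doubling_time(growth_rate)

# At ~3% annual growth the economy doubles roughly every 23 years,
# so even a handful of doublings fits comfortably inside a century:
print(years_for_doublings(0.03, 3))  # ~70 years
```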
Will MacAskill: And then the question is, well, why is that very important? Well, if we were to make extremely rapid technological progress, there’s just more stuff that can happen during that time. So rather than thinking in chronological time, perhaps we should think in a kind of economic time, where far more output will be produced over the coming century than in all previous centuries. Another, more intuitive way of saying this is just: one thing we’re doing over time is drawing from this urn of technologies. Maybe we get one that destroys us all. Maybe we get one that locks things in forever. And we’re just drawing a lot faster if we hit another growth mode. There’s lots to say on this. One thing is that I think the underlying idea of shifts in growth modes is extremely important and, in the wider world, extremely neglected or underappreciated. And so the idea that there could be another one, given it’s only happened twice before, is totally on the table.
Robert Wiblin: In addition it seems like we have specific theories about what those technologies might be that seem pretty plausible.
Will MacAskill: Yeah.
Robert Wiblin: Like AI or some other thing?
Will MacAskill: Yeah, exactly. I mean, the two most obvious ones would be AI and human genetic enhancement. So there’s lots to say on this. One is just that the move from ‘this is a fast growth mode’ to hinginess, or most-influentialness, is quite a big leap itself. If you were at the neolithic revolution, it seems like you just wouldn’t have had the knowledge or understanding to have a big influence there from a longtermist perspective, or it could have been actively bad if you had. You may have invented a new religion or something.
Will MacAskill: The second shift in growth mode to the industrial revolution: there it’s kind of unclear. It’s hard to know what to say. My read would be that again, people would have a very poor understanding of what the best thing was to focus on there. And so at least from the historical case, it’s not a strong argument for thinking, well, “Oh well then this time we will have enough knowledge to ensure that things go well”. There’s a second consideration, which is that if you’re a longtermist and other people aren’t, that can be an argument for wanting to be in a period of time with low economic and technological growth or change. So here’s the model where if things are changing a lot, then you just can’t really make longterm plans very well because you don’t know what these new technologies are going to be and they change the landscape and it means that things come to naught.
Will MacAskill: Whereas if we’re in this kind of stagnant period, then there are some things which have this really long term payoff and there’s some things that have short term payoff and all the people who discount the future with a pure rate of time preference, they go for all the short term stuff. But you’ve got these really cheap long term plans for influence that you can get for a bargain price. But in the case where things are changing all the time, perhaps you just don’t have access to those long term plans.
Robert Wiblin: Well it’s harder to implement because the world’s changing so fast, so you’re worried about expropriation or legal changes or war or all these other things.
Will MacAskill: That’s right. Let’s say VHS versus Betamax is happening and I’m like, “Oh, I really want Betamax to be the standard rather than VHS, because it’s the better format and you get this lock-in”. If there’s just rapid technological progress when that happens, well, it just doesn’t really matter very much, because you’ve got CDs or DVDs a few years later. If technological progress is really slow and it’s like thousands of years of people using the wrong movie–
Robert Wiblin: It’s a moral tragedy!
Will MacAskill: Exactly. But perhaps that could happen if you’ve got just the sequence of lock-in events then those will last for longer in periods of less rapid technological change. So it’s non-obvious to me which direction the tech change goes in.
Robert Wiblin: In terms of creating importance.
Will MacAskill: In terms of influence, yeah. In terms of when you as a longtermist most want your resources to be. And then, in general, I would love to see more work on this idea of growth modes over time, and there is work starting to happen on it. One question is just: what’s the data that’s making us think there are these growth mode changes? Ben Garfinkel has been looking into this, and the conclusion is that the data’s really bad. Like, basically made up. I can’t independently verify that, but it’s not very surprising.
Robert Wiblin: You’re saying the historical economic growth rates that Robin Hanson or Paul Christiano are using to make this argument are mostly just invented, based on looking at how life was at different points in time and then kind of interpolating between those points?
Will MacAskill: Yeah. So the move from the hunter-gatherer era to the farming era seems like, “Wow, a rapid increase in population growth rates”, but we actually just have no idea what the populations were at the time. At least that’s the kind of argument I’ve heard. We’ve just got very poor quality evidence.
Robert Wiblin: It seems like even just getting to the right order of magnitude might show that there was an increase in the growth rate, cause we’re talking here about thousands of years in the first period and hundreds in the next.
Will MacAskill: Yeah, I don’t have a good sense of quite how unreliable the data is. One thing that is important is the second question, which is what sort of curve you fit to that. Paul’s view is that it’s a hyperbolic curve: whatever level of growth you’re at, the higher the level, the higher the growth rate. I’m not a big fan of that. One reason is that I think when you look at better data, any given period looks more like an exponential. And a second is just that that argument would have predicted we’d have had infinite growth by 1960.
Robert Wiblin: Did we?
Will MacAskill: Well, not to my knowledge. Maybe we did and it’s just like–
Robert Wiblin: Now we’re in the simulation–
Will MacAskill: Some corner of the world. Paul’s obviously aware of that as an argument against.
Robert Wiblin: Smart dude.
Will MacAskill: Very smart dude, and he just thinks this is an unusual period of stagnation: in the past it wasn’t literally following a perfect exponential either; there were periods of higher growth and of stagnation, and so on. Whereas I’m more inclined to say that what happened was we invented this thing. Well, it’s not even an invention: we developed a culture of innovation in industry, and we’re getting this huge wave of benefits from that. That, as with basically everything, takes the form of an S-curve. And so we’re actually kind of slowing in our technological progress. Maybe there will be some new big thing. Maybe that’s AI, maybe that’s something else. But that would be a new development, not a mere continuation of the existing trend.
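The reason a hyperbolic fit “predicts infinite growth”, as Will mentioned a moment ago, is that, unlike an exponential, it reaches a singularity in finite time. Here is a minimal sketch with made-up parameters; nothing in it is fitted to real historical data.

```python
# Hyperbolic growth dy/dt = a * y**(1 + b) with b > 0 has the solution
# y(t) = (y0**(-b) - a*b*t)**(-1/b), which diverges at the finite time
# t* = y0**(-b) / (a*b). Exponential growth (b = 0) never blows up.
# All parameters below are illustrative, not fitted to real data.

def blowup_time(y0: float, a: float, b: float) -> float:
    """Finite time at which hyperbolic growth reaches infinity."""
    return y0 ** (-b) / (a * b)

print(blowup_time(y0=1.0, a=0.01, b=1.0))  # singularity 100 time units out
```

This is the structural point behind Will’s remark: any curve of this family fitted to pre-modern data implies a finite date of infinite output, which is why, on his telling, the extrapolation put it around 1960.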
Robert Wiblin: Yeah. Interesting. Okay. So on this model we had the scientific revolution, and then once we’d developed that method of learning things, we just had so much low-hanging fruit of amazing things we could invent and discover, and now we’re kind of running out of the things that are easy for humans to learn, and that’s causing us to level off a little bit.
Will MacAskill: Yeah.
Robert Wiblin: I mean a lot of people have raised this spectre actually.
Will MacAskill: Absolutely. So Robert Gordon’s “The Rise and Fall of American Growth” has been quite influential on me. And on the industrial revolution, I’ve read a number of books now on what its causes were. I think something really messy like culture is the most likely answer, at least as to why it occurred then.
Robert Wiblin: Not the ability to transmit information at the time? Like writing or journal systems. I guess maybe you’re going to class it as culture.
Will MacAskill: Well, we’d just had that for quite a long period of time before, unless you think of–
Robert Wiblin: Not cheaply, anyway, sorry, I shouldn’t interrupt you.
Will MacAskill: Yeah, I mean look, there’s an issue, which is, there’s so many possible explanations. It’s also probably something like, “Oh, why did the fire start? Was it because of the kindling or was it because of the match? Or was it the oxygen?”. It’s like, well, it’s all of these things.
Robert Wiblin: It’s easy to explain the industrial revolution. We’ve got 12 explanations.
Will MacAskill: Exactly, we’ve got too many. But there is a striking thing, which is that we got the scientific revolution, the enlightenment and the industrial revolution all at about the same time. And they don’t seem to be particularly important for each other. Like, we’d already entered much more rapid growth before we started relying heavily on steam and coal. The early inventions were things like the Jacquard loom: things where it’s like, “Oh, I put this bunch of sticks together in this way and now I can weave 10 times faster than I did before”, and now the economy’s 0.1% larger. There was just incredibly low-hanging fruit when it came to innovation, and that’s definitely not true now. But that does relate to Paul’s claim with respect to how likely we should think a new growth mode is. Where again, I want to emphasize, the main thing is I agree with him that it’s significant. But I’m much less inclined to agree with the idea that there’s been a certain amount of economic progress and so we should measure time in economic time. Because if you think the underlying trigger for, say, the agricultural revolution was climatic change, well, that’s just got nothing to do with the level of growth in the past. And if you think the industrial revolution’s timing was dependent on culture, institutions and so on, that’s not endogenous in the same way either: the rate of cultural change over time is much slower and not determined by rates of technological change.
Robert Wiblin: Yeah. Not so much directly. I was kind of surprised by how skeptical you were that, assuming we don’t have extinction or some massive catastrophe, we would see a big speed-up. Looking back at history, it can’t be the case that through most of history we’ve had economic growth anywhere near the level we have now. Clearly this is a faster rate, so there was at least one step change, and possibly multiple ones. And if we’re willing to go back further: initially all was void and rocks, and then we had life, and then eukaryotes, and then plants, and then animals, and then brains. So from another point of view, we see this technological advancement through evolution that keeps increasing. What’s the measure of the economy here? I suppose it’s the biomass of all the life on Earth or something like that. Obviously that’s very gradual, and then humans seem to be speeding it up a hell of a lot. So my default view would be that we will see an increase in the rate of growth again at some point. I suppose maybe Paul’s making the stronger claim that it would happen in the next century.
Will MacAskill: He said certainly more than 10% in the next century, and I would disagree with the certainly, but I would put it at something like 10%.
Robert Wiblin: So maybe it’s not very much of a disagreement.
Will MacAskill: Well, yeah, especially when it’s something where, as with many of these things, the numbers feel quite made up.
Robert Wiblin: But it feels so different. Alright, well, we’ll stick up a link to Robin Hanson’s paper on this and maybe a few other articles on the question of whether we should expect growth to speed up, stagnate, or slow down. Okay. So we’ve spent a bunch of time canvassing various objections that you got in the comments on that post. Where do we stand now? Do you feel more or less confident about your original conclusions, having seen what people had to say? It seems like a very pressing question for us to figure out whether you’re right about this, cause it’s very decision-relevant, and so we need to have some agenda for resolving it. I feel at 80,000 Hours we want to know.
Will MacAskill: I agree, and I think the question is whether we should be trying to act now versus passing resources into the future. Taking that latter idea far more seriously is maybe the most important update I’ve made over the last year. I think at this point it needs to start getting quantitative, because we all agree that we’re at a really unusual time: whatever low prior you want, we should update away from it for the reasons I’ve given, of being early on in time, on a single planet, with a high economic growth rate and so on. I expressed some skepticism about whether being in a period of technological progress is actually good if you’re a longtermist. But when you’re starting from such a low prior, if it’s, say, 10% likely that it’s good, then that pushes the number up a lot.
Will MacAskill: But then there’s the question of how much you update. Toby’s proposal seemed much too extreme to me, cause it makes it, for example, overwhelmingly likely that the hinge was in the past. And then secondly, what we care about is our posteriors rather than priors, although I think it’s kind of good to keep a posterior that doesn’t also update on, say, the Bostrom-Yudkowsky arguments, because that at least gives us a sense of the burden of proof for thinking there’s some specific lock-in event coming. And there again we want to get just a little bit more quantitative on what’s the right distribution we should have over different models, and, when you put that all together, what sort of number you end up with. And then the second part of it is with respect to whether we should actually be sending money or resources into the future versus spending now. Again, we’re just going to want to get a lot more quantitative, and Phil is doing this, where there’s a variety of considerations.
Will MacAskill: So one is diminishing marginal returns to money at any time. A second is the expectation of how many more longtermists there are going to be in the future. A third is what’s the rate of return over time. A fourth is how much more we’re learning, and how good or bad the best opportunities are going to be. And once you put that all together, you get some portfolio where it’s very likely that you’ll be spending some amount every year and saving some amount every year, and then the precise quantities are up for debate.
Robert Wiblin: Empirical questions, yeah.
Will MacAskill: Yeah, well, kind of empirical questions. More empirical than we’ve been so far. And then finally, how does that map onto what we’re currently doing? Where, at the moment, we actually are saving a lot. Open Phil is not even spending what I would expect the return on its assets to be. And within any individual’s life, insofar as people donate as they go rather than save money and donate later, that’s effectively investment too, because lots of direct work is actually meta at the same time.
Robert Wiblin: So that counts as a form of saving, cause you’re getting more people who agree and are willing to do the kind of stuff that you’d otherwise want resources for in the future.
Will MacAskill: Yeah. So it could also be the case that we end up having been doing the right thing by accident. I guess I do sometimes feel that there’s a feeling that goes around, especially outside the very core of EA, and maybe less so now than over the last couple of years, among longtermists and people who take AI seriously: that we should have short timelines for AI, we should put extra weight on short timelines, and so we should just be really going all out on this.
Robert Wiblin: That is, to try to shift pivotal events that might happen in the next 10 years or something like that.
Will MacAskill: Yeah, exactly. Whereas I think there should be a diversity of approaches. I think Paul should be doing that; I’m really glad he is. But for the EA movement as a whole, I think it would be very bad if we pinned everything to this mast of short AI timelines, because I think that’s probably quite unlikely. Whereas there are actions we could be taking that could ensure EA has an impact over long timescales, and in particular, we should potentially be thinking about centuries’ worth of impact. If you look at other social movements, most, I would say, had most of their impact more than a century after their first ideas or progenitors. And that’s a perspective and a form of thinking that I think a lot of people have not been paying much attention to. And it would mean that, at the moment, ensuring that EA as a body of ideas is just right becomes very, very important, and, in general, perhaps much more important than faster growth right now.
Robert Wiblin: Yeah. So I guess to give my overall view, having read the post and the comments: one reservation I had about having this conversation was that it seems like we’re very much in a state of flux, where people’s views on this are shifting quite a lot because we’re uncovering new ideas and new considerations. It seems quite possible that you could persuade the people you were talking with that you were right, and that they might persuade you that the original blog post was misconceived. So there’s the possibility of getting people to take some of these ideas too seriously, when it will turn out that in a few months we’ll find really strong rebuttals. I guess I found the responses in the comments relatively persuasive, so your range for the probability of this century being especially important, 0.1 to 1%, seems too low to me.
Robert Wiblin: I suspect that we’ll end up settling on a prior that is more generous to the hypothesis that this century is especially important than the one you were using. In the end, I don’t think it can be quite as generous as Toby’s; it might be somewhere in between, so we’ll end up thinking it’s more like 10% or something. But then the proposition that we should be doing more to influence the long term, to invest rather than just having an impact in the short run, and to pass resources on to future generations, that I think is much more plausible, because there are other considerations in its favor above and beyond thinking that this century isn’t especially important: you can give way more resources to future centuries, and we’re getting much more informed about how to have an impact. I guess if people take away, “Oh, I should be thinking more long-term about having an impact”, then that probably seems good, even if the reasons are somewhat different.
Will MacAskill: Yeah. Another way of putting it, a little bit, is that there’s been a lot of thought that short AI timelines make for an unusually high-impact world. But very long timelines, or in fact no timeline, in the sense that there’s not a specific moment when we get AGI, those are also very high-impact worlds, because it means we have tons of time to grow, and we can exploit the fact that we have a zero pure rate of time preference whereas other people are really hasty. So you can just make long-term plans and effectively trade with people to have more leverage in the future.
Robert Wiblin: Yeah. So this a nice preview of the Philip Trammell episode, which we’re now definitely going to have to release after this episode because this is such a nice preview of where that conversation goes.
The risk of human extinction in the next hundred years [2:15:20]
Robert Wiblin: So one question I’ve seen on the forum, and I guess you did an AMA recently, a question that was in great popular demand among listeners, was: what do you now think is the risk of human extinction in the next hundred years, or I suppose of some very spectacular global catastrophe?
Will MacAskill: Yeah, so this has definitely been the thing, by far and away, that most people have come up to me and asked me about after the AMA.
Robert Wiblin: No-one wants to know my probability of extinction, Will.
Will MacAskill: Well, I think people should maybe say more. I seem to remember yours was very high.
Robert Wiblin: It used to be. I think it’s gone down actually, if moderately.
Will MacAskill: Maybe I’ll find out. So yeah, I got asked if there are ways in which I’ve changed my views. One of the things I said was that my credence in existential risk in the 21st century has gone down from 20% to 1% and a lot of people have come to me and been like, “What’s going on?!–”
Robert Wiblin: You’re mad, Will!
Will MacAskill: Exactly. So yeah, happy to talk about that. I mean, normally my views wouldn’t jump around in such a large way, but this was a move away from basically deferring to people: picking someone in particular, Toby, who I respect extremely highly, learning what his views are, and then believing them.
Will MacAskill: Yeah. So instead the thought was just: well, I’m going to start writing this book on longtermism, and I want to get engaged with the fundamentals and understand what my views on these topics actually are. And so rather than asking why it moved from 20% to 1%, the relevant question is why it’s this number rather than any other. So let’s just start again with prior-setting: what should I expect the risk to be over the next eighty years? I think there are a few different lines of reasoning here, and they all point in basically the same direction, toward the 0.1% to 1% interval. One is just: what’s been the existential risk over the last century, or perhaps more accurately the last 20 years? If you assume the future will resemble the past, I wouldn’t want to go any higher than 0.1%. So again, Toby uses that figure for existential risk from nuclear war.
Will MacAskill: I guess he also says the same for climate change and environmental depletion. But again, that’s the order of magnitude I’m talking about. On the details, I would actually put it a little bit lower than that, but that’s not that important for now. Second, then, is thinking in terms of the rational choice of the main actors. So what’s the willingness to pay, from the perspective of the United States, to reduce human extinction risk by a single percentage point? The United States has three hundred million people: how much do they want to not die? Assume the United States doesn’t care about the future and doesn’t care about people in other countries at all. Well, it’s still many trillions of dollars of willingness to pay just to reduce existential risk by one percentage point. And so you’ve got to think that something’s gone wildly wrong, that people are making such incredibly irrational decisions. Whereas… oh, Rob just can’t wait to respond here!
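Will’s willingness-to-pay figure is easy to reproduce. In the sketch below, the ~$10M value of a statistical life is my assumption (a standard US regulatory figure); the episode itself only says “many trillions of dollars”.

```python
# Back-of-the-envelope version of the willingness-to-pay argument.
# The $10M value per statistical life is an assumed standard figure,
# not a number given in the episode.

def wtp_for_risk_reduction(population: int, value_per_life: float,
                           risk_reduction: float) -> float:
    """Aggregate willingness to pay to cut each person's death risk."""
    return population * value_per_life * risk_reduction

# 300M Americans, ~$10M per life, one percentage point of extinction risk:
print(wtp_for_risk_reduction(300_000_000, 10_000_000, 0.01))  # ~$30 trillion
```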
Robert Wiblin: You look at the US government and you’re like, “There’s just no way they could mess that up”.
Will MacAskill: Well, oh no, but here’s the thing. So then, just following on from this: what’s the risk of death that people just tend to bear? It’s maybe 0.1%, 1%. People drive and so on; that poses these risks. But not really higher. If I were in a world where existential risk was much higher than that, I would expect to see technological catastrophes happening. But I almost never do.
Robert Wiblin: Yeah, so this is a slightly different argument.
Will MacAskill: Yeah. Sorry, actually this is different. It’s more like: what’s the trend over time? Okay. So there’s the total existential risk over time, there’s willingness to pay and what risks people in general are willing to bear, and then there’s a third thing, related to the first two, which is: with respect to developments in new technology, do people tend to be appropriately risk-neutral? Do they tend to be risk-averse? Do they tend to be risk-seeking? And it seems to me that, relative to the rational choice model, or how I would expect people’s preferences to be, people are risk-averse with respect to new technology. There are very rarely cases where large numbers of people die because of new technology, for example. And that does apply to the US government and so on. I think there are cases–
Robert Wiblin: You might think it’s a little bit random. Sometimes we ban things that we shouldn’t. Sometimes we don’t ban things that we should.
Will MacAskill: It’s true that sometimes we don’t ban things we perhaps should. But in cases where technologies kill people, or where there’s a serious catastrophic risk, it seems like we’re very systematically too cautious.
Robert Wiblin: Well I guess, I mean we always come back to nukes, but arguably the US should’ve done more to just prevent both themselves and others from ever scaling up nuclear weapons in a massive way.
Robert Wiblin: Maybe it’s that there just aren’t that many other technologies like that… I suppose the US did shut down its biological weapons program, more or less because they were worried about this, and it was the USSR that kept its program going, but that was mostly due to bizarre internal bureaucratic incentives rather than because they thought it was a great idea.
Will MacAskill: Yeah, and I should say, in terms of my estimates for existential risk over the century, I would put 90% of the risk as coming from wartime or something, precisely because… if you tell me a country’s done something incredibly stupid, against its own interest or in some sense the global interest, it’s probably happened during a war period. Final argument for prior-setting: Metaculus, which is a forecasting platform. These are non-experts, but they’re people who are engaging with the question, and they’ve been asked what’s the chance of a greater than 95% reduction in world population due to a catastrophe by the end of the century.
Will MacAskill: And I think the aggregate, when you use the Metaculus algorithm, which does better than any individual forecaster, ends up in that ballpark: again, it’s 0.5% or something.
Robert Wiblin: Okay, hold on, so you’re saying a 1% chance of a GCR that kills 95% of people or more?
Will MacAskill: Oh, well, no. My number was for existential risk. And I actually think there’s quite a big gap in the probability space between 95% of people dying and everyone dying.
Robert Wiblin: Massive actually, yeah.
Will MacAskill: Yeah, but they’re putting 0.5% at 95% so it’s going to be even lower.
Robert Wiblin: Interesting. Okay. So in a sense I think this is crazy. So from one point of view it’s common sense. It’s like, we’re not going to see this massive change. But then from another point of view it just seems massively overconfident to think there’s a 99% chance that we’ll get through what seems like an extremely chaotic era by historical standards where there are just so many things that seem like they could happen that they could be really transformative. Like you’re super confident.
Will MacAskill: Killing everybody! Seven and a half billion people! And everyone wants to survive! This is huge! It’s both super hard to kill everybody and there’s this huge optimization pressure for this thing not to happen. And we have many examples in the past of 50% of people dying in the bubonic plague. It doesn’t even register really from a long term historical perspective. And that’s in a period where we don’t have amazing technology that allows us to overcome that.
Robert Wiblin: So I think one thing I’ve shifted on over the years is realizing just how hard it is to kill everyone. So people talk about climate change killing everyone and I’m like, no, that’s not going to happen. It’s just so hard to kill the last 1%, cause there’ll be places that are fine with regards to climate change, even in the worst case scenario.
Will MacAskill: I think this for an asteroid impact as well.
Robert Wiblin: Yeah, it has to be really quite big.
Will MacAskill: I mean the Chicxulub impact… Many mammal species still lived, and we have a hundred times the biomass of any large animal, in a wide diversity of areas. We have technology. We can grow food without sunlight.
Robert Wiblin: And then there’s also diseases. It’s just so hard for a disease to spread everywhere all at once and for no one to have resistance to it. So it’s a lot easier, I think, to have a disaster, like a war that kills 95% of people or is very devastating, but then some people, through good luck or just being in the right place at the right time, manage to survive through it.
Robert Wiblin: So really, kind of by far the most plausible way I see that you could kill everyone is that you create a new agent that is seeking to kill everyone and to foil all their plans to survive. So it probably has to be AI or something like that in my mind. I guess now perhaps I just have quite different views on AI at this point, where I think there’s definitely more than a 1% chance that we could create an AI that’s misaligned enough to either kill all humans or kind of take us out of the decision loop in a way that we don’t like.
Will MacAskill: Okay, so it sounds like supposing you put aside AI.
Robert Wiblin: Okay. So now I agree with you. Or your view now seems very plausible.
Will MacAskill: Okay, cool. I think I also would have put the risk mostly from AI… well, I mean I guess insofar as I was deferring to Toby and what it says in his book, where again the majority of credence is from AI, but still significant credence from biorisk and so on.
Robert Wiblin: Okay. Interesting. Yeah, I suppose if experts in that area thought that it was much more likely then I would defer to them, but I guess we haven’t seen examples of diseases that were able to penetrate globally. I mean especially since we have bunkers where people will wall themselves off, and we have ships it might not get to, and the Antarctic and so on.
Will MacAskill: I mean there’s a really big question, which I feel very unsure on, which is: supposing that something kills 99% of people, will we recover? I see no reason for thinking no and some positive reasons for thinking yes.
Robert Wiblin: Yeah. But even if we don’t recover, there’s a good chance we’ll still be around in a hundred years. There’d still be a first generation potentially.
Will MacAskill: Yeah. So there’s also a question of when you say existential risk given 80 years, what does that mean exactly?
Robert Wiblin: Yeah, interesting. I suppose in the past I feel a bit bad because on the show we’ve suggested that just a standard nuclear exchange between the US and USSR might well cause human extinction, whereas I actually think that that’s very unlikely.
Will MacAskill: Yeah. I mean, even given an all-out exchange, I think full nuclear winter is less likely than not to happen.
Robert Wiblin: Not to happen at all? I mean surely you’ll get some change in temperature.
Will MacAskill: You’ll get some change in temperature, but nuclear winter’s normally a five degrees or greater kind of change. But then secondly, there are many people who’ve studied this… I haven’t found a single expert who’s said, “Yes, this will kill everybody”. Many people have said it will not, often reacting to Carl Sagan, who would say that it would.
Robert Wiblin: That was thirty years ago, so they had very different information then. Yeah, Carl Sagan was also an activist on this point. I mean, I guess it’s just like the people in New Zealand: they’re going to survive. It seems like there are very strong indications that people in New Zealand would just continue fishing–
Will MacAskill: Yeah, if you’re in a coastal area in the Southern hemisphere temperatures drop by like one or two degrees at most. So that’s preindustrial kind of level. It’s just not that big of an event and so it’s really hard to see how that ends up killing absolutely everyone.
Robert Wiblin: I suppose actually, well I guess given the way that it would happen is that we’re totally wrong about our atmospheric model somehow that we’ve misunderstood.
Will MacAskill: Yeah. There has to be some x factor.
Robert Wiblin: Yeah. I would like to do a full episode on this sometime, perhaps with someone who’s really informed on the nuclear winter stuff. It seems important.
Will MacAskill: Yeah. I think there’s still work happening at Open Phil on this.
Robert Wiblin: Ah, at Open Phil, okay. Yeah. It seems like we should also do an episode on this question of would we recover if 99% of people died? I’d love for someone to really spend a PhD on that. I don’t know whether that’s happened.
Will MacAskill: Yeah, me too. I mean cause one argument I’ve heard is just well the disease burden that we carry around is way higher. So HIV for example, it’s just an unprecedented sort of disease. Certainly one that did not exist in the relevantly equivalent kind of hunter-gatherer era. Or like the agrarian era. And perhaps at least that’s something that’s like a little bit different.
Robert Wiblin: I mean one thing is if population density goes down, that greatly reduces disease transmission and would actually reduce… Although I suppose inasmuch as tons of people already harbor lots of different diseases and that maybe now they can’t treat cause they don’t have antibiotic factories or something like that. Anyway, there’s probably a lot of considerations here that someone needs to map out.
Will MacAskill: Yeah, exactly. So then it’s the case that AI sounds like the main thing. Yeah. I think in general when I had this thought of, “Okay, well I want to get to grips with the best arguments and views here”, I did have the thought of, “Okay, well I’m going to write this book, so where are all these reports that are giving estimates for existential risk in this century from various causes?”
Robert Wiblin: I’ve got some comments on blogs you can read, Will.
Will MacAskill: Yeah and I actually think that’s kind of shocking. Like just think about the allocation of resources, like the number of people that are going into direct work here versus people who are making a case.
Robert Wiblin: Well, so there’s ‘Superintelligence’. There’s this new book from Stuart Russell. There’s a bunch of serious books I guess about AI.
Will MacAskill: Well, no, I think in the case of AI… so Stuart Russell I really respect, but the book, ‘Human Compatible’, I was disappointed in. It’s just trying to do something that’s different than what I was hoping it would do. It’s another popular book, at least with respect to the key argument.
Robert Wiblin: Okay.
Will MacAskill: And then ‘Superintelligence’, of course, is a thorough book. But it spends four pages on deep learning. It’s kind of unfortunate: it was kind of made obsolete, I think, the year it was published, because it was published exactly at the point of the boom in deep learning.
Robert Wiblin: I see. So you’re saying it didn’t focus on that because at the time we didn’t realize that that was about to take off as a great method of doing AI.
Will MacAskill: Yeah.
Robert Wiblin: I see, and maybe just many of its arguments don’t apply very well to this new paradigm.
Will MacAskill: Exactly, yeah. That’s how it naturally seems to me, and then perhaps that’s wrong. Perhaps people are like, “No, actually it applies really well and this can be responded to”, but I feel like that case needs to be made.
Robert Wiblin: Yeah. So we already have an interview with Ben Garfinkel where he lays out his skepticism. Maybe you could present your own version of that in a couple of minutes.
Will MacAskill: Yeah, so I think there’s just a whole number of ways in which when we kind of take the same arguments and look at current progress, deep learning, I feel like it doesn’t fit that well. So in the classic Bostrom paradigm, you’ve got some super powerful agent and then you give it this command and it takes the command literally and goes out and that’s doom.
Will MacAskill: Whereas in the case of creating a smart ML system, you start off with some reward function and then you start off with a dumb agent and then you make that progressively smarter over time. So just as an analogy, you’ve got a child and then you can slowly see how it’s performing. And you get to like monitor it and see how–
Robert Wiblin: Cross the river by feeling the stones.
Will MacAskill: Yeah, exactly. And you can turn it off when it’s not doing the thing we want. We’ll start again. Similarly, the idea that it takes the natural language command very literally: again, I feel that doesn’t map on very well to current deep learning, where it’s like, yes, we can’t specify perhaps exactly what we want in this kind of precise way, but ML’s actually been quite good at picking up fuzzy concepts like “What is a cat?”. And it’s not perfect. Sometimes it says an avocado is a cat.
Robert Wiblin: But humans aren’t either though, so maybe it’s just like it’s going to have a similar ability to interpret to what humans can.
Will MacAskill: Exactly. And it would be a very weird world if we got to AGI, but haven’t solved the problem of adversarial examples, I think.
Robert Wiblin: So I suppose it sounds like you’re very sympathetic to say the work that Paul Christiano and OpenAI are doing, but you actually expect them to succeed. You’re like, “Yep, they’re going to fix these engineering issues and that’s great”.
Will MacAskill: Yeah. But so do most people at OpenAI, as I understand it, actually.
Robert Wiblin: They think this is just a solvable engineering challenge.
Will MacAskill: Yeah, absolutely. This is actually one of the things that’s happened as well with respect to kind of state of the arguments is that, I don’t know about most, but certainly very many people who are working on AI safety now do so for reasons that are quite different from the Bostrom-Yudkowsky arguments.
Will MacAskill: So Paul’s published on this and said he doesn’t think doom looks like a sudden explosion in a single AI system that takes over. Instead he thinks AIs gradually get more and more power, and they’re just somewhat misaligned with human interests. And so in the end you kind of get what you can measure. And so in his doom scenario, this is just kind of continuous with the problem of capitalism.
Robert Wiblin: Yeah.
Will MacAskill: I kind of agree. There’s this general AI, what we could call a generalized alignment problem, which is just–
Robert Wiblin: Society as a whole is not acting in the interests of its participants.
Will MacAskill: Yeah, exactly.
Robert Wiblin: This is just a more extreme version of that. Maybe not even more extreme.
Will MacAskill: Yeah, exactly. It’s unclear. Especially given that we’ve gotten better at measuring stuff over time and optimizing towards targets, and that’s been great. So Paul has a different take and he’s written it up a bit. It’s like a couple of blog posts. And perhaps they’re great arguments. Perhaps that’s a good reason for Paul to update. But again, this is a big claim. I think everyone would agree that if this is an existential risk, we want more than a couple of blog posts from a single person. And similarly MIRI as well, who’re now worried about the problem of inner optimizers: the problem that even if you set a reward function, the thing you get doesn’t optimize for that reward function. It doesn’t contain the reward function. It’s optimizing for its own set of goals, in the same way as evolution has optimized you, but it’s not like you’re consciously going around trying to maximize the number of kids you have.
Will MacAskill: But again, that’s quite a different take on the problem. And so firstly, it feels kind of strange that there’s been this shift in arguments. But then secondly, it’s certainly the case that, well, if people don’t really generally believe the Bostrom arguments (I think it’s split; I have no conception of how common adherence to the different arguments is), but certainly many of the most prominent people are no longer pushing the Bostrom arguments. Well then it’s like, well, why should I be having these big updates on the basis of something for which an in-depth public case hasn’t been made?
Robert Wiblin: I suppose you could just think that you’re in a position to understand his arguments and evaluate them. Though that’s probably a little bit too overconfident, given how hard it is to think about these things.
Will MacAskill: I mean, but then insofar as I have had kind of access to the inner workings and the arguments. Yeah. I’ve been like way less–
Robert Wiblin: Underwhelmed?
Will MacAskill: Yeah, I guess.
Robert Wiblin: I guess it sounds like you feel a little bit jaded after this. We had all of these arguments about this thing and now they’ve all gone. But now we have these new arguments for the same conclusion that are completely unrelated.
Will MacAskill: Yeah, and they’re not completely unrelated, but yeah, there’s something like–
Robert Wiblin: I was going to push back on that, cause when you have something that’s as transformative as machine intelligence, it seems there might be lots of different ways that people could imagine it changing the world, and some of those ways will be right and some will be wrong. But it’s not surprising that people are looking at this thing that intuitively seems like it could be a very big deal, and eventually we figure out how it’s going to be important.
Will MacAskill: But the base rate of existential risk is just very low. So I mean I agree, AI is, on the normal use of the term, a huge deal, and it could be a huge deal in lots of ways. But then there was one specific argument that I was placing a lot of weight on. If that argument fails–
Robert Wiblin: Then we need a new case, a new properly laid out case for how it’s going to be.
Will MacAskill: Otherwise it’s like, maybe it’s as important as electricity. That was huge. Or maybe as important as steel. That was so important. But like steel isn’t an existential risk.
Robert Wiblin: Yeah. What do you think are the odds that we don’t all die, but something goes wrong somehow with the application of AI or some other technology that causes us to lose the value of the future, because we make some big philosophical mistake or some big mistake in implementation?
Will MacAskill: Yeah, I think we’re almost certainly not going to do the best thing. The vast majority of my expectation about the future is that relative to the best possible future we do something close to zero. But that’s cause I think the best possible future’s probably some very narrow target. Like I think the future will be good, in the same way as today we’ve got $250 trillion of wealth. Think if we were really trying to make the world good and everyone agreed: just with the wealth we have, how much better could the world be? I don’t know, tens of times, hundreds of times, probably more. In the future, I think it’ll get more extreme. But then will it be the case that AI is that particular vector? I guess, yeah, somewhat plausible…
Robert Wiblin: But it doesn’t stand out.
Will MacAskill: It doesn’t stand out. Like if people were saying, “Well, it’s going to be as big as the battle between fascism and liberalism or something”, I’m kind of on board with that. But again, people wouldn’t naturally say that’s like existential risk in the same way.
Robert Wiblin: Okay. So bottom line is that AI stands out a bit less for you now as a particularly pivotal technology.
Will MacAskill: Yeah, it still seems very important, but I’m much less convinced by this one particular argument that would really make it stand out from everything.
Robert Wiblin: So what other technologies or other considerations or trends kind of then stand out as potentially more important in shaping the future?
Will MacAskill: Yeah, well even if you think AI is probably going to be a set of narrow AI systems rather than AGI, and even if you think the alignment or control problem is probably going to be solved in some form, the argument for a new growth mode resulting from AI is… my general attitude, as well, is that this stuff’s hard, we’re probably wrong, et cetera. But it’s pretty good with those caveats on board. And then when you look at the history of, well, what are the worst catastrophes ever? They fall into three main camps: pandemics, war and totalitarianism. Also, on totalitarianism, well, autocracy has been the default mode for almost everyone in history. And I get quite worried about that. So even if you don’t think that AI is going to take over, well, it still could be some individual. And if there is a new growth mode, I do think that very significantly increases the chance of lock-in technology.
Robert Wiblin: Yeah, I am on board with that and we’re going to have more I think about totalitarianism.
Will MacAskill: Okay, terrific. Yeah. I’m not really sure why we haven’t discussed this more.
Robert Wiblin: I think AI has displaced it, because we’ve been like, “Well, this is going to happen sooner and we’ll preempt potential political problems”.
Will MacAskill: But even if you just think, the argument I would often hear is like, “Well a totalitarian government will never last forever”. It doesn’t need to last forever. It just needs to last till whenever there’s a lock-in period.
Robert Wiblin: Well also, I think it can last forever. Or it can last a very long time. It just needs to last long enough to get to the point where you have like self-replicating… Like some technologies that can lock it in place.
Will MacAskill: Yeah, that’s exactly my thought. And so when you’ve got that, AI is one path, but genetic engineering… like, we can already do this. We could clone humans if we wanted. So let’s say AGI is impossible. I think you can get lock-in from cloning. So the three things that determine your personality and values are genetics, environment and randomness. Well, genetics I can guarantee to be the same via cloning. Environment, well, I just give intense moral education. Perhaps there are different forms of randomness; well then, I’ll just have a thousand clones. So if there was a dictator, they could have as successors the most loyal of their thousand clone children. That could persist certainly for much longer time periods. And obviously ending aging as well could allow the time periods to be much longer.
Robert Wiblin: Yeah, at the risk of interrupting your list here, this is kind of the argument I was making earlier about how we’re getting better at replicating values and copying ourselves. I talked about the machine case, but we’re doing this with humans as well.
Will MacAskill: Oh right, yeah.
Robert Wiblin: So this like seems to increase the likelihood that maybe this century is particularly pivotal, Will.
Will MacAskill: Well it does increase the likelihood.
Robert Wiblin: To 0.1%.
Will MacAskill: I said the outside view arguments move me to 0.1 to 1%.
Robert Wiblin: Cool, right. Sorry, carry on.
Will MacAskill: I’m not sure how much the inside view arguments bump me up. It’s still less than 10%.
Robert Wiblin: Yeah, and a different mechanism, setting aside cloning, is that we’re very close to being able to select for particular personality characteristics in the next generation. If you just have, say, the government mandate that we have to select for people who have particularly compliant personalities, or are anti-rebellious, or are very conformist, then if you had a full generational shift where we’ve got say one or two standard deviations more conformity, we might just never have enough people who are interested in overthrowing the system, perpetually, basically.
Will MacAskill: Interesting. Yeah. I hadn’t heard that particular argument. I also think we’ve been doing this already over time.
Robert Wiblin: Like yeah, domesticating ourselves is the argument.
Will MacAskill: We’ve been killing violent people.
Robert Wiblin: Yeah.
Will MacAskill: Not me. People in the past. Okay. But you’re asking kind of what does this mean? Well, yeah, two things. One is, insofar as I’m less convinced by the inside view arguments I’ve heard, it results in a kind of flatter distribution of concerns, I guess. So AI is there, but also genetic engineering, especially when I see totalitarianism as the kind of vector. Then I’m also just much more inclined to look at what things were really bad in the past, so what could happen again. So I’m very concerned by war, where, just looking at the track record, for a large proportion of history we’ve been at war between great powers. We have lived through 70 years of peace between great powers, no hot war at least. And that’s quite unusual historically. It’s not super unusual.
Will MacAskill: If you look at just the graph of the rate of death in war, there is no trend. We’ve had this period of 70 years. Maybe we’re in a new mode where war’s much less likely, but you cannot infer that at all from the data over time, rather than it just being very noisy because it’s driven by some very large events. In particular, some of the explanations you could give for why there hasn’t been a war for the last 70 years are very contingent. One could be, well, luck: we just happened not to go to war between the US and the USSR. Or the second: just because the US is so powerful. It’s just been a hegemon, so there’s no incentive, cause everyone would lose if they went to war with the US. I expect that to change in the coming century.
Will MacAskill: And that really worries me for two reasons. One is, insofar as I have this general view that actually people are really safety conscious, quite timid with respect to new technology and so on, that gets way less robust if it’s a war situation. One of the things I’ve found frustrating with the AI literature is that these stories of AI takeover are just kind of silly. It’s like, “Oh well, we’ll give it paperclips”, and everyone’s like, “Oh no, but that’s just a fable. That’s not what I really mean”, and I’m like, “Well, tell me what you really mean”. But when I try and imagine the strongest case where AI does take over, it’s like: okay, there’s this war between the US and China, and we have the capacity to have AGI, but we’ve regulated it and so on such that we don’t often use it, but China’s losing. And so it’s like, okay, I’m just going to give control of my army, which is itself automated, to the AI, because otherwise I’m totally going to lose. That’s the kind of worst case story I could tell, and it’s starting to not feel like a silly fable.
Robert Wiblin: I mean just to defend the AI people. I think the idea of integrating AI with the military and having that go badly has definitely been on the table. That’s something that people have discussed but maybe they haven’t mapped it out very well.
Will MacAskill: But yeah, again, I’m not wanting to claim that the arguments are bad. I’m claiming that the published arguments are bad.
Robert Wiblin: They haven’t been rigorously put.
Will MacAskill: Yeah, and like why is that? Why is it the case that all the arguments are telling me these fables?
Robert Wiblin: It’s a lot of work, Will. It’s not that nobody’s done it.
Will MacAskill: It’s just that relative to the amount of expenditure that we’re now putting into this area… I had no objection to this back in 2014 when it was like Bostrom, Yudkowsky and like a dozen people.
Robert Wiblin: We don’t need 10 people to decide whether five people should think about this.
Will MacAskill: Exactly. I’ve just come from this speaking tour on college campuses. The idea of AI risk is quite normal now. It’s crazy that this is the case. The amount of change that’s happened.
Robert Wiblin: My parents used to think that I was a bit nuts. Now they are like totally on board.
Will MacAskill: Wow, okay. That’s incredible.
Robert Wiblin: But I don’t think that’s unusual. I’m talking about back in 2012, they were like, “Yeah, this is pretty weird”. But now it’s just very mainstream.
Will MacAskill: Yeah. But I think that poses a higher bar for how much effort we should put into making the case.
Robert Wiblin: Yeah.
Will MacAskill: Okay. So the question was what things…yeah, look at the history… War, I’m particularly concerned about that. Also if you’re asking me about like really bad scenarios for the future, so not just neutral like extinction or something, but actively bad. Again, it definitely comes from war scenarios. Then I also just have more concern about growth of ideology over time.
Robert Wiblin: Heading in the wrong direction?
Will MacAskill: Yeah, just that there are some fundamentalist religious groups that just have more kids and have solved the lock-in of values better over time and perhaps there’s no new growth mode or anything, but that just slowly takes over. And then again with better lock-in technology, that lasts for a long time. Yeah, and generally I guess one of the themes is that I worry about lock-in of a particular moral view that one would find in the world today rather than catastrophe, which is again, there’s just not like a historical track record.
Robert Wiblin: Well that’s one way that it could happen very quietly in a way that’s not evidently a disaster. It’s just that you have some like dominant ideology, I guess it could be EA or like our views could just be terribly wrong and become very mainstream and also persist for a long time, but they’re just like really off base.
Will MacAskill: Yeah. I mean, it could have also happened already, like maybe Western liberalism is a travesty and we believe it because it’s conducive to economic growth.
Robert Wiblin: Yeah.
Will MacAskill: And in fact, I kind of see that as the default future.
Robert Wiblin: Where we get philosophy wrong in some respect and that means that things are way worse than they could have been.
Will MacAskill: Yeah. A book that I’d encourage people to read is ‘Foragers, Farmers, and Fossil Fuels’ by Ian Morris, who points out, and I’m just going to have to defer to him on this empirical claim, that almost all hunter-gatherer societies had the same social order, which was very egalitarian: small bands of people, quite permissive with respect to sexual relations, very high rates of violence. Then we moved to the agricultural era, and basically every society has the same social order: extremely hierarchical, with an Emperor or even God King, very impermissive attitudes to sex, considerably less violent, but still violent by today’s standards. And now we’re in this new form of society and suddenly we have this new set of moral views, and it’s like quite worrying.
Robert Wiblin: Is culture just an economic superstructure?
Will MacAskill: Yeah. I mean there is a counterargument to that when you look into the future, which is: if things do go very quickly, then cultural selection and evolution is rather slow. So perhaps again it’s just a super-fast growth mode change, and that just takes us to technological maturity. Well then it’s no longer a matter of cultural evolution. It’s just a roll of the dice of who gets that power.
Robert Wiblin: Yeah, so I guess inasmuch as we’re concerned about timescales of centuries, it seems like climate change might be more important, in that if we really trash the planet so that its carrying capacity goes down a lot over 200 or 300 years, then maybe that could actually have a bigger impact than it seems like it would if you think that things are going to change dramatically in the next century anyway.
Will MacAskill: Yeah, also it might just prolong the period of the time of perils, if you think the time of perils is a thing. And maybe climate change just makes things worse again: we’re already kind of stagnating, and it slows down growth even more. It’s plausible that could last centuries, even longer, before we then get to some faster growth mode, if there is one.
Robert Wiblin: That helps to offset it.
Will MacAskill: Yeah, and then that means like centuries more living with nuclear weapons and biological weapons and so on.
Robert Wiblin: Oh, and probably like people being more dastardly with one another cause like there’s not a lot of economic growth around.
Will MacAskill: Yeah, absolutely, and a greater time period for cultural evolution to select for those views that promote fertility. So the Quiverfull is a religious group in the US that is fundamentalist Christian and promotes just having as many kids as possible.
Robert Wiblin: Wow, it’s an evocative name.
Will MacAskill: Yeah, and the standard population projections are that the rate of atheism is going to go down over the coming century because atheists aren’t having kids.
Robert Wiblin: Is that even considering conversion or deconversion?
Will MacAskill: Yeah, I mean that was my understanding. I’m not an expert. It’s not considering heritability of religiosity, I think. Obviously these things are also sufficiently nonrobust that —
Robert Wiblin: But it’s an interesting thing that culture could evolve in all kinds of different directions, and one evident way is that it’s going to evolve in the direction of whatever ideology is correlated with people reproducing.
Will MacAskill: Yeah, and so the thing that’s stark and kind of worrying about this is that you might think that the aim as a longtermist is just to avoid any lock-in events. So you want to avoid AI lock-in, avoid extinction, avoid totalitarianism and then we’ll be okay, cause we’ll figure it out. But maybe that’s just another form of lock-in where you just get this period of cultural evolution over time. I mean it’s not lock-in, but another way of having a preordained outcome, which is those views that are most–
Robert Wiblin: Which I guess is not based on rational reflection, it’s based on evolutionary dynamics, biologically. Yeah. Fascinating. Okay, so that has been a lot of different considerations that we’ve thrown up that hopefully we’ll get to deal with over the next hundred episodes of the 80,000 Hours Podcast. I guess there’s too much here for us to really deal with. Setting aside all of those different kinds of problem areas that I guess look more important inasmuch as you think the current time is not so remarkable and AI doesn’t stand out so strongly.
Will MacAskill: Although on that I do want to say those are two different kinds of arguments. One is just, if you think AI doesn’t stand out so strongly, it’s natural that you’ve got a broader range of concern. With respect to the argument that maybe we’re not in a key time, sometimes people want to say, “Oh, therefore we should do some broader things”.
Robert Wiblin: Improve governance.
Will MacAskill: Improve governance. I don’t really see why that follows. The thing that follows I think is, “Oh, we should be trying to save for the really influential time”.
Robert Wiblin: And then potentially in the future spend that on governance improvement at the opportune moment.
Will MacAskill: Exactly, yeah. But if it was the case of this great broad existential risk reduction–
Robert Wiblin: Well that makes this a special time.
Will MacAskill: Yeah, exactly.
Robert Wiblin: Yeah. Interesting. Although I suppose you might think that there’s a reason to do that sooner because you benefit every year from improvements in governance from the past or it has some kind of exponential rate of improvement.
Will MacAskill: Yeah. Or if you just think that maybe no century is particularly special, you just get like a little bit of influence over the future every time and you want to just be chipping away at that.
Robert Wiblin: Yeah. Interesting. There’s only so much you can do each century and you just have to like put in the hours to do that and like you just have to do that every time. It doesn’t sound super plausible to me.
Will MacAskill: Well I think I find it more plausible than the idea that just like everything gets decided at one moment.
Implications for the effective altruism community [2:50:03]
Robert Wiblin: Okay, well you temporarily foiled my attempt to move on here. So setting aside all those new problem areas, I guess it should also potentially change quite a bit how we try to shape the effective altruism community. It reduces the sense of urgency that you have to grow really quickly, and I suppose it gives you a longer-term perspective on how we can make this thing flourish over a century or a millennium or something like that.
Will MacAskill: Yeah, I think that’s right. Definitely when I’m thinking about EA movement strategy and what it should be aiming to be, and advising CEA on that matter, the thing I really think is that we should be treating this as a shift from before. It’s not like a startup, where you’ve got some kind of growth metric and you go as fast as you can, which makes sense if you’re in competition with other things, where getting there a few months earlier determines whether you win. Instead, we are creating a product or culture that could be very influential for a very long time period. The other thing to say is that if you look through history at what things have had influence over time… supposing we are trying to have influence in a hundred years’ time or 200 years’ time, that’s very hard to do in general. But here’s something you can do, which is create a set of ideas that propagate over time. That has a good track record of having a long-run influence.
Will MacAskill: So yeah, what are some of the things it means? One is just getting the culture and the ideas exactly right. EA is kind of a natural kind, but it comes along with all sorts of connotations and commitments and so on. And so when we’re thinking about what sort of things it’s committed to, we should be judging that in part by the question of how that’s going to help it grow over the long term. So one thing I care about a lot is being friendly to other value systems, especially very influential value systems, rather than being combative towards them. Because we’re imagining a world where it’s like, “Oh well, now EA’s this really big thing”.
Robert Wiblin: You don’t want to be in this like fist fight all the time.
Will MacAskill: Yeah. I don’t want there to be this big battle between environmentalism and EA or other views, especially when it could go either way. There are elements of environmentalism which are extremely in line with what a typical EA would think, and then maybe there’s other elements that are less similar. So which do you choose to emphasize? Similarly, I wrote this on the AMA with climate change, where the question was, “What does EA think about climate change?”. Lots of things you could say on that. For some reason it’s been the case that people are like, “Oh, well it’s not as important as AI”. That’s an odd framing rather than, “Yes, you’ve had this amazing insight that future generations matter. We are taking these actions that are impacting negatively on future generations. This is something that could make the long-run future worse for a whole bunch of different reasons. Is it the very most important thing on the margin to be funding?”
Robert Wiblin: For one extra person at the beginning of their career.
Will MacAskill: Yeah, exactly. It’s like such a strange thing.
Robert Wiblin: Given that we’re so much more on board with a lot of the principles than almost anyone else.
Will MacAskill: Yeah, exactly. Such that, yeah, if someone tells me, “Hey, I’m going to go and work on climate change”, that’s amazing. Especially as there are loads of areas where you can make really significant contributions. And when it comes to individuals, the marginal analysis is way less applicable, I think.
Robert Wiblin: I guess you also don’t want EA to become synonymous in people’s minds with super parochial issues to do with 2019.
Will MacAskill: Absolutely.
Robert Wiblin: So the whole issue of how we relate to climate change is just so context-specific, about exactly what money is being spent on now and what is really neglected. Inasmuch as people think, “Oh, they’re the people who are kind of lukewarm on climate change”, that’s not the fundamental issue at all. That would have been different 30 years ago and it could well be different in 30 years’ time.
Will MacAskill: Yeah, and that’s a good point. I think this favors cause diversity in EA too, where when we’re thinking about longer time frames, it’s much more likely that EA will want to shift quite significantly. At one extreme, imagine if EA was just the AI thing; then it’s very unlikely to shift. I think if you were to design an EA community that was optimal from the perspective of cause diversity, it would look quite different than it does now. But we’ve already got some amount of cause diversity that, sure, has some amount of path dependence, but there’s an additional benefit of that which might not be so obvious, which is a greater ability to morph into focusing on the best things in the future.
Will MacAskill: And similarly there’s a culture which I think is very good of questioning if we’re focusing on the right things. Then also just caring a lot about other EA x-risks. I’ve done at least a bit of reading on previous social movements and things and why they fell apart. One thing that’s just fascinating is how much infighting there always is and how EA, at the moment, is remarkable in how little there is. I mean there was more before but now we’ve settled. But even when it felt like there was a lot before–
Robert Wiblin: I love EAs! Even the ones who’re wrong about everything. They’re so much fun to hang out with. I completely agree. I mean, if you look at political movements, they’re so fractious. At the extreme they’re all killing one another, but at the very least they kind of hate one another, and maybe that’s the median case.
Will MacAskill: Yeah, Niel Bowerman came from the climate change movement before, and there were big fights between people who were more extreme and less so.
Robert Wiblin: It’s extraordinary. I guess the equivalent for us is like someone’s kind of a bit snippy in a comment on Facebook.
Will MacAskill: So being able to preserve that is extremely important.
Robert Wiblin: Maybe it’s worth dwelling for a second on why this is. I think one thing is, I suppose, that this kind of founding population of EA is in general quite calm. People are very reflective, not very impulsive, and I think that’s good cause it leads to less conflict.
Will MacAskill: Yeah. I think we have a strong cultural norm of, if I’m like, “Oh I really think this”, you’d treat it as one perspective and that’s very unusual I think. Yeah, also then a culture of openness and intellectual curiosity also means that people don’t want to really stake their claim on some particular idea.
Robert Wiblin: Are we also just nicer somehow? Is there some filter that’s causing people to be more congenial in other ways? One thing is that in some ways we’re not very diverse, which maybe helps people to get along; if, for example, they’ve studied similar topics at university, they just have more in common, which helps to bond them, though that has a major cost elsewhere.
Will MacAskill: I mean I’m sure that helps.
Robert Wiblin: I guess the thing is if I think about other movements that are very fractious there, I think they’re not more diverse than us.
Will MacAskill: Yeah, I don’t know. The climate change movement is not particularly diverse I don’t think.
Robert Wiblin: And there’s other ways that you’d expect effective altruism to have way more infighting. Cause there’s so many people working on very different problems who should just be competing and gouging one another, trying to grab resources from these other causes they don’t care so much about.
Will MacAskill: Yeah. Here’s a pessimistic account which is that maybe it’s just been boom time. Like we have this wave of being extremely successful, especially in terms of raising funding, which is perhaps the thing that people will fight most about. Supposing that weren’t the case and it was actually quite limited. Perhaps then everyone starts getting more fighty.
Robert Wiblin: Yeah. It doesn’t quite ring true to me.
Will MacAskill: Okay.
Robert Wiblin: It just doesn’t feel like that’s the personality of the people. I guess I feel like if 80,000 Hours was finding it hard to raise funding, would we really start savaging other groups?
Will MacAskill: I think it’s helpful that we’re not activist. There’s a certain activist culture where in fact–
Robert Wiblin: You have to whip yourself into a lather to like be so angry about something.
Will MacAskill: There are more cases of this insofar as I’ve become more skeptical of some of the existential risk arguments. I definitely get stressed sometimes talking to people, which kind of makes sense when they’re like, “I think the world is going to end and you are getting in the way of this”. Like if I’m wrong–
Robert Wiblin: God, blood on your hands, Will.
Will MacAskill: Exactly. Yeah. And so I definitely have felt that quite consciously. That creates a tension. And so I think that is quite a natural way of going if you’re working on something that you think is incredibly important–
Robert Wiblin: And someone else is then shrugging their shoulders and saying, “Oh, why bother giving money to this?”.
Will MacAskill: Yeah, exactly. Or even encouraging people to do something else. And so I do think you require quite strong cultural norms and encouragement of that from individuals.
Robert Wiblin: It could also just be that people expect that if they make great arguments then they’re going to persuade people. So the best way to raise more money, say for your problem area, is to make a good case, and just shouting at people and trying to talk them down isn’t so useful, even from a purely selfish point of view.
Will MacAskill: Yeah. But then we need a set of norms such that that’s the case.
Robert Wiblin: Yeah, I feel we kind of have that.
Will MacAskill: Yeah. I think this just all feels like it could be quite fragile.
Robert Wiblin: Yeah. I suppose I’m often more keen for effective altruism to be involved in politics and perhaps to have a bit more of an activist edge, but I suppose this should give me pause: even if we get some big wins in the next decade, the cultural implications it could have could be pretty bad.
Will MacAskill: Yeah, for sure. Especially if it was aligned with a political party, that could be very bad. As is common in EA, people often look at the neoliberals as a very fascinating example. The fact that they were heavily advising Pinochet was potentially just strategically an extremely bad move, because it’s like, well, now they’re the people who are happy to advise dictators.
Robert Wiblin: Yeah. Just to be clear, we’re talking about neoliberalism in a very different way than the neoliberal podcasts that I went on, this new wave of neoliberalism. Well, I guess it’s like half different.
Will MacAskill: You could go on a digression about why they’ve chosen this name to refer to this different thing. Liberalism already means like four different things, and now you’ve added another variant.
Robert Wiblin: Yeah. It wasn’t my choice.
Will MacAskill: I know. I’m aware.
Robert Wiblin: Yeah. I suppose it’s a word that a lot of people use, so you can kind of grab it and seemingly have a lot of mindshare with people. And also, no one else is calling themselves neoliberal to mean the other thing, so it’s actually kind of easy to take over the word. But it does seem like it has some massive costs.
Will MacAskill: Yeah. Well, on a separate note, a different implication of taking a long-term view on EA is the importance of academia as well. It’s obviously something I’m close to. Academia is very slow, but it means it’s very long lasting. So my undergraduate students still read Peter Singer’s ‘Famine, Affluence, and Morality’, which was published 50 years ago now. They still read Mill’s ‘Utilitarianism’, which was published over a century and a half ago. So it’s very slow to change, but that also means it’s very long lasting in terms of–
Robert Wiblin: Very long lasting because it doesn’t make impulsive mistakes. It doesn’t do something really stupid.
Will MacAskill: Well I just think different social systems have different rates of change built into them. So companies grow very quickly but also die very quickly. Academia is just slow moving in terms of like the norms, what things get accepted and so on.
Robert Wiblin: I guess I’m wondering what’s the causal pathway there to surviving a long time. I guess there’s some intuitive sense in which those things are related, although I wonder whether it goes the other way: things that last a long time tend to then have a longer-term perspective, and because they have more history there’s a lot of things to read.
Will MacAskill: Oh, I was just meaning the benefits of influencing academia. If you only care about the next 10 years, let’s say. It’s very small.
Robert Wiblin: Right, yes.
Will MacAskill: Whereas if you care about the next two centuries, then maybe you could write a book. It’s relevant to you writing a book: do you want to get something that could hit the bestsellers list, or do you want something that could go on the course list and stay on the course list perhaps for a hundred years or something?
Robert Wiblin: Yeah, interesting. Okay, so your point was that academia changes very gradually, but I guess that means its changes are quite persistent.
Will MacAskill: Exactly, yeah.
Robert Wiblin: And I suppose if you want to have a short term impact, trying to change academia is a fool’s errand, but if you want to have impact over centuries, then influencing academia seems like a good idea.
Will MacAskill: Yeah, exactly.
Robert Wiblin: Yeah. Are there any other implications that this has for the strategy for effective altruism as a kind of social group that you haven’t raised already?
Will MacAskill: I think maybe higher priority on research. I think again, there was a kind of feeling maybe a couple of years ago, which perhaps no one would have ever endorsed, but again, perhaps was a feeling in the air, which was like, “Oh well we’ve now figured stuff out”.
Robert Wiblin: Did anyone think that? Okay, wow. I guess I didn’t think that, but maybe I thought we’d figured out the bigger picture and then it was now filling in the specifics.
Will MacAskill: I definitely felt like there was a gear shift from an atmosphere of we’re really trying to do research and understand, like we’re super uncertain and what should we focus on, to instead, “Okay, we need to go really hard on, in particular, short timelines for AI”. This sometimes gets described as the great short timeline scare of 2017.
Robert Wiblin: I haven’t heard that before, yeah. Is 80,000 Hours… I guess we’ve gotten into AI but I suppose maybe we didn’t get into it quite as aggressively and now we’re not getting out of it quite as aggressively.
Will MacAskill: I mean there was just definitely… 2017 there was a period where some people were making quite strong claims. Even saying like five year timelines, and a lot of people suddenly kind of woke up and got quite–
Robert Wiblin: I feel like inasmuch as that’s wrong, we’re kind of saved just by the fact that it’s hard to shift gears a lot. So plans tend to have a bit of like–
Will MacAskill: I think that’s often good. Yeah, I mean I think those five year predictions aren’t looking so great two years on.
Robert Wiblin: Egg on our face maybe in three years time.
Will MacAskill: Exactly, maybe next year it all happens. Yeah. If, again, you have kind of my view on the current state of play, I think I would want more investment and investigation into… So take great power war or something. I’m like, great power war is really important; we should be concerned about it. People normally say, “Oh, but what would we do?”. And I’m kind of like, I don’t know. I mean, policy around hypersonic missiles is one thing, but really I don’t know. We should be looking into it. And then people are like, “Well, I just don’t really know”, and so don’t feel excited about it. But I think that’s evidence of why diminishing marginal returns is not exactly correct. It’s actually an S-curve. If there’d never been any investment and discussion about AI, and now suddenly we’re like, “Oh my God, AI’s this big thing”, people wouldn’t know what on Earth to do about it.
Will MacAskill: So there’s an initial period where you’re getting increasing returns, where you’re just actually figuring out where you can contribute. And that’s interesting, because if you get that increasing returns dynamic, it means that you don’t want to really spread out, even if it’s the case that–
Robert Wiblin: It’s a reason to group a little bit more.
Will MacAskill: Exactly. Yeah. And so there are a couple of reasons which kind of favor AI work over these other things that maybe I think are just as important in the grand scheme of things. One is that we’ve already sunk the cost of building the infrastructure to have an impact there. And then secondly, when you combine that with the fact that, just entirely objectively, it’s boom time in AI. So if there’s any time that we’re going to focus on it, it’s when there are vast increases in inputs. And so perhaps my conclusion is that maybe I’m just as worried about war or genetic enhancement or something, but while we’ve made the bet, we should follow through with it. But overall I still actually would be pretty pro people doing some significant research into other potential top causes and then figuring out what the next thing we focus quite heavily on should be.
Robert Wiblin: I guess especially people who haven’t already committed to working on some other area, if they’re still very flexible. For example, maybe you should go and think about great power conflict if you’re still an undergraduate student.
Will MacAskill: Yeah, for sure, and then especially different causes. One issue that we’ve found is that we’re talking so much about biorisk and AI risk, and they’re just quite weird, small causes that can’t necessarily absorb large numbers of people, perhaps who don’t have… Like I couldn’t contribute to biorisk work, nor do I have a machine learning background, and so on. Whereas some other causes, like climate change and great power war, potentially can absorb much larger numbers of people, and that could be a strong reason for looking into them more too.
Culture of the effective altruism community [3:06:28]
Robert Wiblin: I guess in terms of the culture that we build of effective altruism being really important, there’s this question of whether we should be really nice to one another, preventing people from burning out and encouraging more people to join, cause it’s not this incredibly unpleasant, argumentative landscape. Do you think that we’re nice enough as it is? Cause there are considerations on the other side: sometimes to really persuade people it can be helpful to be very forceful and belligerent, and some of the most influential individuals in history seemed like they were not always the nicest people to work with. So yeah, where do you stand on that kind of culture of EA?
Will MacAskill: Yeah, to distinguish between the two, there’s kind of intellectual niceness and then kind of activist niceness; I’m pro niceness in both cases. On the intellectual side, I think EA can just be quite a stressful place. So I made this commitment because I wanted to start actually publishing some stuff on the forum. I think after my first post, or my first real post, which I think was age-weighted voting, I had an anxiety dream every evening. Like every night. Where I would wake up, and my dreams would be the most literal anxiety dreams you could imagine, which are people talking and being like, “Yeah, we lost all the respect for you after you wrote that post”.
Robert Wiblin: That’s incredible, wow!
Will MacAskill: Yeah and I’d wake up and that’s for a whole week.
Robert Wiblin: That is so dark.
Will MacAskill: I know. And if this is like how I feel and similarly–
Robert Wiblin: How does it feel as a new member of this forum if you write something that has a mistake in it.
Will MacAskill: Exactly. Yeah. And then certainly in the stuff where I’m being more skeptical of AI or existential risk. There are just people who are smarter than me and better informed than me, and it’s very stressful to disagree with them. And then on the forum you’ve no longer got the benefits of being able to see someone’s body language and so on, which obviously is often kind of softening. And then also the upvoting and downvoting. It’s like, “Well I think… BOOO!” Having conversations with people booing and cheering.
Robert Wiblin: Yeah, what is this like, the Colosseum? It’s like — gonna release the lions on you?
Will MacAskill: Yeah, exactly. So that made me quite worried: if I feel like that, how does a wider portfolio of people feel? And then I’ve also experienced this in philosophy, where the difference between the typical culture in analytic philosophy and, say, at GPI is just so stark. In typical analytic philosophy, it’s the put-down and–
Robert Wiblin: Crush a butterfly on a wheel.
Will MacAskill: Exactly. Can you define a concept? If you can’t give necessary and sufficient conditions for what a concept is, you’ll get kind of sneered at. But GPI is just extremely cooperative, yet extremely frank, extremely honest. Hilary will say to me, “You know, I think this is a bad paper”, or just, “I think this idea does not make any sense”. Or just the most blunt thing, and I just feel empowered by it, cause I’m like, “Wow, that helped. Thank you for that feedback that’s just reflecting reality and trying to help me”.
Robert Wiblin: Not trying to put you down so that Hilary can feel really big.
Will MacAskill: Exactly. Yeah. It’s all coming from a place of love, and that’s just super productive. That cooperative yet honest environment is just so powerful for making intellectual progress. And the challenge when you’re interacting, especially online, but even in person, is that it’s just really hard to build that culture. But it’s super important, because it’s kind of like a public good. Any individual can look smarter by being a bit douchier, so it’s easy for the culture to break down, but I think we want to try and enforce it very hard. And then online it gets doubly hard because you’re able to convey so much less nuance. And so that does mean I think we should just try really hard to be as nice as possible.
Will MacAskill: Brian Tomasik is probably the extreme paradigm.
Robert Wiblin: Oh yeah, incredible! Absolute legend. Just a unit of niceness.
Will MacAskill: I feel like people could just write this scathing critique, like a hit piece and there would just be like a single ‘like’ that would be Brian Tomasik.
Robert Wiblin: “Thanks for this valuable information that you have gone out of your way to provide to me to improve my views”.
Will MacAskill: But other people demonstrate it too, Jeff Kaufman and so on. So yes, I’m really pro that. The key thing is I just don’t think there’s anything lost here. I think people worry that, “Oh well if we’re too nice then we won’t be a real community making intellectual progress”. And I just don’t think there’s a trade-off here. I think there’s a trade-off where it costs you a bit of time, but I think that’s easily worth it.
Robert Wiblin: I’m not even sure that’s always true. Sometimes just like wording something in such a way that it’s incredibly cutting is a bunch of extra work.
Will MacAskill: Now I know how you spend your time, Rob.
Robert Wiblin: It’s how I used to spend my time, Will.
Will MacAskill: I don’t think I’ve actually had a period in my life where I’ve tried to do that.
Robert Wiblin: Oh, you’ve never replied to someone and thought, “Oh, how can I frame this in a way that looks particularly good?”.
Will MacAskill: I do admit that one of my favorite essay forms is the scathing review of a book.
Robert Wiblin: It’s such a guilty pleasure for me.
Will MacAskill: And at some point in my life I want to do that.
Robert Wiblin: I used to just stay awake for hours at night reading the most scathing film reviews; there was an author who just hated a lot of films. But that’s a dark side of my personality, I guess.
Will MacAskill: If anyone wants, one of my favorite essays is a review of Colin McGinn’s book, ‘The Meaning of Disgust’ by Professor Nina Strohminger and it’s so good.
Robert Wiblin: We’ll stick up a link for you, listeners.
Will MacAskill: Oh, and Geoffrey Pullum’s “50 Years of Stupid Grammar Advice”.
Robert Wiblin: Oh, that’s about Strunk and White? Yeah, it’s excellent. You can see inside Will MacAskill’s dark heart everyone.
Will MacAskill: Yeah, absolutely, that’s my guilty pleasure. But we shouldn’t do that.
Robert Wiblin: We shouldn’t be like that.
Will MacAskill: That’s not the way for EA to go.
Robert Wiblin: Yeah. So on the EA forum, I mean, I am incredibly jaded and thick-skinned, I think, just from having been in an online knife fight all the time for 12 years, and when people are rude online I’m just like, “Whatever”, and then I block them and move on. So the EA forum, I think, in a technical sense, is a place where people are quite polite. You certainly wouldn’t get upvoted for just being outright rude and saying that someone sucks or something like that, in a way that you might get really low-quality comments on Facebook. But then in terms of intellectual arguments, very often people just correct you very bluntly, and there’s no sense of, “Thanks for writing this post, here’s a way that it could be better”. Like we’re working together towards a common goal; that’s something that’s sometimes missing. It just feels like being corrected is such a status loss, and so, for example, I don’t really comment; I don’t really write that much on the EA forum, and I wouldn’t want to, in part for this reason, which I think is not ideal.
Will MacAskill: Yeah, I think that’s the same with me. Like I had to set up a commitment mechanism for me to be posting at all. Even though I had stuff, I was like “Well, I want to be getting it out”. And then even with that commitment mechanism, I didn’t produce as much as I was planning. And I think it is that like, even though I do think we’re way better than the normal online community, which is a very low bar (certainly a normal web forum), it’s still just like this morass and I do think you need to go the extra mile in terms of just being really supportive.
Robert Wiblin: And appreciating the contributions that people make. Even if they write a terrible post, they put time into like try to provide information voluntarily to you for no money and you chose to read it to like try to learn something.
Will MacAskill: Exactly, and everyone starts off terribly. Listen to the early Beatles. It’s mad. Read GiveWell’s early posts. Think about what Holden and Elie became and then read their early posts; they’re really bad. We should be encouraging and supportive. The other side of being nice is then with respect to level of commitment and so on, where I just think, again, this has been pretty notable from the kind of anonymous advice that 80K has been getting. There are a lot of different ideas in there, but there’s definitely a theme of anti-burnout stuff, and I feel like we’re a group that’s extremely selected for people who are scrupulous and worried about not doing enough and so on. And so I don’t think on the margin that we need to be pushing more on that, and instead pushing–
Robert Wiblin: Towards sustainable levels and people having good lives, all things considered.
Will MacAskill: Yeah, and it’s actually possible to achieve that. I feel like over the course of the last, let’s say, three years, I have actually gotten to a state where I’m happy to ask the question, “What is a year, for example, that I could not just eke through for the next 40 years, but happily have for the next 40 years?”. Or to think, “Well, 10% of the weeks this year are going to be for me, and I don’t have to think about this in instrumental terms”. Actually, I still even feel nervous saying that, but I’m definitely getting more to the stage where it’s like, yeah, I took six weeks of holiday over the last year. I didn’t work. It was great. Really good.
Robert Wiblin: Yeah. I love holidays.
Will MacAskill: Yeah, and that was still just a small portion. I mean, it’s definitely reassuring. It’s actually quite unclear to me whether that’s increasing or decreasing my overall productivity. But I feel like I’ve at least made a lot of progress in getting to the stage where I do feel okay doing something without any kind of instrumental aim for the greater good. Instead it’s like, “Well, this is what I want to do”. So I think we can get to that stage.
Robert Wiblin: Yeah, you’re a much more scrupulous person than me. I guess actually, I used to feel guilty about this 10 years ago and I think just over time it’s like become much less salient to me. I think also starting to take antidepressants, I just don’t ruminate or feel guilty nearly as much as I used to.
Will MacAskill: Yeah, interesting.
Robert Wiblin: Yeah, it’s interesting. Sometimes people when they tell me that they’ve taken holidays or something, I get the sense that they feel ashamed about it. They feel like I’m going to be judging them.
Will MacAskill: Definitely some people feel like that.
Robert Wiblin: Yeah. I’m just like, “Hope you had a great time on your holidays”. Like I’m a little bit jealous.
Will MacAskill: Actually, yeah, there’s also a wider thing I really want to emphasize, which is that I think a lot of people feel guilty about not achieving enough or doing enough. Again, I feel like this. There’s a big part of my brain that’s dedicated to saying, “Well, you’re not as productive as Holden. You’re not as smart as Hilary. You’re not as creative as Toby”, and we really need to try and avoid that kind of comparative thought, especially if, as is happening, more and more we want people to pursue career paths that are high risk in the sense that, “Well, you’re going to pursue this policy path or this research path, and probably it’s not going to do anything, but there’s some chance it has this really big contribution”. That’s hard if instead you’re in the mindset of, “Well, I really need to be proving myself”.
Robert Wiblin: Yeah, it’s going to make people really risk averse, worrying that people won’t respect them if their thing doesn’t pan out. The ideal case for me is one where, like, 90% of careers don’t pan out. People shouldn’t feel bad after the fact; that’s what I wanted them to do in the first place.
Will MacAskill: Yeah. And I think that is a bit of a cultural innovation that we’re going to have to try and grapple with, because in the past we’ve almost pushed against the idea of “following your passion”, whereas that is at least a guard against this. You can pursue the thing as, “Oh yeah, I just love being a researcher or something”. You actually do want that. That allows you to pursue these careers that are much higher risk.
Robert Wiblin: Yeah. I get the sense that people think that other people are judging them morally much more than they actually are. I just don’t spend time thinking about whether other people are meeting their moral duties. Do they have any idea how much time I waste and just spend on my own fun things? There aren’t enough hours in the day to be worrying about other people’s ethics, and to be honest, it’s just not a thing. But everyone is so insecure or so nervous.
Will MacAskill: Yeah, no, absolutely. I mean, I imagine if I treated someone else the way I treat myself in my brain, I’d be a terrible person. That’d be awful! If someone came to me feeling stressed or anxious or something, I’d be compassionate to that person, and often that part of my brain just has such a double standard. Like I would never judge someone else for not being smart enough or not being as productive as the most productive person I know. I mean it’d be insane for me to do that.
Robert Wiblin: There’s this whole other line of conversation about people judging themselves, people looking at the most productive or smartest person they can find and then feeling bad that they’re not the smartest person in the world. That just commits 99.9-plus percent of the world to feeling bad about themselves. How the hell does that help anyone, or make any sense philosophically or otherwise? Just feel fine with whatever level of ability you were born with and try to make the most of it. Feeling guilty about characteristics you were born with is like saying, “I was born not tall enough, so now I should feel ashamed”. It’s crazy.
Will MacAskill: It’s crazy. So one thing I’ve found on that particular issue of comparison, which is definitely something my brain does a lot: I don’t think I’ve ever really succeeded at “Well, you should just be happy wherever you are”. The thing I have found helps, and it kind of just accepts the fact that this part of my brain is petty, is that rather than looking at each person and identifying a trait that they’re better than me at, I’ll try and figure out what I’m better than them at. Like, “Well, I’m funnier than this person”, or “I’m taller”. I don’t know, I don’t actually use those examples, but there’s always going to be some way in which you’re superior to this other person, and that satisfies that petty little part of your brain that really is comparing you to everyone all the time.
Robert Wiblin: Yeah, it’s interesting. In terms of getting people to get along, cooperate, not fight, and have a good time, people talk about all of these formal mechanisms like, “Oh, we can have agreements between groups and all these cultural norms”. Sometimes I just want to suggest: how about we use the inbuilt way that humans have always learned to get along with one another, which is hanging out and actually being friends and enjoying one another’s company?
Robert Wiblin: There are people out there doing projects that I either think are useless or actively harmful, but I’m just friends with them because I love their company. I want to hang out with them because they’re nice people, and the fact that I don’t think their work is so useful is neither here nor there. And I think we can make a lot of use of these inbuilt, evolved mechanisms for having people build teams.
Will MacAskill: Yeah, I think it’s notable that back in the 2012/2013 era, when there was more tension between different groups in EA, it was so geographically split. GiveWell were in New York, Giving What We Can were in Oxford, and that meant we didn’t talk at all. That meant there were tons of misconceptions about what the other people thought. And similarly for other groups. Whereas one thing that has been notable is that now in EA there’s a lot of movement. People go around a lot.
Robert Wiblin: Between the different hubs.
Will MacAskill: Yeah, exactly. I think you’re right. That helps a lot. It’s very hard to feel very angry at someone when you–
Robert Wiblin: Just enjoy having a beer with them. Yeah. There’s this funny phenomenon where if people don’t talk for a while, like months or years where they just happen not to interact, it feels like their relationship worsens. All of these misunderstandings build up where they’re like, “Why the hell are they doing that? That doesn’t make any sense to me”, and they get kind of annoyed with one another. It seems you just have to keep talking to one another, unfortunately investing some time in communicating in order to prevent those frictions from building up. I guess that’s especially the case if you’re working in a related area. I think the mechanism by which this happens is that you see what other people are doing but you never hear the real reason, so it seems like they’re making all of these mistakes, because you don’t understand the motivation for them, and you just gradually respect them less. Of course this is irrational, and you should be able to adjust for it, but I think it’s not so easy to in practice.
Will MacAskill: Yeah, I think that’s probably right.
Robert Wiblin: Alright. We should probably wrap up because you’re looking extremely tired and this has been a bit of a Bataan Death March. We’re very much at the end of the day here.
Will MacAskill: I was a bit tired when I started this conversation.
Robert Wiblin: Maybe we’re getting very authentic Will because you’re losing self-control due to tiredness.
Will MacAskill: Yeah, I’m worried. You don’t want to see that.
Robert Wiblin: Yeah. I guess, final question. Your views have been evolving a whole lot over the last couple of years. You’ve been developing these dangerous heretical views that are different from my own, Will. We used to agree on everything.
Will MacAskill: I know, it’s crazy, isn’t it?
Robert Wiblin: But no longer. Are there any books you can recommend that you think might fill in gaps in understanding how your opinions have shifted over the last few years?
Will MacAskill: Yeah, absolutely. There are a few I’d recommend. One, partly just because it’s such a great book, among the best popular nonfiction I’ve read, is ‘The Secret of Our Success’ by Joseph Henrich, which is about cultural evolution. It argues that it’s because of cultural transmission and learning that humans are so powerful, and that our large brain size is more a consequence of that than the other way around. It’s full of great stories of individual humans with big brains completely failing to achieve any of their aims because they don’t understand the relevant cultural context, like people exploring the New World and just dying out because they had no idea how to cultivate the local crops. So that’s one.
Will MacAskill: ‘Foragers, Farmers, and Fossil Fuels’ by Ian Morris, which I mentioned earlier. Again, it’s a cultural evolution thing. And ‘The Rise and Fall of American Growth’ by Robert Gordon, a very powerful account of just how much technological change there was in the period from, let’s say, 1870 to 1950, and how technological change since 1970 has looked pretty slow-paced by comparison, with some reasons for thinking that economic growth might continue to be relatively slow in the coming years. That’s also been quite influential.
Robert Wiblin: Well, this has been maybe a more meandering conversation than I initially anticipated, but I think we’ve set an agenda for a whole lot of other topics that we need to discuss a lot more in the coming years.
Will MacAskill: Absolutely.
Robert Wiblin: I’m sure this will not be the last time that you come on the show, so we have plenty more to talk about.
Will MacAskill: Probably not, but thanks for having me on Rob.
Robert Wiblin: My guest today has been Will MacAskill. Thanks for coming on the show, Will.
Will MacAskill: Thank you.
Rob’s outro [3:25:25]
Alright, just a reminder to go fill out the expression of interest form for people who want to build links to the Global Priorities Institute at https://globalprioritiesinstitute.org/opportunities/ .
And as I mentioned at the start of the show, you can find a broader list of problems that we haven’t discussed much on the show, but which may be as pressing as our highest-priority problem areas on our website at 80000hours.org/problem-profiles/
The 80,000 Hours Podcast is produced by Keiran Harris. Audio mastering by Ben Cordell, and transcriptions by Zakee Ulhaq.
Thanks for joining, talk to you in a week or two.