#17 – Will MacAskill fears our descendants will probably see us as moral monsters. What should we do about that?

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that illegitimate children should receive fewer legal protections, and that there was a ranking of the moral worth of different races.

Throughout history we’ve consistently believed, as common sense, things that are truly horrifying by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:

  • How would we go about a ‘long reflection’ to fix our moral errors?
  • Will’s forthcoming book on how you should reason and act if you don’t know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?
  • If we basically solve existential risks, what does humanity do next?
  • What are some of Will’s most unusual philosophical positions?
  • What are the best arguments for and against utilitarianism?
  • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
  • What are some of the biases we should be aware of within academia?
  • What are some of the downsides of becoming a professor?
  • What are the merits of becoming a philosopher?
  • How does the media image of EA differ from the actual goals of the community?
  • What kinds of things would you like to see the EA community do differently?
  • How much should we explore potentially controversial ideas?
  • How focused should we be on diversity?
  • What are the best arguments against effective altruism?

Keiran Harris helped produce today’s episode.

Highlights

We make decisions under empirical uncertainty all the time. And there’s been decades of research on how you ought to make those decisions. The standard view is to use expected utility reasoning or expected value reasoning, where you look at the probability of different outcomes and the value that would obtain given those outcomes, all depending on which action you choose, then you take the sum product and choose the action with the highest expected value. That sounds all kind of abstract and mathematical, but the core idea is very simple. Suppose I give you a beer, and you think it’s 99% likely that beer is going to be delicious and give you a little bit of happiness. But there’s a 1 in 100 chance that it will kill you because I’ve poisoned it. Then it would seem like it’s irrational for you to drink the beer. Because even though there’s a 99% chance of a slightly good outcome, there’s a 1 in 100 chance of an extremely bad outcome. In fact, that outcome’s 100 times worse than the pleasure of the beer is good. At least.

In which case the action with greater expected value is to not drink the beer. We think about this under empirical uncertainty all the time. We look at both the probability of different outcomes and how good or bad those outcomes would be. But then when you look at people’s moral reasoning, it seems like very often people reason in a very different way.
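To spell out the arithmetic in the beer example, here is a minimal worked version using the numbers from the quote, and assuming (purely for illustration) that the pleasure of the beer is worth +1 unit, so that death is at least 100 times worse at −100 units:

$$
\mathbb{E}[\text{drink}] = 0.99 \times (+1) + 0.01 \times (-100) = 0.99 - 1.00 = -0.01,
\qquad
\mathbb{E}[\text{don't drink}] = 0.
$$

Since −0.01 is less than 0, not drinking has the higher expected value, which is the conclusion drawn in the quote above.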

When you look at scientific theories, you decide whether they’re good or not [largely] by the predictions they made. We’ve got a much smaller sample size, but you can do it to some extent with moral theories as well. For example, we can look at the predictions, the bold claims that were going against common sense at the time, that Bentham and Mill made, and compare them to the predictions, the bold moral claims, that Kant made.

When you look at Bentham and Mill, they were extremely progressive. They campaigned and argued for women’s right to vote and the importance of women getting a good education. They had very liberal attitudes towards sexuality. In fact, some of Bentham’s writings on the topic were so controversial that they weren’t even published until 200 years later.

Different people have different sets of values. They might have very different views of what an optimal future looks like. What we really want, ideally, is a convergent goal between different sorts of values, so that we can all say, “Look, this is the thing that we’re all getting behind that we’re trying to ensure that humanity…” Kind of like this is the purpose of civilization. The issue, if you think about the purpose of civilization, is that there’s just so much disagreement. But maybe there’s something we can aim for that all sorts of different value systems will agree is good. Then, that means we can really get coordination in aiming for that.

I think there is an answer. I call it the long reflection: you get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and secondly tens of thousands of years to not really do anything with respect to moving to the stars or really trying to actually build civilization in one particular way, but instead just to engage in this research project of what actually is of value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 years is actually absolutely nothing.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].
