#42 – Amanda Askell on tackling the ethics of infinity, being clueless about the effects of our actions, and having moral empathy for intellectual adversaries

Consider two familiar moments at a family reunion.

Our host, Uncle Bill, is taking pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly think of this as an annoying picky preference. But if it were viewed as a moral position rather than a personal preference – as it might be if Becky were instead avoiding meat on religious grounds – it would usually receive a very different reaction.

An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table: the family mostly think he has no business trying to foist his regressive preferences on other people’s personal lives. But if his comment were considered not as a matter of personal taste but as a moral position – that Bill genuinely believes he’s opposing mass murder – it might start a serious conversation.

Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. Across the political spectrum, we’re unable to get inside the mindset of people who express views we disagree with, and to see the issue from their point of view.

A common cause of conflict, as above, is confusion between personal preferences and moral positions. Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.

One seemingly promising path to progress involves contraception. A lot of people who are anti-abortion are also anti-contraception. But they’ll usually think that abortion is much worse than contraception – so why can’t we compromise and agree to make much more contraception available?

According to Amanda, a charitable explanation is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions.

So instead of arguing about abortion and contraception, we could discuss the underlying principle that one should advocate for the best possible world, rather than the best probable world. Successfully break down such ethical beliefs, absent political toxicity, and it might be possible to actually figure out why we disagree and perhaps even converge on agreement.

Today’s episode blends such practical topics with cutting-edge philosophy. We cover:

  • The problem of ‘moral cluelessness’ – our inability to predict the consequences of our actions – and how we might work around it
  • Amanda’s biggest criticisms of social justice activists, and of critics of social justice activists
  • Is there an ethical difference between prison and corporal punishment? Are both or neither justified?
  • How to resolve ‘infinitarian paralysis’ – the inability to make decisions when infinities get involved
  • What’s effective altruism doing wrong?
  • How should we think about jargon? Are a lot of people who don’t communicate clearly just trying to scam us?
  • How can people be more successful while they remain within the cocoon of school and university?
  • How did Amanda find her philosophy PhD, and how will she decide what to do now?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Highlights

…I often think that we should have norms where if you don’t understand people relatively quickly, you’re not required to continue to engage. It’s the job of communicators to clearly tell you what they mean. And if they feel like it’s your job to-

Robert Wiblin: They impose such large demands on other people.

Amanda Askell: Yeah. … if you communicate in a way that’s ambiguous or that uses a lot of jargon, what you do is you force people to spend a lot of time thinking about what you might mean. If they’re a smart and conscientious reader, they’re going to be charitable and they’re going to attribute the most generous interpretation to you.

And this is actually really bad because it can mean that … ambiguous communication can actually be really attractive to people who are excited about generating interpretations of texts. And so you can end up having these really perverse incentives to not be clear. …

… there are norms in philosophy. They’re not always followed, but one thing that I always liked about the discipline is that you’re told to always just basically state the thing that you mean to state as clearly as possible. And I think that’s a norm that I live by. And I also think that people appreciate it when reading.

Robert Wiblin: Yeah, this is getting close to a hobby horse of mine. I’m quite an extremist on this communication issue. When I notice people who I think are being vague or obscurantist – that they’re not communicating as clearly as they could – my baseline assumption is that they’re pulling a scam. They’re pulling the scam where they’re expecting other people to do the work for them and they’re trying to cover up weaknesses in what they’re saying by not being clear.

Maybe that’s too cynical. Maybe that’s too harsh an interpretation. We were saying we should be charitable to other people, but honestly, very often my experience has been that even after looking into it more, that has remained my conclusion – especially with people who can’t express things clearly but claim that they have some extremely clear idea of what they’re trying to say. I feel that they’re just pulling a con.

I think the reason why these questions are important is because they demonstrate inconsistencies with fundamental ethical principles. And those inconsistencies arise and generate problems even if you’re merely uncertain about whether the world is like this. And the fact that the world could in fact be like this means that I think we should find these conflicts between fundamental ethical axioms quite troubling. Because you’re going to have to give up one of those axioms, and that’s going to have ramifications for your ethical theory, presumably also in finite cases. For example, if you rejected the Pareto principle, that could have a huge effect on which ethical principles you think are true in the finite case.

But, I do have sympathy for the concern. I don’t think that this question is an urgent one, for example, and so I don’t think that people should necessarily be pouring all of their time into it. I think it could be important because I think that ethics is important and this generates really important problems for ethics. But I don’t necessarily think it’s urgent.

And so I think that one thing that people might be inclined to say is, “Oh this is just so abstract and just doesn’t matter.” I tend to think it does matter, but the thing maybe that you’re picking up on is it’s possibly not urgent that we solve this problem. And I think that’s probably correct.

If you think that there are really important unresolved problems, then things that give you the space to research them at some point in the future can be more important.

These issues might not be urgent, but at some point it would be really nice to work through and resolve all of them, and so you want to make sure that you leave space for that and don’t commit to one theory being true in this case. And I think that the important lessons of impossibility theorems in ethics are mainly that ethics is hard, and that you shouldn’t act as if one ethical theory or principle or set of principles is definitely true, because there are a lot of inconsistencies between really plausible ones. And so I think that’s a more general principle that one should live by, and maybe these impossibility results just kind of strengthen that.

I’ve talked a little bit about the moral value of information in the past, and I think the main thing I concluded from it was that it’s very easy to take this kind of evidence-based mindset when it comes to doing the most good. We say, “Let’s just take these interventions for which we have the most evidence about the nature of their impact, and let’s just invest in those.” Or you can take a more expectations-based kind of approach, where you say, “Well, actually, what we should do is run some experiments and try out various things and see if they work, because we just don’t have a huge amount of information in this domain.”

And if you take that kind of attitude, you can end up investing in things a bit more experimentally, and I think there’s potentially a better case to be made for that than people have appreciated. So one consequence of this might just be, “Hey, the ethical value of information is actually higher than we thought, and maybe we should just be trying to find new ways of gaining a bunch of new information about how we can do good.”

We often seem to betray a kind of complete lack of what I call moral empathy, where moral empathy is trying to get inside the mindset of someone who expresses views that we disagree with and see that from their point of view, what they’re talking about is a moral issue and not merely a preference. The first example is vegetarianism, where you’ll sometimes see people basically get very annoyed, say, with their vegetarian family member because the person doesn’t want to eat meat at a family gathering or something like that. I think the example I give is, this makes sense if you just think of vegetarianism as a preference.

It’s just like, “Oh, they’re being awkward. They just have this random preference that they want me to try and accommodate.” It’s much less acceptable if you think of it as a moral view. You see this where people are a bit more respectful of religious views. So if someone eats halal, I think that it would be seen as unacceptable to … people wouldn’t have the same attitude of, “Oh, how annoying and how terrible of them.”

…I find people in conversation much happier and much more willing to discuss things with you if you show that you actually have cared enough to go away and research their worldview. You might be like, “Look, I looked into your worldview and I don’t agree with it, but I’ll demonstrate to you that I understand it.” It just makes for a much more friendly discussion, basically, because it shows that you’re not saying, “I don’t even need to look at the things that you’ve been raised with or understood or researched. I just know better without even looking at them.”


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.